An International Symposium of the Association of Biomolecular Resource Facilities

Oral Presentation Abstracts


  • Straight up, with a Twist: Innovative Enzyme Cocktail to Improve DNA extractions of Metagenomic Samples
    Tara Rock New York University
    The ABRF Metagenomics Research Group (MGRG) strives to improve upon and advance metagenomics methodologies. To improve DNA extractions, the MGRG, in partnership with Millipore-Sigma, developed an enzyme cocktail containing six hydrolytic enzymes. The aim is to increase the yield of DNA extracted by applying this simple pre-treatment ahead of any downstream extraction kit of choice.
  • Adult and Fetal Globin Transcript Removal for mRNA Sequencing Projects
    Piotr Mieczkowski Department Of Genetics, Lineberger Comprehensive Cancer Center, SOM, University Of North Carolina At Chapel Hill
    Preterm birth (PTB) is delivery prior to 37 completed weeks of gestation, which occurs following spontaneous labor or is medically induced. Our ultimate goal is to perform mRNA-seq on umbilical cord blood and placentas obtained from pregnant women who deliver preterm and from matched controls who deliver at term. However, total RNA extracted from umbilical cord blood requires a globin depletion protocol applied prior to RNA sequencing. Globin mRNA contributes little high-value sequencing information, yet it can make up as much as 70% of the mRNA in a blood total RNA sample; it is therefore necessary to remove it. Removing globin mRNA and rRNA from a blood RNA sample enables deeper sequencing for discovery of rare transcripts and splice variants and reduces the number of expensive sequencing reads wasted on uninformative globin transcripts. In this presentation we demonstrate an optimized NuGEN globin reduction protocol for mRNA sequencing of adult and fetal samples.
  • Applications enabled by 600 base reads on the Ion S5™ System
    Madison Taylor Thermo Fisher Scientific
    Longer read lengths simplify genome assembly, haplotyping, metagenomics, and the design of library primers for targeted resequencing. Several new technologies were developed to enable the sequencing of templates with inserts over 600 bases: a fast isothermal templating technology, an ISP™ optimized for maximum template density, a new long-read sequencing polymerase, and instrument scripts that consume fewer reagents. We demonstrate the combination of these technologies to sequence 600-base DNA templates on an Ion S5 System and illustrate the applications enabled by these longer reads.
  • CTO
    Mostafa Ronaghi Illumina
    Recent advancements in genomic technologies are changing the scientific horizon, dramatically accelerating biomedical research. For wide implementation of these technologies, their accuracy, throughput, cost, and workflow need to be addressed. In the past ten years, the cost of full human genome sequencing has been reduced by 4-5 orders of magnitude, and it will be reduced by another 10-fold in the next few years. This year we introduced new tools that will enable large-scale biological studies at lower cost. In this talk, we discuss how these tools will accelerate the next wave of biological research.
  • The QUANTOM Tx™ Microbial Cell Counter, An Automated Rapid Single Cell Counter for Bacterial Cells
    John Kim Logos Biosystems, Inc.
    Accurately counting microbes in a sample is essential in many fields, including the food industry, water treatment plants, research labs, and clinical labs. There are several bacteria-counting methods, such as hemocytometry, spectrophotometry, flow cytometry, and colony counting, and each has its pros and cons. For example, using a hemocytometer and a microscope is an economical way to count bacterial cells, but it is tedious and prone to user subjectivity. Counting colony-forming units measures live cells, but it requires hours of incubation and effort. Here we introduce a new image-based automated microbial cell counter, the QUANTOM Tx™, that accurately and rapidly counts bacterial cells at single-cell resolution, generating an accurate count within 15 minutes. The workflow is as follows: bacterial cells are mixed with the QUANTOM™ Total Cell Staining Dye, a green fluorescent nucleic acid dye that stains both live and dead bacterial cells; cell loading buffer is added, and the mixture is loaded into the QUANTOM™ M50 Cell Counting Slide and centrifuged to immobilize and evenly distribute the cells throughout the counting chamber; the slide is then inserted into the QUANTOM Tx™ for imaging. The QUANTOM Tx™ captures up to 20 high-resolution images and counts the cells in each automatically. Its software can distinguish individual cells in various arrangements, such as tight clusters or chains, to produce accurate and reliable total bacterial cell counts. The QUANTOM Tx™ can therefore be a useful tool for researchers who routinely count bacterial cells, saving substantial time while delivering accurate and reliable results.
  • The NCI Research Specialist Award (R50)
    Christine Siemon National Cancer Institute, NIH
    The Research Specialist Award (R50) is a new NCI funding mechanism designed to encourage the development of stable research career opportunities for exceptional scientists who want to continue to pursue research within the context of an existing NCI-funded basic, translational, clinical, or population science cancer research program or core, but not serve as independent investigators. The award is intended to provide salary support and sufficient autonomy so that individuals are not solely dependent on NCI grants held by others for career continuity.

Genomics

  • 3D-Printed Continuous Flow PCR Microfluidic Device for Field Monitoring of Bacterial DNA
    Elizabeth Hénaff Weill Cornell Medicine
    Environmental metagenomics – measuring bacterial species and gene content from environmental samples – is relevant in many contexts, including agriculture, land stewardship, and disease outbreak monitoring. Indeed, soil bacteria have been shown to influence crop outcome, the response to harmful algal blooms is largely dependent on response time, and pathogen mapping is relevant on the urban scale. Many of these issues are of direct concern to the public, and citizen scientists are increasingly becoming part of such studies, enabling data collection at an unprecedented scale. Furthermore, engaging citizens to monitor their environment has a number of positive impacts on the public perception of human and environmental safety. Indeed, recruiting non-scientists for sample collection has enabled projects such as Ocean Sampling Day (OSD; https://www.microb3.eu/myosd/how-join-myosd), the PathoMap study (http://www.pathomap.org), and the MetaSUB consortium (http://metasub.org) by mobilizing large numbers of volunteers across the world. The sense of agency derived from the ability to personally monitor one’s environment enables data-driven discussions around the microbial milieu and can help alleviate, or justify, people’s concerns around the enforcement of best environmental practices. While high-throughput whole-genome sequencing, as used by PathoMap and MetaSUB, provides an in-depth view of environmental metagenomes and is necessary for full functional characterization, detection of indicator species or genes can often be sufficient to engage in a first response. Thus, detection of specific DNA markers, for microbial species or functional plasmid identification, would be relevant for context-specific data collection. Here we describe a cheap, scalable, and easy-to-use 3D-printed device that implements continuous-flow PCR for specific detection of DNA markers and is robust enough to be used by non-scientists. The hardware plans (3D print files and circuit designs) will be made available through the Open Source Hardware Association (http://www.oshwa.org/) so that they may be replicated and implemented freely.
  • Complementary approaches to profiling nascent protein synthesis in peripheral neurons in vivo and in vitro
    Zachary Campbell UT-Dallas Genome Center And The Dept. Of Biological Sciences
    Translational control is a dominant theme in neuronal plasticity. Messenger RNA (mRNA) is subject to dynamic regulation by multi-protein regulatory complexes. These large assemblies enable signal-dependent control of protein synthesis. Tremendous progress has been made on the proximal signaling events that control translation regulation in the nervous system leading to neuronal plasticity (e.g. learning and memory, LTP/LTD, neurodevelopmental disorders, and many forms of chronic pain). However, astonishingly little is known about the downstream mRNA targets that are translated to produce the new proteins that mediate this plasticity. We are establishing a novel resource that comprehensively captures nascent protein synthesis levels in sensory neurons called nociceptors using next-generation sequencing to profile translation. Chronic pain is characterized by persistent plasticity in nociceptors and is a devastating condition with a lifetime incidence greater than 33%. Poorly managed pain creates an enormous burden on our healthcare system and produces tremendous human suffering. This resource provides insight into how pain evoking stimuli trigger dynamic alterations in the landscape of protein synthesis thereby facilitating nociceptor plasticity. The data have clear implications for improved pain treatment and will serve as a paradigm for understanding neuronal plasticity in other areas of neuroscience.
  • Cross-Site Comparison of Ribosomal Depletion Kits for Illumina RNAseq Library Construction
    Stuart Levine MIT
    Ribosomal RNA (rRNA) comprises at least 90% of total RNA extracted from mammalian tissue or cell line samples. Informative transcriptional profiling using massively parallel RNA sequencing technologies requires either enrichment of mature poly-adenylated transcripts or targeted depletion of the rRNA fraction. The latter method is of particular interest because it is compatible with degraded samples such as those extracted from FFPE, and it also captures transcripts that are not poly-adenylated such as some non-coding RNAs. Here we provide a cross-site study that evaluates the performance of ribosomal RNA removal kits from Illumina, Takara/Clontech, Kapa Biosystems, Lexogen, New England Biolabs and Qiagen on intact and degraded RNA samples. We find that all of the kits are capable of performing significant ribosomal depletion, though there are large differences in their ease of use. Most kits perform well on both intact and degraded samples and all identify ~14,000 protein coding genes from the Universal Human Reference RNA sample at >1 FPKM, though the fraction of reads that are protein coding or in annotated lncRNAs varies between the different methodologies. These results provide a roadmap for labs on the strengths of each of these methods and how best to utilize them.
  • Offering Single-Cell RNA-Seq as a Core Service
    Anoja Perera Stowers Institute For Medical Research
    High-throughput transcriptome analysis of single cells enables gene expression measurement of individual cells and allows the discovery of heterogeneity within a given cell population. On the wet lab side, meeting the increasing demand for single-cell data can be challenging due to many factors, including cost of instrumentation, library construction, and labor requirements. Since this field is growing very rapidly, it also becomes a challenge to evaluate new instrumentation and make appropriate investments. Recently, we have explored several methods to address these challenges. Our initial efforts focused on cost reductions by reducing reaction volumes using small volume pipetting robots. Here, we looked at both the Formulatrix’s Mantis and TTP Labtech’s Mosquito. As more advanced single-cell instrumentation came on the market, we evaluated WaferGen’s ICELL8 and the 10X Genomics’ Chromium Single Cell System. These assessments have led to two options for single-cell RNA-Seq at our core. Using the 10X Genomics’ Chromium System, we are able to rapidly evaluate thousands of individual cells at a greatly reduced cost. Our second workflow involves setting up quarter-sized reactions using the Formulatrix’s Mantis robot. This method is used in studies where we want to further evaluate a selected population of cells as well as on rare cell populations. At the Stowers Institute for Medical Research, where the Molecular Biology Core Facility is a medium-sized operation, these two workflows allow us to meet the current single-cell transcriptomics needs.
  • An automated low-volume, high-throughput library prep for studying bacterial genomes
    Jon Penterman MIT
    In-depth genomic studies of bacterial isolates from clinical or environmental settings can be prohibitively expensive, and the most significant expense for such studies is the preparation of sequencing libraries. Sequencing libraries can be prepared by ligating adaptors to end-repaired DNA (many kits) or by tagmentation and PCR enrichment (Illumina NexteraXT). For large-scale studies, the NexteraXT kit is a popular choice because the two-step, single-tube protocol uses unprocessed gDNA as the input (versus fragmented gDNA for non-tagmentation protocols). Here we describe a high-throughput, low-volume NexteraXT protocol that significantly lowers library preparation costs. Central to this protocol is the Mosquito HTS robot, a small-volume liquid handler that aspirates and dispenses in 96- or 384-well format. We miniaturized the NexteraXT reaction to 1/12th the normal scale on the Mosquito and took advantage of a preexisting Tecan Evo robot to completely automate the remaining parts of the library prep service (normalization of DNA input for prep, library normalization, pooling). The sample dropout rate for this protocol is low, and overall sequencing coverage of both control and experimental samples is similar to that seen in NexteraXT libraries prepped at the normal scale. By reducing reagent usage and labor input on a per-sample basis, we have made large bacterial gDNA sequencing projects more financially feasible. (An illustrative reagent-scaling sketch follows the abstracts in this section.)
  • Accelerating Research with 3D Biology™: Simultaneous Single-molecule Quantification of DNA, RNA, and Protein Using Molecular Barcodes.
    Niro Ramachandran NanoString Technologies
    NanoString is pioneering the field of 3D Biology™ technology to accelerate the rate of research and maximize the amount of information that can be generated from a given sample. 3D Biology is the ability to analyze combinations of DNA (detection of SNVs and InDels), RNA (gene expression or fusion transcript detection), and protein (abundance and post-translational modifications) simultaneously on a NanoString nCounter® system. We will highlight the use of SNV technology in detecting cancer driver mutations, the utility of multiplexed DNA-labeled antibody approaches to quantify protein expression levels from small amounts of sample (both lysate and FFPE), and demonstrate the utility of multi-analyte analysis and the novel insights this approach can uncover.
  • Only the shallow know themselves: Deep sequencing of Single Cell RNA
    Seth Crosby Washington University
    Current methods for quantifying molecular states of cells often depend on estimating mean gene expression values from hundreds to millions of cells. Given the heterogeneity of cell populations, these mean values miss differences and, perhaps, interactions within a cell population. RNA with low copy number, which may exert important functions, is usually undetectable or regarded as noise in bulk cell-averaging methods. By combining a variety of small-volume library prep methods with massively parallel next-generation sequencing (NGS), single-cell RNA sequencing (scRNA-seq) provides RNA expression profiles of individual cells. As a result, scRNA-seq can identify rare cell types within a cell population and can reveal and track subpopulation structures. Uncommon transcripts, which are obscured in bulk sequencing, can be revealed. Even genetically identical cells, under the same environment, can display variability in gene and protein expression levels. I will discuss our experience with various scRNA platforms, the value of very deep sequencing of properly prepared scRNA libraries, and the importance of applying proper informatics tools to the various types (wide/shallow vs. narrow/deep) of scRNA-seq data.
  • Clinical Whole Genome Sequencing: Challenges, Opportunities and Insights from the First Thousand Genomes Sequenced
    Shawn Levy HudsonAlpha Institute For Biotechnology
    Major technological and computing advancements have allowed routine generation of whole-genome sequence data on hundreds of thousands of people over the last several years. As the ability to analyze and annotate genomes has improved, so has the clinical utility of sequencing, greatly enhancing the ability to discover and define the genetic causes of a wide variety of human phenotypes. In December of 2015 we launched a clinical laboratory focusing on whole-genome sequencing, and we have used that infrastructure to sequence over 1,000 genomes for translational and clinical projects, delivering results back to patients and secondary findings to parents. During the course of these studies, we have learned a number of valuable lessons and have more appropriately calibrated our expectations and uses for whole-genome sequencing. This presentation will highlight those successes and challenges and discuss the dynamic and powerful capabilities of genomics for both routine clinical use and the treatment of critically ill patients.
  • Automating CRISPR mutation detection and zygosity determination
    Kyle Luttgeharm Advanced Analytical Technologies
    While CRISPR gene editing is rapidly advancing and becoming more economical and efficient, protocols to identify CRISPR mutations and determine their zygosity remain time consuming and often involve costly sequencing steps. To overcome this limitation, many researchers are turning to heteroduplexing and enzymatic mismatch cleavage assays to rapidly screen for mutated lines. Despite this growing interest, few studies have been performed to optimize heteroduplex cleavage assays with respect to different mutations and lengths of PCR products. Additionally, sequencing has continued to be required for zygosity determination of individual diploid cell lines/organisms. Using the Advanced Analytical Technologies Inc. Fragment Analyzer™ Automated Capillary Electrophoresis System and synthetic genes that mimic different CRISPR mutations, we developed an optimized heteroduplex cleavage assay employing T7 Endonuclease I to detect a wide variety of common CRISPR mutations, including both insertions/deletions and single nucleotide polymorphisms. To decrease the number of individual cell lines sequenced, we developed statistical models that relate heteroduplex formation to the number of mutated alleles in individual diploid cell lines/organisms. This protocol allows for accurate prediction of monoallelic, diallelic homozygous, and diallelic heterozygous events. A single high-throughput protocol that cleaves multiple types of mutations while also determining the number of mutated alleles allows CRISPR mutant populations to be screened efficiently at a level not currently feasible. (A brief combinatorial sketch of expected heteroduplex fractions follows the abstracts in this section.)
  • Advancements in NGS sample preparation for low input and single cells
    Andrew Farmer Takara Bio USA Inc
    Experimental approaches involving RNA-seq and DNA-seq have led to significant advancements in fields such as developmental biology and neuroscience, and are increasingly being applied towards the development of diagnostics and novel treatments for human disease. Our SMARTer NGS portfolio for DNA and RNA sequencing enables generation of libraries from single-cell, degraded, and other low-input sample types. Together, our products cater not only to generating sequencing libraries from difficult-to-obtain samples, but also to all major applications (differential gene expression analysis, immune profiling, epigenomic profiling, target enrichment, mutation detection for low-frequency alleles, and copy number variation). SMARTer methods perform consistently across a range of sample types and experimental applications, and are capable of processing low-input sample amounts. In this talk, we will present recent developments in the DNA-seq and RNA-seq portfolio.
  • Current Innovations for Metagenomics used in Antarctica
    Scott Tighe University Of Vermont
    Novel advancements in genomics by the Metagenomics Research Group have made it possible to extract, isolate, and sequence DNA recovered from ancient microbial biofilms from buried paleomats in Antarctica. These techniques include high-performance DNA extraction protocols using the multi-enzyme cocktail branded as MetaPolyZyme combined with a new hybrid DNA extraction mag-bead kit. They allow for the recovery of high-molecular-weight DNA suitable for Oxford Nanopore sequencing, both in the Crary lab at McMurdo Station and in the field in Antarctica, as well as for high-resolution sequencing on the PacBio and Illumina systems.
  • MetaSUB: Metagenomics Across the World's Cities
    Ebrahim Afshinnekoo Weill Cornell Medicine/New York Medical College
    The Metagenomics and Metadesign of the Subways and Urban Biomes (MetaSUB) International Consortium is a novel, interdisciplinary initiative made up of experts across many fields, including genomics, data analysis, engineering, public health, and design. Temperature, air pressure, and wind currents are all routinely measured and considered in the design of the built environment; the microbial ecosystem is just as dynamic and should likewise be integrated into the design of cities. By developing and testing standards for the field and optimizing methods for urban sample collection, DNA/RNA isolation, taxa characterization, and data visualization, the MetaSUB International Consortium is pioneering an unprecedented study of urban mass-transit systems and cities around the world. These data will benefit city planners, public health officials, and designers, as well as enable the discovery of new species, biological systems, and biosynthetic gene clusters, thus enabling an era of more quantified, responsive, and “smarter” cities. During this talk we’ll share preliminary results from pilot studies carried out across the cities in the consortium, including taxa classification, functional analysis, and antimicrobial resistance markers.
  • A profusion of confusion in genomic methods
    James Hadfield CRUK Cambridge Institute
    The number of next-generation sequencing (NGS) methods has grown to almost 400 in the past ten years. Each method includes specific processing steps that adapt NGS to address an expanding range of genomic applications, allowing researchers to ask varied biological questions - but only if they know the method exists. Most methods are given names by their creators, but even the most commonly used method, “RNA-seq”, is often used to refer to very different methodological approaches or biological applications. Method naming has not been controlled, but organising methods and structuring their naming to allow new users to navigate the NGS publication landscape is overdue. I will present a few of the more confusing naming examples to highlight the problem, and describe a definitive list of methods, ordered by function, made available and maintained on a community wiki.
  • The eXtreme Microbiome Project “Down Under”: Metagenomic Adventures in the Southern Hemisphere.
    Ken McGrath Australian Genome Research Facility
    The eXtreme Microbiome Project (XMP) is a global scientific collaboration to characterize, discover, and develop new pipelines and protocols for extremophiles and novel organisms from a range of extreme environments. Two of the recent study locations are in the southern hemisphere: Lake Hillier, a bright pink hypersaline lake located on the Recherche Archipelago in Western Australia; and Lake Fryxell, a permanently frozen freshwater lake located in the Dry Valleys of Antarctica. Sampling in such remote environments comes with inherent challenges for sample collection and preservation, ranging from shark bite to frostbite. Despite these challenges, the XMP team has recovered samples of these microbial communities and has analysed the metagenomes using a range of sequencing platforms, revealing the composition of the microbial communities that inhabit these extreme environments of our planet. Our results identify an abundance of highly specialised organisms that thrive in these locations, and demonstrate the utility of third-generation sequencing platforms for in situ analysis of microbial communities.
  • Precision Metagenomics: Rapid Metagenomic Analyses for Infectious Disease Diagnostics and Public Health Surveillance
    Ebrahim Afshinnekoo Weill Cornell Medicine/New York Medical College
    Next-generation sequencing technologies have ushered in the era of Precision Medicine, transforming the way we diagnose and treat cancer patients. Subsequently, the advent of these technologies has created a surge of microbiome and metagenomics studies over the last decade, many of which are intent upon investigating the host-gene-microbial interactions responsible for the development of chronic disorders. As we continue to discover more information about the etiology of complex chronic diseases associated with the human microbiome, the translational potential of metagenomics methods for the treatment and rapid diagnosis of infectious diseases is also becoming abundantly clear. Here, we present a robust protocol for the utilization and implementation of “precision metagenomics” across various platforms on clinical samples. Such a pipeline integrates DNA/RNA extraction, library preparation, sequencing, and bioinformatics analysis for taxa classification, antimicrobial resistance marker screening, and functional analysis. Moreover, the pipeline is built towards three tracks: STAT for rapid 24-hour processing as well as Comprehensive and Targeted tracks that take 5-7 days for less urgent samples. We present some pilot data demonstrating the applicability of these methods on cultured isolates and finally, we discuss the challenges that need to be addressed for its full integration in the clinical setting.
  • Perspectives for Whole Cell Microbial Reference Materials
    J. Russ Carmical Baylor College Of Medicine
    With the exception of the Microbiome Quality Control (MBQC), very little has been published on best practices and reference standards for microbiome and metagenomic studies. As evidenced by recent publication trends, researchers are moving the field toward commercial development at a rapid pace.  If these analyses are to ever evolve into reliable assays (e.g. for clinical diagnostics), the measurement process must be regularly assessed to ensure measurement quality.  A key aspect of this validation is the routine analysis of reference materials as positive controls. A reference material (RM) is any stable, abundant, and well characterized specimen that is used to assess the quantitative and/or qualitative validity of a measurement process. The focus will be on the use of whole cell microbial reference materials to characterize metagenomic analyses from start to finish. We’ll discuss 3 key categories (environmental samples, in vitro models for microbial ecosystems, pure microbial isolates) of reference standards and the challenges in characterizing those standards.
  • Genomics Research Group 2016-17 study: A multiplatform evaluation of single-cell RNA-seq methods.
    Sridar Chittur University At Albany, SUNY
    The Genomics Research Group (GRG) presentation will describe the current activities of the group in applying the latest tools and technologies for single cell transcriptome analysis to determine the advantages and disadvantages of each of the platforms. This project involves the comparison of gene expression profiles of individual SUM149PT cells treated with the histone deacetylase inhibitor TSA vs. untreated controls. The goals of this project are to demonstrate RNA sequencing (RNA-seq) methods for profiling the ultra-low amounts of RNA present in individual cells, and to demonstrate the use of the various systems for cell capture and RNA amplification including Fluidigm, Wafergen, fluorescence activated cell sorting (FACS), 10x Genomics, and Illumina’s joint venture with BioRad on the ddSEQ platform. In this session, we will discuss the technical challenges, results from each of these projects, and some key experimental considerations that will help leverage optimal results from each of these technologies.
  • Genome in a Bottle: So you've sequenced a genome, how well did you do?
    Justin Zook National Institute Of Standards And Technology
    Six new extensively characterized whole-genome reference samples were released by the National Institute of Standards and Technology with the Genome in a Bottle (GIAB) Consortium in September 2016. These samples come from an Ashkenazim trio and a Chinese trio in the Personal Genome Project (PGP). Four of these are available from large batches of cell lines as NIST Reference Materials (RMs), in addition to the pilot GIAB genome (NA12878). Since the release of the pilot genome, we have developed our methods for forming high-confidence variant calls from multiple datasets into a robust, reproducible process that has been applied to five GIAB genomes for GRCh37 and GRCh38. The new high-confidence calls cover 88-90% of the genome, in comparison to ~78% of the genome covered by our previous version of calls from 2015. These data have been used extensively; e.g., the GIAB ftp site at NCBI has had ~50,000 downloads from ~1,000 unique IPs per month in 2016. We have also worked with the Global Alliance for Genomics and Health (GA4GH) Benchmarking Team to develop standardized performance metrics and tools to compare variant calls to our benchmark calls. GIAB is currently integrating short, long, and linked read data to form high-confidence characterizations of more difficult variants and difficult regions of the genome. GIAB is also exploring samples of additional ancestries and cancer samples for reference material development. (A brief sketch of the standardized benchmarking metrics follows the abstracts in this section.)
  • Metagenomic Analysis using the MinION Nanopore Sequencer
    Ken McGrath Australian Genome Research Facility
    Metagenomic DNA from microbial communities can be recovered from virtually anywhere on our planet, yet determining the composition of those communities remains a challenge, in part due to the bioinformatic complexity of assembling sequencing reads from a cocktail of similar genomes. We compare the Oxford Nanopore MinION platform to other sequencing platforms and demonstrate that the longer reads from the MinION can compensate for its higher error rates, resulting in comparable metagenomics profiles. The reduced sample processing time and the enhanced portability of the MinION make this platform a useful tool for studying microbial communities, particularly those in remote or extreme environments.
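
Relating to the automated low-volume NexteraXT library prep abstract above: a minimal cost-scaling sketch. Only the 1/12th miniaturization factor comes from the abstract; the full-scale reaction volume, kit price, and samples-per-kit figures below are illustrative placeholders, not vendor numbers.

```python
# Illustrative cost scaling for a miniaturized tagmentation library prep.
# Placeholder figures; only the 1/12th scale factor comes from the abstract.
FULL_SCALE_REACTION_UL = 50.0   # assumed full-scale total reaction volume (uL)
KIT_PRICE_USD = 2500.0          # assumed price of a 96-sample kit
SAMPLES_PER_KIT = 96
SCALE_FACTOR = 1.0 / 12.0       # miniaturization factor described in the abstract

def per_library_cost(scale: float) -> float:
    """Reagent cost per library, assuming reagent use scales linearly with volume."""
    return (KIT_PRICE_USD / SAMPLES_PER_KIT) * scale

print(f"reaction volume: {FULL_SCALE_REACTION_UL * SCALE_FACTOR:.1f} uL per library")
print(f"reagent cost: ${per_library_cost(1.0):.2f} at full scale vs "
      f"${per_library_cost(SCALE_FACTOR):.2f} miniaturized")
```

Under these placeholder numbers the reagent cost per library drops from roughly $26 to roughly $2, which is the sense in which miniaturization makes large bacterial gDNA sequencing projects more financially feasible.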
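
Relating to the CRISPR mutation detection and zygosity abstract above: a minimal combinatorial sketch, not the presenters' statistical model, of why expected heteroduplex fractions differ by genotype. It assumes PCR amplicons are denatured and randomly reannealed after spiking each sample 1:1 with a wild-type reference amplicon.

```python
# Expected heteroduplex fraction after denaturing and randomly reannealing an
# amplicon pool. This illustrates the underlying combinatorics only; it is not
# the statistical model described in the abstract.
def heteroduplex_fraction(allele_amounts: dict) -> float:
    """Probability that two randomly paired strands derive from different alleles."""
    total = sum(allele_amounts.values())
    fractions = [amount / total for amount in allele_amounts.values()]
    return 1.0 - sum(f ** 2 for f in fractions)

# Diploid genotypes (2 allele units each) spiked 1:1 with wild-type reference (2 WT units).
genotype_pools = {
    "wild type":              {"WT": 4},
    "monoallelic (WT/mutA)":  {"WT": 3, "mutA": 1},
    "diallelic homozygous":   {"WT": 2, "mutA": 2},
    "diallelic heterozygous": {"WT": 2, "mutA": 1, "mutB": 1},
}
for genotype, pool in genotype_pools.items():
    print(f"{genotype:24s} expected heteroduplex fraction = {heteroduplex_fraction(pool):.3f}")
```

The expected fractions (0, 0.375, 0.5, and 0.625 under these assumptions) are distinct, which is the intuition behind relating cleaved-fragment abundance to the number of mutated alleles.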
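
Relating to the Genome in a Bottle abstract above: a minimal sketch of the standardized benchmarking metrics (precision, recall, F1) reported when a query variant call set is compared against high-confidence benchmark calls. The counts are made-up placeholders, not GIAB results.

```python
# Standard variant-benchmarking metrics from true-positive / false-positive /
# false-negative counts; the example counts are placeholders.
def benchmark_metrics(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # fraction of query calls matching the benchmark
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # fraction of benchmark calls recovered by the query
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

print(benchmark_metrics(tp=3_200_000, fp=4_000, fn=16_000))
```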

Imaging

  • Admin Issues Facing Light Microscopy Cores
    Joshua Rappoport Northwestern University
    This workshop will focus on administrative issues specific to light microscopy core facilities. Kurt Anderson recently moved from the Beatson Institute in Glasgow to the Crick Institute in London, and he will describe his experiences starting a large new light microscopy core facility. Kurt will explain his vision for a light microscopy facility fully integrated into a larger infrastructure network including electron microscopy and computational data analysis. From the initial blueprints to the selection of instrumentation and hiring of staff, Kurt will detail how he put together what promises to be one of the most highly dynamic light microscopy core facilities, within one of the most elite research institutions, in the world. Phil Hockberger from Northwestern University will speak about light microscopy core facilities from both his perspective as one of the first light microscopists to develop a world-class NIH-funded two-photon microscopy facility and his more recent work overseeing all core facilities at a large research-intensive university. Phil will focus on issues such as best practices for the professional development of core staff, and marketing to and engaging with facility users. Phil will speak about the past, present, and future development of professional core facility scientist career paths, various routes to promotion within and beyond individual core facilities, productive relationships with faculty, and administrative oversight and support mechanisms. Finally, Phil will describe routes for facility sustainability and growth, including strategies for success in research collaborations and grant support from various mechanisms, including for upgrading and purchasing both replacement and novel light microscopy instrumentation.
  • Optogenetic probes to reveal and control cell signalling
    Robert E. Campbell University Of Alberta
    Biomolecular engineering of improved fluorescent proteins (FPs) and innovative FP-based probes has been a major driving force behind advances in cell biology and neuroscience for the past two decades. Among these tools, FP-based reporters (i.e., FP-containing proteins that change their fluorescence intensity or color in response to a biochemical change) have uniquely revolutionized the ability of biologists to ‘see’ the otherwise invisible world of intracellular biology and neural signalling. In this seminar I will describe our most recent efforts to use protein engineering to make a new generation of versatile FP-based tools optimized for in vivo imaging of neural activity. Specifically, I will present our efforts to convert red and near-infrared FPs into reporters for calcium ion, membrane potential, and neurotransmitters. In addition, I will briefly describe our most recent efforts to exploit FPs for optogenetic control of protein activity and gene expression.
  • Recent advances in super-resolution
    Sara Abrahamsson The Rockefeller University
    Super-resolution microscopy is a new and dynamic field that is finding useful application in biological research. Advanced instruments and techniques are increasingly accessible to biological researchers at imaging facilities that are open to external users. At the same time, the super-resolution field itself is rapidly evolving. New methods - and groundbreaking new twists to existing methods - are constantly being developed. Super-resolution methodologies that improve resolution in all three spatial dimensions (3D) - such as 3D structured illumination microscopy (3D SIM), 3D stimulated emission depletion (3D STED), and interferometric photoactivated localization microscopy (iPALM) - are particularly interesting, since structures and features in a biological specimen are most often distributed not simply next to each other but also above and below. Other crucial parameters in applied biomicroscopy are contrast and optical sectioning capability, which can often be more difficult to tackle than insufficient resolution. Finally, in the emerging field of live-cell super-resolution imaging, the major limiting factors are light dose and acquisition rate. Acquisition rate becomes especially challenging in 3D imaging, where volumetric information is obtained by scanning. I am currently addressing this issue in an imaging system that combines 3D SIM with multifocus microscopy (MFM) technology to provide 3D super-resolution live-cell imaging of dynamic biological processes.
  • The hidden life of protein zombies and their role in aging
    Martin Hetzer The Salk Institute For Biological Studies
    Age is the major risk factor for the development of neurodegenerative diseases such as Alzheimer’s disease (AD). Currently, AD alone impacts the lives of approximately 5 million Americans and their families. The disorder imposes an immense emotional burden on family members and caretakers. Compounding the problem is the reality that the number of patients will increase more than two-fold in the next 30 years and impose a financial cost of more than $1 trillion per year. One can hardly imagine the negative consequences for the wellbeing of our economies, our families and the future of mankind. The only proper response to this formidable challenge is to combat it through efforts that extend from the care of individual patients to the discovery of effective therapeutics to treat, and ideally, prevent it. We discovered a class of extremely long-lived proteins (LLPs) in the adult brain that functionally decline during aging. We speculate that biochemical changes and subsequent deterioration of LLPs may be responsible for the age-related impairment of cognitive performance and the onset/progression of neurodegenerative disorders such as AD. Proposed experiments will allow us to decipher the mechanisms underlying the functional integrity of LLPs and determine how they relate to pathologies in the brain.
  • Molecular Characterization of Leading Edge Protrusions in the Absence of Functional Arp2/3 Complex.
    Dorit Hanein Bioinformatics And Structural Biology Program, Sanford–Burnham Medical Research Institute
    Cells employ protrusive leading edges to navigate and promote their migration in diverse physiological environments. Classical models of leading edge protrusion rely on a treadmilling dendritic actin network that undergoes continuous assembly nucleated by the Arp2/3 complex, forming ruffling lamellipodia. Although the dendritic nucleation model has been rigorously evaluated in several computational studies, experimental evidence demonstrating a critical role for Arp2/3 in the generation of protrusive actin structures and cell motility has been far from clear. Most components of the pathway have been probed for their relevance by RNA interference or dominant-negative constructs. However, given that the Arp2/3 complex nucleates actin at nanomolar concentrations, even a dramatic knockdown could still leave behind a level sufficient to fully or partially support Arp2/3 complex-dependent functions. Our recent work centers on the characterization of fibroblast cells lacking a functional Arp2/3 complex. Characterization of the impact of the absence of functional Arp2/3 complex on these genetically matched cells included single-cell spreading assays, wound healing assays, long-term single-cell motility tracking, chemotaxis assays, and fluorescence staining imaged with confocal or structured illumination microscopy [1,2]. ARPC3-/- fibroblasts maintained an ability to move but exhibited a strong defect in persistent directional migration in both wound healing and chemotaxis assays, while migrating at rates similar to wild-type cells. Here, we will highlight our advances in determining the molecular-level organization of leading edge actin networks through an integrated approach that employs electron cryo-tomography of whole mammalian cells in conjunction with correlative light microscopy. We show by correlative fluorescence and cryo-tomography that the nanometer-scale actin-network organization of smooth lamellipodia in wild-type cells is replaced by massive, bifurcating actin-based protrusions with fractal geometry linked to self-organized criticality. Agent-based modeling shows that the Arp2/3 complex suppresses the formation of these protrusions by locally fine-tuning actin network morphology, providing the switch for directional movement. References: 1. Suraneni P, Rubinstein B, Unruh JR, Durnin M, Hanein D, and Li R. The Arp2/3 complex is required for lamellipodia extension and directional fibroblast cell migration. J Cell Biol 197, 239-251 (2012). 2. Suraneni P, Fogelson B, Rubinstein B, Noguera P, Volkmann N, Hanein D, Mogilner A, Li R. A mechanism of leading edge protrusion in the absence of Arp2/3 complex. Mol Biol Cell (2015). 3. Anderson KL, Page C, Swift MF, Suraneni P, Janssen ME, Pollard TD, Li R, Volkmann N, Hanein D. Nano-scale actin-network characterization of fibroblast cells lacking functional Arp2/3 complex. J Struct Biol (2016). 4. This work was supported by NIH program project grant P01 GM098412 and R01CA179087 (DH, NV). NIH grants S10 OD012372 (DH) and P01 GM098412-S1 (DH) funded the purchase of the Titan Krios TEM and Falcon II direct detection imaging device.
  • Lattice light sheet microscope: Technical considerations for core facilities
    Teng-Leong Chew Howard Hughes Medical Institute Janelia Research Campus
    Light sheet microscopy, one of the most significant technological advances in optical microscopy in recent years, has revolutionized the ability of biologists to visualize dynamic life processes in multicellular systems. By employing plane illumination, this technique greatly minimizes light exposure to the sample, as it limits the excitation light to the focal plane. In practice, however, the field of view of a plane illumination system is tied to the thickness of the light sheet created by a conventional Gaussian laser beam: to achieve a reasonable field of view, Gaussian light sheets are too thick to provide the resolution necessary for subcellular imaging. By switching to Bessel beams, Chen et al. (Science 346: 1257998, 2014) created a new kind of light sheet based on optical lattices that can be much thinner, achieving unprecedented axial resolution as well as signal-to-noise ratio. This breakthrough subsequently facilitated the engineering of a new class of plane illumination microscopes called lattice light sheet microscopes (LLSMs). The Advanced Imaging Center (AIC) at Howard Hughes Medical Institute Janelia Research Campus became the world’s first imaging center to offer the LLSM as a shared instrument. Recently, the LLSM has become more widely available through Janelia’s sharing of the instrument blueprint as well as via commercial sources. To help with the decision-making process for imaging center directors interested in acquiring the LLSM, the AIC will discuss its experience offering the LLSM as a multi-user instrument and examine the capabilities, limitations, operational pitfalls, and common misconceptions of this unique microscope.
  • Writing successful shared instrument grants for imaging instruments
    Richard Cole Wadsworth Center, State University Of New York
    Microscopy instruments are costly to purchase, and imaging technologies are evolving so rapidly that most institutions cannot afford to stay technologically abreast with internal funds. Instrumentation grants at the federal level thus become a critical source of funding to meet this challenge. There are two main grant funding sources for the acquisition of shared microscopy instruments: the Major Research Instrumentation (MRI) grant from the National Science Foundation and the Shared Instrumentation Grant (SIG) from the National Institutes of Health. Effective planning for a shared instrumentation grant typically starts almost a year prior to submission - surveying the user base for the needed technology, arranging instrument demos, and requesting internal cost-share funds or institutional commitment from the leadership of the home institution. A critical step is to get the identified “Major Users” to write descriptions of their research projects for the grant. These elements must be in hand before the actual writing of the grant can begin. These instrumentation grants are competitive, and the nuances of how applications are evaluated and scored may not be obvious to many core directors on their first application; missteps in any of these grant elements will jeopardize the chances of success. In this presentation, evaluation criteria, strategic planning, best practices, and commonly seen mistakes will be discussed, and we will walk through a case study.
  • Linking single-molecule dynamics to local cell activity
    Catherine Galbraith OHSU, OCSSB, KCI
    Advances in single-molecule microscopy have made it possible to obtain high-resolution maps of the inside of cells. However, technical difficulties still limit the ability to obtain dense fields of single molecules in live cells. Even more challenging, the disparity in spatial and temporal scales makes it difficult to explicitly connect molecular behaviors to cellular behaviors. Here we present an integrated, multi-scale live-cell imaging and data analysis framework that explicitly links transient spatiotemporal modulation of receptor density and mobility to cellular activity. Using integrin adhesion receptors, we demonstrate that variations in molecular behavior can predict localized cell protrusion. Integrins are the key receptors mediating cell-matrix connections and play a critical role in mediating linkages to the cytoskeleton essential for cell migration. Our framework uncovered an integrin spatial gradient across the entire morphologically active region of the cell, with the highest density and slowest diffusion simultaneously occurring at the cell edge. Moreover, we discovered transient increases in density and reductions in speed that indicated the onset of local cell protrusion. Despite the inherent heterogeneity of stochastically sampled molecules, our approach is capable of linking molecular behavior to cell behavior using >90% of the molecules imaged. Through the use of receptor mutants, we demonstrate that the distribution and mobility of receptors rely on unique binding domains. Moreover, our imaging and analysis framework can define unique molecular signatures dependent upon receptor conformational state. Thus, our studies demonstrate an explicit coupling between individual molecular dynamics and local cellular events, providing a paradigm for dissecting the molecular behaviors that underlie the cellular functions observed with conventional microscopes.
  • Imaging Drosophila Brain Activity
    Jing Wang UC San Diego
    Understanding how the action of neurons, synapses, circuits, and the interaction between different brain regions underlie brain function and dysfunction is a core challenge for neuroscience. Remarkable advances in brain imaging technologies, such as two-photon microscopy and genetically encoded activity sensors, have opened new avenues for linking neural activity to behavior. However, animals from insects to mammals exhibit ever-changing patterns of brain activity in response to the same stimulation, depending on the state of the brain and the body. The action of neuromodulators – biogenic amines, neuropeptides, and hormones – mediates rapid and slow state shifts over a timescale of seconds to minutes or even hours. Thus, it is imperative to develop a non-invasive imaging system that leaves the neuromodulatory system intact. The fruit fly Drosophila melanogaster is an attractive model organism for studying the neuronal basis of behavior, but most imaging studies in Drosophila require surgical removal of the cuticle to create a window for light penetration, which disrupts the intricate neuromodulatory system. Unfortunately, the infrared laser at the wavelengths used for two-photon excitation of currently available activity probes is absorbed by the pigmented cuticle. Here we demonstrate the ability to monitor neural activity in flies with an intact cuticle at cellular and subcellular resolution. The use of three-photon excitation overcomes the heating problem associated with two-photon excitation.
  • LMRG Study 3: 3D QC Samples for Evaluation of Confocal Microscopes
    Erika Wee ABIF McGill
    Here we present the third study of the Association of Biomolecular Resource Facilities (ABRF) Light Microscopy Research Group (LMRG). In the LMRG, our goal is to promote scientific exchange between researchers, specifically those in core facilities, in order to increase our general knowledge and experience. We seek to provide a forum for multi-site experiments exploring “standards” for the field of light microscopy. The study is aimed at creating a 3D, biologically relevant test slide and imaging protocol to test 1) system resolution and distortions in 2D and 3D, 2) the dependence of intensity quantification and image signal-to-noise on imaging depth, and 3) the dependence of microscope sensitivity on imaging depth.
  • Case Studies in Modern Bioimage Analysis: 3D, Machine Learning, and Super Resolution
    Hunter Elliott Harvard Medical School
    Recent advances in 3D imaging and the increasing popularity of 3D model systems have resulted in a proliferation of higher-dimensional microscopy data. Developments in 2D and 3D super-resolution have produced data at unprecedented length scales. Simultaneously, machine learning has become increasingly dominant within the computer vision field. The confluence of these trends has made it an exciting time to be a bioimage analyst. We will present several short vignettes highlighting these trends: 3D analysis ranging from millimeter-scale tissue samples to nanoscale subcellular structures; supervised deep learning for cellular classification from phase-contrast images; and unsupervised machine learning for STORM super-resolution data analysis and phenotypic profiling. (An illustrative clustering sketch follows the abstracts in this section.)
  • IsoView: High-speed, Live Imaging of Large Biological Specimens with Isotropic Spatial Resolution
    Raghav Chhetri HHMI Janelia Research Campus
    To image fast cellular dynamics at isotropic, sub-cellular resolution across a large specimen, with high imaging speed and minimal photo-damage, we recently developed isotropic multiview (IsoView) light-sheet microscopy. IsoView microscopy images large specimens at high spatio-temporal resolution via simultaneous light-sheet illumination and fluorescence detection along four orthogonal directions. The four views in combination yield a system resolution of 450 nm in all three dimensions after high-throughput multiview deconvolution. IsoView enables longitudinal in vivo imaging of fast dynamic processes, such as cell movements in an entire developing embryo and neuronal activity throughout an entire brain or nervous system. Using IsoView microscopy, we performed whole-animal functional imaging of Drosophila embryos and larvae at a spatial resolution of 1.1-2.5 microns and at a temporal resolution of 2 Hz for up to 9 hours. We also performed whole-brain functional imaging in larval zebrafish and multicolor imaging of fast cellular dynamics across entire, gastrulating Drosophila embryos with isotropic, sub-cellular resolution. Compared with conventional light-sheet microscopy, IsoView microscopy improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold. Additionally, IsoView microscopy effectively doubles the penetration depth and provides sub-second temporal resolution for specimens 400-fold larger than could previously be imaged with other high-resolution light-sheet techniques.
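
Relating to the bioimage analysis abstract above: a minimal, hypothetical sketch of unsupervised clustering applied to a table of single-molecule localization coordinates such as those produced by STORM. Density-based clustering (here DBSCAN from scikit-learn) is one common choice; the simulated data and parameter values are illustrative assumptions, not taken from the talk.

```python
# Group simulated 2D localization coordinates (in nm) into clusters with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
cluster_a = rng.normal(loc=(100.0, 100.0), scale=15.0, size=(200, 2))   # tight cluster
cluster_b = rng.normal(loc=(400.0, 250.0), scale=15.0, size=(150, 2))   # second cluster
background = rng.uniform(0.0, 500.0, size=(100, 2))                     # sparse background
localizations = np.vstack([cluster_a, cluster_b, background])

labels = DBSCAN(eps=30.0, min_samples=10).fit(localizations).labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise points
print(f"clusters found: {n_clusters}, noise localizations: {int((labels == -1).sum())}")
```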

Proteomics

  • An ‘Omics Renaissance or Stuck in the Dark Ages? Monitoring and Improving Data Quality in Clinical Proteomics and Metabolomics Studies
    J. Will Thompson Duke Proteomics And Metabolomics Shared Resource
    Analysis of proteins and metabolites by mass spectrometry is currently enjoying a renaissance in many ways. Identification of unknown proteins, and even localization of post-translational modifications with high precision and on a grand scale, is routine. It is possible to qualitatively and quantitatively profile thousands of protein groups, or metabolite features, in a single sample with a single analysis. Multiplexed targeted LC-MS/MS can quantify hundreds of analytes in only a few minutes. Exciting new sample preparation, mass spectrometry, and data analysis techniques are emerging every day. However, significant challenges still exist for our field. Compared to genomic approaches, coverage is still limited. Compared to the longitudinal precision and accuracy required in the clinical diagnostic use of mass spectrometry, ‘omics techniques lag far behind. And in terms of the throughput required for addressing the Precision Medicine Initiative, most proteomics and metabolomics analyses take far too long and are far too expensive. This presentation will focus on practical techniques implemented in our laboratory and others to address some of these shortcomings, including efforts to improve the throughput of unbiased proteomics profiling, to track precision and bias in proteomic and metabolomic experiments using the Study Pool QC, and to use reference pools for longitudinal performance monitoring in targeted quantitative proteomics and metabolomics studies. (An illustrative QC-pool monitoring sketch follows the abstracts in this section.)
  • Patching Holes in Your Bottom-up Label and Label-free Quantitative Proteomic Workflows
    Tony Herren University Of California, Davis
    Reliable quantitation of label and label-free mass spectrometry (MS) data remains a significant challenge despite much progress in the field on both the hardware and software fronts. In particular, proper quantitation and control within the workflow prior to mass spectrometry data-dependent acquisition (DDA) is of crucial and often overlooked importance. Here, we will explore several of these “pre-mass spec” topics in greater detail, including quality control and optimization of sample preparation and liquid chromatography. Specifically, the importance of accurate quantitation of protein and peptide inputs and outputs during sample processing steps (extraction, digestion, C18 cleanup, enrichment/depletion) for both label and label-free workflows will be addressed. Protein recovery data from our lab suggest that recoveries throughout the sample preparation process involve significant losses and must be empirically determined at each step. Missed cleavages during enzymatic digestion, poor labeling efficiency, and the addition of enrichment and/or depletion steps to sample workflows are all confounding factors for label and label-free quantitation and will also be discussed. Additionally, the influence of liquid chromatography performance on DDA quantitation across multiple sample runs in label and label-free workflows will be examined, including the effects of retention time drift, ambient environmental conditions, gradient length, peak capacity, and instrument duty cycle. These issues will be discussed in the context of a typical core facility bottom-up proteomics workflow, with practical tools and strategies for addressing them. Finally, a comparison of quantitative sensitivity will be made between sample data acquired using label-free MS and tandem mass tag (TMT 10-plex) data acquired using different MS parameters on a Thermo Fusion Lumos. (An illustrative recovery-tracking sketch follows the abstracts in this section.)
  • Spatial Metabolomics via MALDI Imaging Mass Spectrometry: A Case Study in Lysosomal Storage Disease, Gangliosides and Gene Therapy
    Scott Shaffer University Of Massachusetts Medical School
    Gangliosides are glycosphingolipids composed of a ceramide base and a carbohydrate chain containing one or more sialic acids. GM1 gangliosidosis is an autosomal recessive lysosomal storage disorder caused by an enzyme deficiency of β-galactosidase, leading to toxic accumulation of GM1 in the central nervous system and progressive neurodegeneration. Gene therapy-mediated delivery of a viral vector encoding the enzyme has shown great potential for the treatment of such diseases by restoring deficient enzyme levels. In this work, MALDI imaging mass spectrometry is used to measure the spatial distribution of gangliosides, ganglioside metabolites, and lipids in a GM1 gangliosidosis mouse brain model, including animals following adeno-associated virus (AAV) gene therapy. Data were acquired with a Nd:YAG laser at 60 µm spatial resolution using a Waters Synapt G2-Si MALDI mass spectrometer with integrated travelling wave ion mobility separation. Overall, we demonstrate an approach that measures gangliosides and their metabolites with high molecular specificity while also offering the ability to detect unanticipated, off-target effects induced by both disease and gene therapy. Key aspects of building a tissue imaging capability within a core facility will be discussed.
  • Chemical Proteomics Reveals the Target Space of Hundreds of Clinical Kinase Inhibitors
    Bernhard Kuster Technical University Of Munich
    Kinase inhibitors have developed into important cancer drugs because de-regulated protein kinases often drive the disease. Efforts in biotech and pharma have resulted in more than 30 such molecules being approved for use in humans, and several hundred are undergoing clinical trials. As most kinase inhibitors target the ATP binding pocket, selectivity among the 500 human kinases is a recurring question. Polypharmacology can be beneficial as well as detrimental in clinical practice; hence, knowing the full target profile of a drug is important but rarely available. We have used a chemical proteomics approach termed kinobeads to profile 240 clinical kinase inhibitors in a dose-dependent fashion against a total of 320 protein kinases and some 2,000 other kinobead-binding proteins. In this presentation, I will outline how this information can be used to identify molecular targets of toxicity, re-purpose existing drugs or combinations for new indications, or provide starting points for new drug discovery campaigns.
  • iPRG2016: A new submission interface and the ideas behind it
    Joon-Yong Lee Pacific Northwest National Laboratory
    For the annual Proteome Informatics Research Group (iPRG) Study, submissions have traditionally been collected via participants' FTP uploads. Although this is simple for participants, it does not support format validation. Because results were not required to follow a specific format, method descriptions were often incomplete, which limited the reproducibility of the study. To tackle these issues, this presentation describes how we set up the iPRG2016 website. We will describe how we built a new submission interface that adopts GitHub interaction and a format validator to improve the automated submission process. Moreover, we will show how Python notebooks and R Markdown documents kept in a GitHub repository help to readily and transparently share methodologies and promote continued community participation.
  • Chemical Isotope Labeling LC-MS for Routine Quantitative Metabolomic Profiling with High Coverage
    Liang Li University Of Alberta
    A key step in metabolomics is to perform relative quantification of metabolomic changes among different samples. High-coverage metabolomic profiling will benefit metabolomics research in systems biology and disease biomarker discovery. To increase coverage, multiple analytical tools are often used to generate a combined metabolomic data set. The objective of our research is to develop and apply an analytical platform for in-depth metabolomic profiling based on chemical isotope labeling (CIL) LC-MS. It uses differential isotope mass tags to label a metabolite in two comparative samples (e.g., 12C-labeling of an individual sample and 13C-labeling of a pooled sample or control), followed by mixing the light-labeled sample and the heavy-labeled control and LC-MS analysis of the resultant mixture. Individual metabolites are detected as peak pairs in MS. The MS or chromatographic intensity ratio of a peak pair can be used to measure the relative concentration of the same metabolite in the sample vs. the control. For a metabolomics study involving the analysis of many different samples, the same heavy-labeled control is spiked to all the light-labeled individual samples. Thus, the intensity ratios of a given peak pair from LC-MS analyses of all the light-/heavy-labeled mixtures reflect the relative concentration changes of a metabolite in these samples. CIL LC-MS can overcome the technical problems such as matrix effects, ion suppression and instrument drifts to generate more precise and accurate quantitative results, compared to conventional LC-MS. CIL LC-MS can also significantly increase the detectability of metabolites by rationally designing the labeling reagents to target a group of metabolites (e.g., all amines) to improve both LC separation and MS sensitivity. In this presentation, recent advances in CIL LC-MS for quantitative metabolomics will be described and some recent applications of the technique for disease biomarker discovery research as well as biological studies will be discussed.
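    To illustrate the peak-pair logic, here is a minimal Python sketch that pairs light/heavy signals separated by an assumed labeling mass difference and reports their intensity ratios; the mass shift and tolerance are illustrative assumptions, not the values used on this platform.

        # Toy peak-pair detection for chemical isotope labeling (CIL) LC-MS.
        LIGHT_HEAVY_DELTA = 2.0067  # assumed mass shift between light- and heavy-labeled forms (Da)
        MZ_TOLERANCE = 0.005        # assumed matching tolerance (Da)

        def pair_ratios(peaks):
            """peaks: list of (mz, intensity) from one co-eluting spectrum.
            Returns (light_mz, light/heavy intensity ratio) for each pair found."""
            peaks = sorted(peaks)
            ratios = []
            for i, (mz_light, int_light) in enumerate(peaks):
                target = mz_light + LIGHT_HEAVY_DELTA
                for mz_heavy, int_heavy in peaks[i + 1:]:
                    if abs(mz_heavy - target) <= MZ_TOLERANCE and int_heavy > 0:
                        ratios.append((mz_light, int_light / int_heavy))
                        break
            return ratios

        # Example: a light/heavy pair at m/z 175.119 / 177.126 with ~2:1 intensity
        print(pair_ratios([(175.119, 2.0e6), (177.126, 1.0e6), (180.050, 5.0e5)]))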
  • Characterization of the myometrial proteome in disparate states of pregnancy using SPS MS3 workflows.
    David Quilici University Of Nevada Reno
    Reliable quantitative analysis of global protein expression changes is integral to understanding mechanisms of disease. Global expression of myometrial proteins involved in premature induction of labor compared with normal induction of labor was analyzed by isobaric labeling with a TMT 10-plex using MultiNotch MS3. In this study we looked at technical and biological reproducibility in addition to the comparison of pre-term labor to term labor myometrial tissue samples. We found a very low level of variation in the technical (<0.01%) and biological (0.05%) replicates. Within the comparative study we identified over 4,000 protein groups with high confidence (FDR < 0.05), and ~400 of these showed a significant change between the two groups. Affected pathways were then identified using Ingenuity Pathway Analysis (IPA) software. Further analysis was performed using the targeted TMT approach known as TOMAHAQ (Triggered by Offset, Multiplexed, Accurate-Mass, High-Resolution, and Absolute Quantification) on 38 peptides and phosphopeptides corresponding to proteins within the identified pathways affected by premature induction, in an effort to determine the role of phospho-signalling.
  • Workflow Interest Network Research Project Presentation
    Emily Chen Columbia University Medical Center
    A new research group, the Workflow Interest Network (WIN), was established in 2016. Our current focuses are 1) to collaborate with other ABRF members and mass spectrometry-based research groups to identify key factors that contribute to poor reproducibility and inter-laboratory variability, and 2) to propose benchmarks for MS-based proteomics analysis as well as quality control procedures to improve reproducibility. In 2016, we have launched a test study to examine the LC-MS/MS performance among 10 MS-based core laboratories, using two sets of peptide standards and complex lysates. Preliminary analysis of the test study will be presented in the concurrent workshop. We are highly encouraged by the results of our test study. We believe that it should be possible to promote scientific reproducibility by comparing different analytic platforms and providing benchmarks for instrument performance based on the observed capabilities across a large number of datasets. Bioinformatics tools that allow rapid analysis and evaluation of LC and MS performance will also be discussed. Finally, we will announce an expansion of our test study to the broader community, inviting laboratories to participate and contribute to this benchmarking study. The study is designed to require only a very reasonable time commitment, and participating laboratories will gain essential information on their instruments’ performance in the course of helping to build a valuable benchmarking and QC resource.
  • MS1-based quantification of low abundance proteins that were not identified using an MS/MS database search approach
    Yan Wang University Of Maryland
    With developments in instrumentation and informatics tools, it is becoming routine to identify more than 2,000 proteins in whole cell lysates via a shotgun proteomics approach. A remaining challenge is to assess changes in abundance of proteins that are at the limit of detection. In the data-dependent acquisition (DDA) approach, the same peptide is often identified in one sample but missed in another. Presumably the missing data is due to the precursor not being isolated for fragmentation. In such cases, relative quantification could be determined from the MS1 peak intensity after using features such as accurate mass and retention time to identify the correct MS1 signal. In this study, we evaluated 4 different bioinformatic approaches in their ability to perform this analysis. In the PRG 2016 study, 4 non-mammalian proteins were spiked into 25 µg of whole HeLa cell lysate at 4 different levels: 0, 20, 100, and 500 fmol. Six non-fractionated datasets, encompassing 4 separate analytical runs each and including analyses on Orbitrap Fusion, Q Exactive, and Orbitrap Velos instruments, were selected for further study. In each set at least 1 peptide from each of the 4 spiked-in proteins was identified in at least one sample. Peptides from spiked-in proteins were quantified after using retention time and accurate mass information for identification. Programs used were PEAKS (Bioinformatics Solutions, Inc.), Progenesis (Waters Corp.), MaxQuant (Max Planck Institute of Biochemistry), and Skyline (University of Washington). All evaluated software packages extracted quantitative information from MS1 spectra that did not yield peptide spectral matches in samples with low concentrations of spike-in proteins. False quantification of peptides in the zero spike-in sample was observed. This is attributed to carry-over between runs and mis-assignment of noise in the signal.
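    The core idea, recovering an MS1 intensity for a peptide identified in one run but not selected for MS/MS in another by matching on accurate mass and retention time, can be sketched in a few lines of Python; the tolerances below are assumptions for illustration and do not correspond to any of the evaluated packages.

        PPM_TOL = 10.0       # assumed mass tolerance (parts per million)
        RT_TOL_MIN = 1.0     # assumed retention-time tolerance (minutes)

        def ms1_intensity(features, target_mz, target_rt):
            """features: list of (mz, rt_minutes, intensity) MS1 features from the run
            lacking an identification. Returns the most intense match, or None."""
            matches = [
                (mz, rt, inten) for mz, rt, inten in features
                if abs(mz - target_mz) / target_mz * 1e6 <= PPM_TOL
                and abs(rt - target_rt) <= RT_TOL_MIN
            ]
            return max(matches, key=lambda f: f[2]) if matches else None

        # A peptide identified in another run at m/z 652.8412, RT 43.2 min
        print(ms1_intensity([(652.8410, 43.5, 8.1e5), (652.9000, 43.4, 2.0e6)],
                            652.8412, 43.2))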
  • sPRG-ABRF 2016-2017: Development and Characterization of a Stable-Isotope Labeled Phosphopeptide Standard
    Antonius Koller Columbia University
    The mission of the ABRF proteomics Standards Research Group (sPRG) is to design and develop standards and resources for mass-spectrometry-based proteomics experiments. Recent advances in methodology have made phosphopeptide analysis a tractable problem for core facilities. Here we report on the development of a two-year sPRG study designed to target various issues encountered in phosphopeptide experiments. We have constructed a pool of heavy-labeled phosphopeptides that will enable core facilities to rapidly develop assays. Our pool contains over 150 phosphopeptides that have previously been observed in mass spectrometry data sets. The specific peptides have been chosen to cover as many known biologically interesting phosphosites as possible, from seven different signaling pathways: AMPK signaling, death and apoptosis signaling, ErbB signaling, insulin/IGF-1 signaling, mTOR signaling, PI3K/AKT signaling, and stress (p38/SAPK/JNK) signaling. We feel this pool will enable researchers to test the effectiveness of their enrichment workflows and to provide a benchmark for a cross-lab study. Currently, the standard is being tested in the sPRG members' laboratories to establish its properties. Later this year we will invite ABRF members and non-members to participate in the second half of our study, using this controlled standard in a HeLa S3 background to evaluate their phosphoproteomic data acquisition and analysis workflows. We hope this standard is helpful in a number of ways, including enabling phosphopeptide sample workflow development, as an internal enrichment and chromatography calibrant, and as a pre-built biological assay for a wide variety of signaling pathways.
  • Systems Biology Guided by Metabolomics
    Gary Siuzdak Scripps Center For Metabolomics And Mass Spectrometry
    Systems-wide analysis has been designed and implemented into our cloud-based metabolomic platform (XCMSOnline.scripps.edu and METLIN.scripps.edu) to guide large scale multi-omic experiments. This data streaming autonomous approach superimposes metabolomic data directly onto metabolic pathways, which is then integrated with transcriptomic and proteomic data. To date, the utility of this platform has been demonstrated on thousands of studies and implemented within XCMS' smartphone app. Here I will focus on the technology and demonstrate its utility and the insight it has provided in examining therapeutic remyelination in multiple sclerosis and neurodegeneration in HIV.
  • The iPRG-2016 Proteome Informatics Research Group Study: Inferring Proteoforms from Bottom-up Proteomics Data
    Magnus Palmblad Leiden University
    In 2016, the ABRF Proteome Informatics Research Group (iPRG) conducted a study on proteoform inference from bottom-up proteomics data. For this study, we acquired data from samples spiked with overlapping oligopeptides, so-called Protein Epitope Signature Tags, recombinantly expressed in E. coli into a background of E. coli proteins. This is a unique dataset in that we have ground truth on the proteoform composition of each sample, and each sample contains hundreds of different proteoforms. Tandem mass spectra were acquired on a Q Exactive Orbitrap in data-dependent acquisition mode and made available in raw and mzML formats. Participants were asked to use a method of their choosing and report the false-discovery rate or posterior error probabilities for each proteoform in a list provided in the FASTA format. In general, most participants solved the task well, but some differences were observed, suggesting possible improvements and refinements. This time we also decided to demonstrate different ways to improve and add value to such studies. We therefore created a dedicated website on a virtual private server to contain all aspects of the study, including the submission interface. We had learned from previous studies that it is difficult for some participants to submit their results in the desired format, no matter how carefully specified. This had generated substantial additional work during the evaluation phase of previous iPRG studies. In the 2016 study, we therefore built a submission parser/validator that checked whether a submission was provided in the correct format before accepting it. The validator also provided feedback to the submitter when the uploaded results were not in the correct format. This also helped in the evaluation and comparison of submissions. Another novelty in the 2016 study was the acceptance of submissions in R Markdown or IPython notebook formats, containing both an explanation of the method and an executable script to rerun and compare submissions. These methods will be anonymized and made available for reuse by researchers conducting this type of analysis. The study will therefore live on, and as new methods and software become available, these can be benchmarked against the best solutions at the time of the study. It is also possible to combine elements from several submitted methods. All code running on the VPS behind the iPRG 2016 website, including the submission validator, will be available to other ABRF Research Groups. In the presentation, we will also discuss lessons learned from the novel technical aspects of this iPRG study, including the use of R Markdown and IPython notebooks.
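    As a flavor of what such a pre-acceptance check can look like, here is a minimal Python sketch of a submission validator; the required column names and tab-separated layout are assumptions for illustration, not the actual iPRG2016 specification.

        import csv
        import sys

        REQUIRED_COLUMNS = {"proteoform_id", "posterior_error_probability"}  # assumed fields

        def validate(path):
            """Return a list of problems found in a tab-separated submission file."""
            errors = []
            with open(path, newline="") as handle:
                reader = csv.DictReader(handle, delimiter="\t")
                missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
                if missing:
                    return [f"missing required columns: {sorted(missing)}"]
                for line_no, row in enumerate(reader, start=2):
                    try:
                        pep = float(row["posterior_error_probability"])
                    except ValueError:
                        errors.append(f"line {line_no}: probability is not numeric")
                        continue
                    if not 0.0 <= pep <= 1.0:
                        errors.append(f"line {line_no}: probability {pep} is outside [0, 1]")
            return errors

        if __name__ == "__main__":
            problems = validate(sys.argv[1])
            print("submission accepted" if not problems else "\n".join(problems))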

Trending Topics

  • Highly multiplexed simultaneous measurement of cell-surface proteins and the transcriptome in single cells.
    Marlon Stoeckius New York Genome Center
    Large-scale, unbiased identification of distinct cell types in complex cell mixtures has been enabled by recent advances in high-throughput single-cell transcriptomics. However, these methods are unable to provide additional phenotypic information, such as the protein levels of well-established cell surface markers. Current approaches to simultaneously detect and/or measure transcripts and proteins in single cells are based on 1) indexed cell sorting in combination with RNA-sequencing or 2) proximity ligation/extension assays in combination with digital PCR. These assays are limited in scale and/or can only profile a few genes and proteins in parallel. To overcome these limitations, we have devised a method, Cellular Indexing of Transcriptomes and Epitopes by sequencing (CITE-seq), that combines unbiased genome-wide expression profiling with the measurement of specific protein markers in thousands of single cells using droplet microfluidics. We conjugate monoclonal antibodies to oligonucleotides containing unique antibody identifier sequences. We then label a cell suspension with DNA-barcoded antibodies, and single cells are subsequently encapsulated into nanoliter-sized aqueous droplets in a microfluidic apparatus. In each droplet, antibody and cDNA molecules are indexed with the same unique barcode and are converted into libraries that are amplified independently and mixed in appropriate proportions for sequencing in the same lane. In proof-of-principle experiments using a suspension of mixed human and mouse cells and established high-throughput single cell sequencing protocols, we unambiguously identify human and mouse cells based on their species-specific cell surface proteins and, independently, on their transcriptomes. We then use CITE-seq to classify cells in the immune system, which has been extensively characterized at the level of cell surface marker expression. We show that we are able to achieve better resolution of cell types by adding an extra dimension to the data. CITE-seq allows in-depth characterization of single cells by simultaneous measurement of gene-expression levels and cell-surface proteins; it is highly scalable and limited only by the number of specific antibodies that are available.
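    A simplified Python sketch of the antibody-tag counting that follows sequencing is shown below; the read layout, barcode lengths, and antibody whitelist are illustrative assumptions rather than the protocol's actual design.

        from collections import Counter, defaultdict

        ANTIBODY_BARCODES = {"ACGTAC": "anti-CD4", "TTGCAA": "anti-CD8"}  # hypothetical whitelist

        def count_adts(reads, cell_bc_len=16, ab_bc_len=6):
            """reads: sequences laid out as [cell barcode][antibody barcode]...
            Returns {cell_barcode: Counter({antibody_name: count})}."""
            counts = defaultdict(Counter)
            for read in reads:
                cell_bc = read[:cell_bc_len]
                antibody = ANTIBODY_BARCODES.get(read[cell_bc_len:cell_bc_len + ab_bc_len])
                if antibody is not None:  # skip reads whose tag is not on the whitelist
                    counts[cell_bc][antibody] += 1
            return counts

        reads = ["AAAACCCCGGGGTTTT" + "ACGTAC", "AAAACCCCGGGGTTTT" + "TTGCAA"]
        print(count_adts(reads))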
  • iABRF Roundtable Discussion: Advancing Core Technology Sciences and Communication through Establishment of a Worldwide Core Federation
    Timothy Hunter University Of Vermont
    Core facility laboratories are critical to the research mission of the institute and community they serve. Many core-focused associations, societies, and workshops have been established worldwide with similar goals of advancing the core sciences. The ABRF has an interest in identifying ways it could best encourage the establishment and growth of core facility organizations and meetings around the world, and in ways it could best interact and coordinate with such organizations and meetings. Toward this end, the International ABRF Committee supports working collaboratively with existing and emerging worldwide groups toward the establishment of a worldwide federation of core facility interest groups, in which the core facility interest groups from different countries are all treated and represented as equals. The mission of this federation would be to ensure international communication and collaboration around core facility matters. The benefits to the ABRF would be (1) increased ABRF participation in efforts to advance core science and administration worldwide, and (2) coordination and collaboration with core facility interest groups around the world. This session will include representation from numerous other core interest groups to help ascertain common goals, their mandates, and how we might form and interact under such a worldwide federation of core facility interest groups. The session will encourage attendee interaction and will be presented in a roundtable format.
  • Channel-free dead cell exclusion in FACS/flow cytometry (or "n+1 into n" does go)
    Roy Edward BioStatus Limited
    Dead cell exclusion is a common requirement for flow cytometry on fresh, unfixed samples. In multi-colour phenotyping this occupies one or more channels reducing dimensionality and options for antibody-chromophore pairings. This limitation is obvious but further exacerbated if dead cell exclusion is identified or demanded retrospectively. Then, panel re-formatting into two tubes (linking antigens increase cost) or transfer to a higher dimensionality platform may be required. Importantly, additional demands are then placed on core facility capacity / throughput. Meanwhile, dead cell exclusion with DAPI occupies channels valuable for the new violet-excited chromophores and, alternatively, propidium iodide occludes R-PE, a bright and widely conjugated chromophore. A novel yet simple remedy is achieved using DRAQ7, an anthraquinone-based far-red fluorescing viability probe, validated in flow cytometry and fluorescence microscopy. Its absorbance spectrum and practice show DRAQ7 can be detected using excitation wavelengths of blue to red lasers; potentially then by two lasers in one analysis. This unique property locates DRAQ7+ events in a unique region of bivariate plots for available far-red/NIR channels. A “live” gate is then set on the preferred bivariate plot for that experimental set-up, excluding dead cells in all channels. In one example, a high-throughput flow cytometry platform limited to 4-channels was hampered by use of propidium iodide reducing antigen phenotyping to 3 channels, bleed through to other channels, and reducing from 16 to 8 the finite number of phenotypes that might be elicited from the antigen analysis. Substitution with DRAQ7 re-enabled use of all 4 channels for antigen expression analysis. It is noteworthy that there is no need for compensation and that this simple method can be applied retrospectively (e.g. on reviewers' request) without disturbing antibody panel design. Due to DRAQ7’s previously demonstrated ultra-low toxicity this method can also be used in cell sorting.
  • Name It! Store It! Protect It! Identifying Data Management Protocols for Core Facilities
    Matthew Fenchel Cincinnati Children’s Hospital Medical Center
    This 75-minute session is packed with foundational principles to give core directors and administrators a 101-level understanding of file naming, data storage, and data management. Four speakers will present best practices for naming files, managing data storage – especially for large files – and protecting research data. We will also provide a short discussion and a handout about NIH expectations and policies on data management. The session will help core directors and administrators of all scientific disciplines understand the basics (and challenges) of managing data in a research environment. Our hope is to demonstrate best practices that work well for our institutions and to help others overcome some of the many challenges we all face in managing data.
  • The Road NOT Taken – A MoFlo Astrios Sort Logic Path to Discovery
    Lora Barsky Beckman Coulter
    It is widely known that cell sorters provide rapid quantification and cell purification by incorporating multi-parametric approaches using fluorescent dyes and labels. Contemporary researchers incorporate transgenic fluorescent proteins for identification and use cell sorting as a pass through technology to capture cells for downstream studies. We’d all agree that knowing how to identify the target and learning how the cell sorter prioritizes cell capture is critical to a researcher’s success yet empirical testing of how sort logic impacts outcome is rarely a specific aim. As such, many investigators continue with the status quo when at the flow core – they use the same sorter and sort conditions – the stress of getting to a triplicate is far too great to spur exploration. In this presentation, I will share a story of how an investigator improved his RNAseq data by utilizing MoFlo Astrios sort modes and gate logic.
  • 3D Printing for the Core
    C. Jeff Morgan University Of Georgia
    Three-Dimensional (3D) printing is a disruptive technology that puts the prototyping and manufacturing process directly into the hands of the inventor. Rapid prototyping and the additive manufacturing process promise to reshape the laboratory and shift the learning paradigm in the classroom. We will explore the basic concepts of 3D printing, rapid prototyping and 3D replication, as a core asset. We will also discuss specialized materials, where to find resources, and innovations in the field of 3D printing. This workshop will be in visual presentation format with a step-by-step walkthrough of the 3D printing process, with demonstration.
  • Network-based prioritization of de novo mutations in Autism and Epilepsy
    Kathleen Fisch UC San Diego Center For Computational Biology & Bioinformatics
    Objective: Up to 30% of patients with autism have epilepsy. The genetic basis of autism and epilepsy has not been well characterized. We analyzed data from whole exome sequencing (WES) on individuals with autism and epilepsy to prioritize pathogenic rare de novo variants. Methods: Detailed clinical and demographic data of over 90 individuals with autism who subsequently developed epilepsy were collected. Whole exome sequencing was performed to identify rare de novo variants. We developed a model to predict genes at the interface of autism and epilepsy and performed network analysis of signaling pathways to identify variants for validation by Sanger sequencing. Results: Rare de novo variants were identified in 73 of 92 cases (79%). Our model predicted 9 genes, termed “hot genes”, at the interface of both disorders (PAK1, SCN5A, NTRK1, GRIN2B, HDAC4, GRIN2A, MYH2, NRXN1). Network analysis identified the MAP kinase and calcium signaling pathways as being over-represented in our cohort. Sanger sequencing of selected de novo variants in 23 cases was performed with a 98% validation rate. The most commonly mutated gene in our cohort was GRIN2A which was found in 3 individuals with generalized epilepsy. We identified likely pathogenic de novo variants in several genes not previously implicated in autism or epilepsy including hot genes MYH2, and PAK1 as well as MAPK/Ca signaling genes HSPA1B, FGFR1, PPP3CA and SLC8A1. The MAPK pathway was associated with severe intellectual disability in our cohort. Interpretation: Rare de novo variants likely contribute to the development of autism and epilepsy. A significant portion of our cohort had a positive family history of autism, epilepsy, psychiatric disease or autoimmune disorders suggesting the genetics of autism and epilepsy are complex with contributions from common and rare inherited variants as well as disease causing de novo variants.
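    In spirit, the network-based prioritization step can be reduced to a small ranking rule like the toy Python sketch below; the seed genes, interaction edges, and labels here are placeholders for illustration, not the study's actual model or network.

        seed_genes = {"GRIN2A", "GRIN2B", "SCN5A"}                 # placeholder seed set
        interactions = {("GRIN2A", "DLG4"), ("PAK1", "LIMK1")}     # placeholder edges

        def prioritize(variant_genes):
            """Flag genes that are seeds or that directly interact with a seed."""
            neighbors = {g for edge in interactions for g in edge
                         if seed_genes & set(edge)}
            ranked = []
            for gene in variant_genes:
                if gene in seed_genes:
                    ranked.append((gene, "seed gene"))
                elif gene in neighbors:
                    ranked.append((gene, "interacts with a seed gene"))
            return ranked

        print(prioritize(["DLG4", "MYH2", "GRIN2B"]))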
  • Precision Immunology Through Deeper Single Cell Profiling
    Pratip Chattopadhyay NIH
    Three trends have dominated biomedical research over the last decade. The first, the NIH Roadmap’s Single Cell Analysis Program, was founded on the principle that cells are extremely heterogeneous, and that this heterogeneity is important in health and disease. For this reason, cells must be characterized individually, rather than by insensitive and misleading analysis of bulk cell populations. This trend renewed appreciation for cellular heterogeneity, and incited a revolution of new technologies that could comprehensively analyze single cells (the second trend, deep profiling). Finally, a third biomedical research trend was sparked by President Obama’s Precision Medicine Initiative, which aims to define genomic and proteomic differences between patient groups, and use this information to inform treatment decisions. In this talk, I will discuss my work at the intersection of these three trends, and demonstrate the value of new technologies for comprehensive and complete cellular analysis. I will provide examples of how deep knowledge about immune responses can be attained, using examples drawn from our recent work in HIV vaccine settings, immunotherapy, and fundamental immunology. This talk will highlight our work developing 30-parameter flow cytometry, single-cell RNA sequencing, and new bioinformatic tools, and will include some discussion of how microfluidics and nanotechnologies can fit into a pipeline that includes the above technologies.
  • A method for in situ multiplexed targeted protein profiling of circulating tumor cells
    James Hicks University Of Southern California
    Cancer evolves in the patient from initiation to widespread metastatic disease through a series of changes driven by both natural progression and response to treatment pressure. This spatio-temporal evolution, while influenced by many factors, has to be characterized at the genomic and proteomic levels to develop and implement patient-specific treatment approaches. While DNA- and RNA-based approaches are in routine research use in single cell biology today, there is a need for protein analysis of tumor cells in patients’ blood. The described method is an extension of the HD-SCA (high definition single cell analysis) assay liquid biopsy approach, in which rare circulating tumor cells (CTCs) are identified using 4-color fluorescence staining and further characterized by Imaging Mass Cytometry (IMC-CyTOF, Fluidigm Corp.), enabling simultaneous targeted proteomic analysis of a panel of up to 38 markers. A panel of cancer- and blood-cell-relevant protein markers and corresponding antibodies has been identified. Each antibody has been conjugated to a lanthanide metal isotope required for IMC-CyTOF analysis and evaluated using various cell lines spiked into leukocytes from normal blood and plated on glass slides. All spiked cells were identified with the HD-SCA assay, and a subset was further stained, relocated, and analyzed with IMC to demonstrate the feasibility of this approach for multiplexed protein profiling of CTCs at the single-cell level. Surrounding leukocytes are used both as an internal quality control and for data normalization purposes, enabling reproducible scoring of markers against background levels. We believe that the described method will allow the fluid biopsy to take an important leap towards personalized medicine in clinical practice.
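    One simple way to express such background-referenced scoring is a z-score against the surrounding leukocytes, as in the Python sketch below; the scoring scheme and the numbers are illustrative assumptions, not the published HD-SCA/IMC normalization.

        from statistics import mean, stdev

        def score_against_background(ctc_signal, leukocyte_signals):
            """Score a candidate CTC's marker intensity relative to nearby leukocytes."""
            mu, sigma = mean(leukocyte_signals), stdev(leukocyte_signals)
            return (ctc_signal - mu) / sigma if sigma else float("inf")

        leukocytes = [12.0, 15.0, 11.0, 14.0, 13.0]   # background marker intensities
        print(round(score_against_background(48.0, leukocytes), 1))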

Core Administration

  • Core Operations Reporting: Generating Operational Tools for Specific Stakeholder Groups
    Jay Fox University Of Virginia School Of Medicine
    There is a critical need to understand core operations to ensure successful and efficient delivery of value to stakeholders. Different stakeholders, such as staff, core directors, departmental chairs, and deans, have different perspectives on core operations and different interpretations of how those operations impact their individual domains. Most cores have access to ample data that, when harvested and presented appropriately, offer a clear assessment of core operations tailored to the specific needs and viewpoints of various stakeholder groups. At the University of Virginia Office of Research Core Operations (ORCA), we have compiled a number of specific reporting tools tailored to the needs of core staff, clients, and those with institutional oversight of core operations. In this presentation we will present examples of these reporting tools, describe how they are compiled, and explain how the information in the tools is of value to specific stakeholders.
  • Communicating Science to the Public: Break on through to the other side
    Hudson Freeze FASEB And SBP Medical Discovery Institute
    That important topic and The Doors’ rhythmic answer may not be so far-fetched. Practicing is not simple; it’s time-consuming, plus there’s a foreign language requirement. A few perspectives will be offered, but solutions will come from your own efforts outside the laboratory.
  • Rigor and Reproducibility
    Amy Wilkerson, Session Organizer, The Rockefeller University
    Independent verification of research results is essential to scientific progress and to public trust in the scientific method. Growing concern over increasing reports of the lack of transparency and reproducibility in biomedical research has led to increased scrutiny of scientific publications and a general call for development of best practices for generating and reporting research findings. Specific examples of how the science community is responding include the NIH guidelines for addressing rigor and reproducibility in grant applications and progress reports, the consensus “Principles and Guidelines for Reporting Preclinical Research,” and efforts to further strengthen quality controls and data reporting and retention practices by core facilities. During this session, experts in sponsored program administration, scientific publishing, and core facility management will provide information about how each sector is addressing the call for increased rigor and reproducibility in biomedical science. In addition, each presenter will provide suggestions on how core facilities can support researchers in addressing new funding agency requirements and in enhancing rigor and reproducibility overall.
  • Management for Scientists, Part 2: Core Business Problem Solving
    Robert Carnahan Vanderbilt University
    How will Kurt handle his BIG core getting BIGGeR? That is the focus of this hands-on problem-solving workshop. Come prepared to work! This session will begin with an overview of problem-solving strategies for organizations. Attendees will then be divided into teams and be guided through a case study that incorporates the types of pressures and decisions that commonly face core facilities. The goal is to use this exercise to develop and refine systematic approaches that can be applied to nearly any problem.
  • Combining Tools to Enhance Scientific Research
    Cheryl Kim La Jolla Institute For Allergy And Immunology (LJI)
    Core facilities at the La Jolla Institute (LJI) provide scientists with powerful technologies necessary to understand more about the cells of the immune system and their function in a wide range of diseases. While each technique has its own advantages, combining these tools can provide comprehensive details about the immune system as a whole. In this brief talk, we will present a few different projects involving multiple core facilities and techniques, including sequencing, microscopy, and cytometry, to enhance scientific research. In the first project, we will discuss the methods used to improve cell sorting of rare immune cell populations, such as antigen-specific T cells, and subsequent global gene expression analysis by RNAseq. In another project, we will discuss deeper profiling and characterization of immune cell subsets by combining imaging, cytometry, and transcriptomics.
  • Research Development & Core Facilities Infrastructure
    Karin Scarpinato Florida Atlantic University
    Core facilities are laboratories in academic institutions that provide state-of-the-art instrumentation and world-class technical expertise whose costs are shared by researchers on a fee-for-service basis and/or supported by the institution. As such, core facilities enable researchers to access services and instrumentation that would otherwise be too expensive to have in their own labs. To that end, core facilities extend the scope of research programs and accelerate scientific discoveries. Historically, core facilities have not taken a key role in research development (RD). However, over time, academic institutions and federal funding agencies have recognized the importance of shared resources and are re-evaluating best practices for operations and efficiency. In the systematic evaluation of core facility operations and infrastructure, many principles of research development are recognized, which suggests that RD offices can add value to the strategic development of individual core facilities and of the core facilities infrastructure network institutionally, regionally, and nationally. Principles that govern core facilities and that can be enhanced by RD include, but are not limited to, strategic planning, interdisciplinary team building, grant writing, seed funding programs, and limited submissions management. This panel discussion will provide examples of how RD principles apply to the development of core facilities infrastructure, and how RD offices can assist in improving operations of such facilities. These examples will be followed by an open discussion and exchange of ideas with the audience. By supporting core facilities, RD offices expand their impact in promoting research beyond single-investigator research teams. In turn, these services enhance the office’s outputs: new initiatives, collaborations, grant applications, and outreach.
  • Cross-core collaborations: building bridges between resources
    Matthew Cochran University Of Rochester
    Broad-ranging research projects often require input and collaboration from multiple Shared Resource laboratories. While these cross-over projects occasionally proceed without the knowledge of the core personnel involved, it should be a goal of shared resources to promote these collaborative efforts in order to provide the best advice and service possible. In this workshop we are going to discuss cross-core collaborations in three different ways. First, we’ll discuss the administrative perspective on promoting collaborations and encouraging interaction between cores. Second, from the core director’s perspective, we’ll discuss potential issues and ideas to improve workflow between labs. Lastly, the implementation of the Polaris from Fluidigm in a shared resource will be discussed as an example of new instrumentation meant to bridge the gap between core technologies.
  • A Balanced Approach to Return on Investment (ROI) for Research Core Facilities
    Justine Karungi, MBA, FACHE Hoglund Brain Imaging Center, University Of Kansas Medical Center
    In these times of economic constraint and increasing research costs, shared resource cores have become a cost-effective and essential platform for researchers who seek to investigate complex translational research questions. Cores produce significant value that cannot be captured using traditional financial metrics. Benchmarking studies conducted by ABRF and other organizations indicate that most research cores do not fully recover operating expenses. As such, these “operational losses” represent institutional investment which, if well planned and managed, produces future returns for the institution’s research community that extend far beyond subsidized pricing. Current literature indicates that no single measure can provide an accurate representation of the full picture of the returns on research investments. This presentation provides instruction and examples using the Balanced Scorecard (BSC) of Kaplan and Norton as a tool for assessing the return on investment (ROI) of research core facilities. The BSC supplements traditional financial measures with criteria to measure performance in three additional areas: customers, internal business processes, and learning and growth. The presenters will also discuss and share their experiences and good practices on how they have utilized these ROI approaches to streamline their core operations and make sound investment decisions and strategies to further the mission of their institutions and to meet the expectations of their various investors and key stakeholders.
  • Informatics Core: Adding value to Data Produced in Core Facilities
    Claudia Neuhauser University Of Minnesota
    In 2014, the University of Minnesota Informatics Institute (UMII) was founded to foster and accelerate data-intensive research across the University system in agriculture, arts, design, engineering, environment, health, humanities, and social sciences through informatics services, competitive grants, and consultation. From listening to researchers across the University and in design thinking workshops, it became clear that data management and analysis are increasingly becoming a major bottleneck in research labs. Core facilities in genomics, proteomics, and imaging are producing large amounts of data that are usually delivered to users without much prior processing. With bioinformatics being a fast-moving field, it is increasingly difficult for labs to stay on top of the latest analysis tools. Furthermore, quality control of data and basic analysis need to become more standardized. To aid researchers in the management and analysis of their data, UMII hired analysts to run workflows and a data wrangler to help with data management. Over the past year, UMII became part of Research Computing, which includes the Minnesota Supercomputing Institute and U-Spatial. These three institutes provide a full range of services that transform raw data into usable products through standardized workflows and research collaborations.
  • Realizing Commercial Value from Product Opportunities that Arise from Academic Research: An Introduction to the SBIR/STTR Programs
    Ron Orlando UGA And GlycoScientific
    An objective of basic scientific research is the acquisition of knowledge, typically without the obligation to apply it to practical ends. Often during this endeavor, scientists will make a discovery with inherent commercial worth. The value of this work is difficult to ascertain since the technology may be unvalidated, may lack intellectual property protection, and may have unknown market potential. These and other vulnerabilities often cause the technology to be undervalued and make it too risky to attract funding from typical early-stage investment sources, particularly angel investors and venture capitalists. Hence, the potential value is often not realized. The US government has established two grant/contract programs that can be used to bridge this gap between academic concept and commercial product. These are the Small Business Innovation Research (SBIR) and the Small Business Technology Transfer (STTR) programs, which provide over $3 billion in funding annually. These initiatives are intended to help small businesses conduct research and development and aim to increase private-sector commercialization of innovations derived from Federal research expenditures. NIH also provides market insight, through the Niche Assessment Program, that can be used to help small businesses strategically position their technology in the marketplace. These and other programs are put in place to invigorate small business growth. This presentation will discuss how to leverage these opportunities to commercialize academic research. Particular attention will be paid to how to obtain capital from these funding mechanisms, how SBIR and STTR programs compare, and how to decide which program to utilize at each stage of company and idea development. The presenter will draw on his experience to provide examples of how SBIR/STTR funding can be used to create commercial hardware, software, and research reagents. This presentation will also describe a strategy to extract value from product opportunities that arose from academic studies.
  • Bioinformatics of TMT Multiplex Data, UNR-Style
    Karen Schlauch University Of Nevada Reno
    Recent advances in mass spectrometry using the Thermo Scientific Orbitrap Fusion Tribrid Mass Spectrometer allow for the identification and multiplex quantification of many proteins across larger and larger sample sizes. Managing variation across sample protein abundances is not (yet) as standard as with microarrays or RNA-seq platforms. The Nevada INBRE Bioinformatics Core at the University of Nevada, Reno offers several specific methods of analyzing TMT multiplex data run on the Orbitrap Fusion and extracted using Proteome Discoverer at our adjoining Nevada Proteomics Core. A pilot study run in our Cores measured the biological variability in protein abundance of 3,000 identified proteins across ten human tissue samples; our Core uses these measures as baseline variability for quality control in human tissue experiments. To further manage intra-cohort variability of protein abundances, our Core developed a technique to identify outliers per protein and per cohort that controls variation without excluding data from entire proteins or samples. We show that our method notably decreases the coefficient of variation of abundance measures (by 2-fold or more) within cohorts and thus leads to more statistically powerful hypothesis tests across cohorts. Our Core performs statistically sound hypothesis tests on these processed data to identify differentially expressed proteins or peptides at either the protein or the peptide level; tests are specifically chosen based on the data distribution and the experimental hypotheses. For studies involving post-translational modifications, statistical tests are applied only to unique modified peptides of each post-translationally modified protein, to take into consideration that abundances of modified peptides make up only a portion of each modified protein’s abundance (sometimes as low as < 10%). We show that our technique offers stronger statistical tests across cohorts for post-translational modifications. To conclude, our Bioinformatics and Proteomics Cores offer novel and specific techniques for TMT data from one or many multiplex experiments.
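    To illustrate the general idea of per-protein, per-cohort outlier screening and its effect on the coefficient of variation (CV), here is a toy Python sketch using a median-absolute-deviation rule; it is not the Core's actual algorithm, and the abundances are fabricated.

        import statistics

        def cv(values):
            """Coefficient of variation: standard deviation over mean."""
            return statistics.stdev(values) / statistics.mean(values)

        def drop_outliers(values, n_mad=3.0):
            """Remove points more than n_mad median absolute deviations from the median."""
            med = statistics.median(values)
            mad = statistics.median(abs(v - med) for v in values) or 1e-9
            return [v for v in values if abs(v - med) / mad <= n_mad]

        cohort = [102.0, 98.0, 105.0, 99.0, 310.0]   # one protein in one cohort, with an outlier
        print(round(cv(cohort), 2), round(cv(drop_outliers(cohort)), 2))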

Other

  • ABRF-MRG2016 Metabolomics Research Group Data Analysis Study
    Amrita Cheema Georgetown University
    Metabolomics is an evolving field. One of the major bottlenecks in the field is the varied application of bioinformatics and statistical approaches for pre- and post-processing of global metabolomic profiling data sets collected using high resolution mass spectrometry platforms. Several publications now recognize that data analysis outcome variability is caused by different data treatment approaches. Yet, there is a lack of inter-laboratory reproducibility studies that have examined the contribution of data analysis techniques to the variability/overlap of results. Thus, our study design recapitulates a typical metabolomics experiment in which the goal is detection of feature differences between two groups. The goal of the MRG 2016 study is to identify the contribution of data pre- and post-processing methodologies to the outcome. Specifically, for this study we used urine samples (a commonly used matrix for metabolomics-based biomarker studies) from mice exposed to 5 Gray of external beam gamma rays and from mice exposed to sham irradiation (control group). The data files were made available to study participants for comparative analysis using the bioinformatics and/or biostatistics approaches commonly used in their laboratories. The participants were asked to report back the top 50 metabolites contributing significantly to the group differences. We have received several responses, and the findings from the study are being consolidated for the MRG presentation at the ABRF 2017 meeting.
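    One common shape such a participant analysis takes is ranking features by a two-group test statistic and keeping the top 50, as in the toy Python sketch below; the feature names and intensities are fabricated, and the statistic chosen here is only one of many reasonable options.

        from math import sqrt
        from statistics import mean, stdev

        def welch_t(a, b):
            """Welch's t statistic for two independent groups."""
            va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
            return (mean(a) - mean(b)) / sqrt(va + vb)

        features = {   # feature -> (irradiated intensities, control intensities)
            "feature_001": ([5.1, 5.3, 5.0], [2.0, 2.2, 1.9]),
            "feature_002": ([3.0, 3.1, 2.9], [3.0, 3.2, 2.8]),
        }
        ranked = sorted(features, key=lambda f: abs(welch_t(*features[f])), reverse=True)
        print(ranked[:50])   # the study asked for the top 50 discriminating features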
  • Untargeted Global Lipidomics in the Systems Biology Tri-ome Era
    John Asara Beth Israel Deaconess Medical Center / Harvard Medical School
    Global profiling of lipids has emerged as a go-to -omics technology in recent years as high-resolution instrumentation and software improve. Systems biology approaches to understanding the workings of a cell or tumor as it becomes dysregulated by diseases such as cancer are becoming possible with advancements in proteomics, polar metabolomics, and non-polar lipidomics. I will focus primarily on our untargeted lipidomics platform, which uses a data-dependent acquisition (DDA) strategy with positive/negative polarity switching on a Q Exactive HF Orbitrap and commercial software for identifying a broad range of lipids and performing alignment and relative quantification (LipidSearch and Elements) (Breitkopf et al., Metabolomics, 2017). The platform is based on MS/MS fragmentation spectra for verifying lipid identifications from a 30 min high-throughput RP-LC-MS/MS experiment. We will demonstrate the lipidomics platform and its place in systems biology using a serial-omics liquid-liquid extraction from a single cell or tumor sample in breast cancer, multiple myeloma, or urine samples, whereby the other extraction partitions can be used for other -omics technologies.
  • Bridging Topic III: Management for Scientists, Part 1: Core Business & Graduate Education Partnership
    Robert Carnahan Vanderbilt University
    Management for Scientists is a unique educational collaboration between core facilities and the Vanderbilt University career development office for biomedical trainees. This twelve-week course targets both trainees and core personnel, providing a distillation of the tools and skills needed to be successful in business, whether your business is managing a core lab, directing a startup, applying for grants, or all of the above. After a didactic learning phase, teams of trainees with a core facility leader work on defining a solution to an actual business-related problem currently being experienced in the core lab. This session will feature both a trainee and a core lab director participant discussing the impact of the course on their personal development and on the functioning of the host facility. It will also present lessons learned on building and sustaining an educational partnership involving core labs.
  • The ABRF-NRMN Partnership: Facilitating Professional Mentorship
    Philip Hockberger Northwestern University
    This presentation will describe the launch of a new, exciting partnership between the ABRF Career Development Committee and the National Research Mentoring Network (NRMN), a nationwide consortium of biomedical professionals and institutions collaborating to provide researchers across the biomedical, behavioral, clinical and social sciences with evidence-based mentorship and professional development programming. The NRMN is an NIH-funded organization that emphasizes the benefits and challenges of diversity, inclusivity and culture within mentoring relationships, and more broadly within the professional research community. Hockberger’s presentation will provide an overview of the partnership, the program, and how ABRF members can take advantage of this long-awaited opportunity.

Platinum Sponsors of the ABRF