An International Symposium of the Association of Biomolecular Resource Facilities

Oral Presentation Abstracts

  • Straight up, with a Twist: Innovative Enzyme Cocktail to Improve DNA extractions of Metagenomic Samples
    Tara Rock, New York University
    The ABRF Metagenomics Research Group (MGRG) strives to improve upon and advance metagenomics methodologies. To improve DNA extractions, the MGRG, in partnership with Millipore-Sigma, developed an enzyme cocktail containing six hydrolytic enzymes. The aim is to improve and increase the yield of extracted DNA using this simple pre-treatment ahead of any downstream extraction kit of choice.
  • The hidden life of protein zombies and their role in aging
    Martin Hetzer, The Salk Institute For Biological Studies
    Age is the major risk factor for the development of neurodegenerative diseases such as Alzheimer’s disease (AD). Currently, AD alone impacts the lives of approximately 5 million Americans and their families. The disorder imposes an immense emotional burden on family members and caretakers. Compounding the problem is the reality that the number of patients will increase more than two-fold in the next 30 years and impose a financial cost of more than $1 trillion per year. One can hardly imagine the negative consequences for the wellbeing of our economies, our families and the future of mankind. The only proper response to this formidable challenge is to combat it through efforts that extend from the care of individual patients to the discovery of effective therapeutics to treat, and ideally, prevent it. We discovered a class of extremely long-lived proteins (LLPs) in the adult brain that functionally decline during aging. We speculate that biochemical changes and subsequent deterioration of LLPs may be responsible for the age-related impairment of cognitive performance and the onset/progression of neurodegenerative disorders such as AD. Proposed experiments will allow us to decipher the mechanisms underlying the functional integrity of LLPs and determine how they relate to pathologies in the brain.
  • Adult and Fetal Globin Transcript Removal for mRNA Sequencing Projects
    Piotr Mieczkowski, Department Of Genetics, Lineberger Comprehensive Cancer Center, SOM, University Of North Carolina At Chapel Hill
    Preterm birth (PTB) is delivery prior to 37 completed weeks of gestation, occurring after spontaneous labor or by medical induction. Our ultimate goal is to perform mRNA-seq on umbilical cord blood and placentas obtained from pregnant women who deliver preterm and from matched controls who deliver at term. However, cord blood total RNA requires a globin depletion step prior to RNA sequencing: globin mRNA contributes little informative sequence, yet it can account for as much as 70% of the mRNA in a blood total RNA sample, so it must be removed. Removing globin mRNA and rRNA from a blood RNA sample enables deeper sequencing for discovery of rare transcripts and splice variants and reduces the number of expensive sequencing reads wasted on uninformative transcripts. In this presentation we demonstrate an optimized NuGEN globin-reduction protocol for mRNA sequencing of adult and fetal samples.
  • Applications enabled by 600 base reads on the Ion S5™ System
    Madison Taylor, Thermo Fisher Scientific
    Longer read lengths simplify genome assembly, haplotyping, metagenomics, and the design of library primers for targeted resequencing. Several new technologies were developed to enable the sequencing of templates with inserts over 600 bases: a fast isothermal templating technology, an ISP™ optimized for maximum template density, a new long-read sequencing polymerase, and instrument scripts that consume fewer reagents. We demonstrate the combination of these technologies to sequence 600-base inserts on an Ion S5 System and illustrate the applications enabled by these longer reads.
  • CTO
    Mostafa Ronaghi , Illumina
    Recent advancements in genomic technologies are changing the scientific horizon, dramatically accelerating biomedical research. For wide implementation of these technologies, their accuracy, throughput, cost, and workflow need to be addressed. In the past ten years, the cost of full human genome sequencing has been reduced by 4-5 orders of magnitude, and that reduction will continue by another 10-fold in the next few years. This year we introduced new tools that will enable large-scale biological studies at lower cost. In this talk, we discuss how these tools will accelerate the next wave of biological research.
  • The NCI Research Specialist Award (R50)
    Christine Siemon, National Cancer Institute, NIH
    The Research Specialist Award (R50) is a new NCI funding mechanism designed to encourage the development of stable research career opportunities for exceptional scientists who want to continue to pursue research within the context of an existing NCI-funded basic, translational, clinical or population science cancer research program or core, but who do not serve as independent investigators. The award is intended to provide salary support and sufficient autonomy so that individuals are not solely dependent on NCI grants held by others for career continuity.
  • The QUANTOM Tx™ Microbial Cell Counter, An Automated Rapid Single Cell Counter for Bacterial Cells
    John Kim, Logos Biosystems, Inc.
    Accurately counting microbes in a sample is essential in many fields, including the food industry, water treatment plants, research labs, and clinical labs. There are several bacteria counting methods, such as hemocytometers, spectrophotometry, flow cytometry, and colony counting, and each has its pros and cons. For example, using a hemocytometer and a microscope is an economical way to count bacterial cells, but it is tedious and prone to user subjectivity. Counting colony forming units measures live cells, but it requires hours of incubation and effort to count the colonies. Here we introduce a new image-based automated microbial cell counter, the QUANTOM Tx™, that accurately and rapidly counts bacterial cells at single-cell resolution, generating an accurate bacterial count within 15 minutes. The workflow is as follows: bacterial cells are mixed with the QUANTOM™ Total Cell Staining Dye, a green fluorescent nucleic acid dye that stains both live and dead bacterial cells; cell loading buffer is added to the mixture, which is then loaded into the QUANTOM™ M50 Cell Counting Slide and centrifuged to immobilize and evenly distribute the cells throughout the counting chamber; and the slide is inserted into the QUANTOM Tx™ for imaging. The QUANTOM Tx™ captures up to 20 high-resolution images and automatically counts the cells in each. Its software can distinguish individual cells in various arrangements, such as tight clusters or chains, to produce accurate and reliable total bacterial cell counts. The QUANTOM Tx™ can therefore be a useful tool for researchers who routinely count bacterial cells, saving time while delivering accurate and reliable results.
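    For illustration of the underlying counting arithmetic, a minimal sketch of how per-image counts could be converted into a concentration; the imaged volume per field and the dilution factor are placeholder assumptions rather than QUANTOM Tx™ specifications.

    ```python
    # Hypothetical illustration of converting per-image cell counts into a
    # concentration estimate; volume_per_image_ul and dilution_factor are
    # assumptions, not QUANTOM Tx specifications.

    def cells_per_ml(counts_per_image, volume_per_image_ul, dilution_factor=1.0):
        """Estimate cell concentration (cells/mL) from automated image counts."""
        total_cells = sum(counts_per_image)
        total_volume_ul = volume_per_image_ul * len(counts_per_image)
        if total_volume_ul == 0:
            raise ValueError("No images analyzed")
        return total_cells / total_volume_ul * 1e3 * dilution_factor  # 1 mL = 1000 uL

    # Example: counts from 20 fields, assumed 0.05 uL imaged per field, 10x dilution
    counts = [180, 175, 190, 168, 184, 172, 181, 177, 169, 188,
              176, 183, 171, 179, 186, 174, 182, 170, 185, 178]
    print(f"{cells_per_ml(counts, volume_per_image_ul=0.05, dilution_factor=10):.3e} cells/mL")
    ```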

Genomics

  • Complementary approaches to profiling nascent protein synthesis in peripheral neurons in vivo and in vitro
    Zachary Campbell, UT-Dallas Genome Center And The Dept. Of Biological Sciences
    Translational control is a dominant theme in neuronal plasticity. Messenger RNA (mRNA) is subject to dynamic regulation by multi-protein regulatory complexes. These large assemblies enable signal-dependent control of protein synthesis. Tremendous progress has been made on the proximal signaling events that control translation regulation in the nervous system leading to neuronal plasticity (e.g. learning and memory, LTP/LTD, neurodevelopmental disorders, and many forms of chronic pain). However, astonishingly little is known about the downstream mRNA targets that are translated to produce the new proteins that mediate this plasticity. We are establishing a novel resource that comprehensively captures nascent protein synthesis levels in sensory neurons called nociceptors using next-generation sequencing to profile translation. Chronic pain is characterized by persistent plasticity in nociceptors and is a devastating condition with a lifetime incidence greater than 33%. Poorly managed pain creates an enormous burden on our healthcare system and produces tremendous human suffering. This resource provides insight into how pain evoking stimuli trigger dynamic alterations in the landscape of protein synthesis thereby facilitating nociceptor plasticity. The data have clear implications for improved pain treatment and will serve as a paradigm for understanding neuronal plasticity in other areas of neuroscience.
  • Cross-Site Comparison of Ribosomal Depletion Kits for Illumina RNAseq Library Construction
    Stuart Levine, MIT
    Ribosomal RNA (rRNA) comprises at least 90% of total RNA extracted from mammalian tissue or cell line samples. Informative transcriptional profiling using massively parallel RNA sequencing technologies requires either enrichment of mature poly-adenylated transcripts or targeted depletion of the rRNA fraction. The latter method is of particular interest because it is compatible with degraded samples such as those extracted from FFPE, and it also captures transcripts that are not poly-adenylated such as some non-coding RNAs. Here we provide a cross-site study that evaluates the performance of ribosomal RNA removal kits from Illumina, Takara/Clontech, Kapa Biosystems, Lexogen, New England Biolabs and Qiagen on intact and degraded RNA samples. We find that all of the kits are capable of performing significant ribosomal depletion, though there are large differences in their ease of use. Most kits perform well on both intact and degraded samples and all identify ~14,000 protein-coding genes from the Universal Human Reference RNA sample at >1 FPKM, though the fraction of reads that are protein coding or in annotated lncRNAs varies between the different methodologies. These results provide a roadmap for labs on the strengths of each of these methods and how best to utilize them.
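    To make the gene-level comparison concrete, a minimal sketch of how protein-coding genes above an FPKM threshold could be tallied per kit from a quantification table; the file layout and column names are assumptions for illustration, not the study's actual pipeline.

    ```python
    # Hypothetical sketch: count protein-coding genes detected above an FPKM
    # threshold for each depletion kit. Column names ("gene_id", "gene_type",
    # per-kit FPKM columns) are assumptions about the input table.
    import csv
    from collections import defaultdict

    def expressed_gene_counts(path, kits, threshold=1.0):
        counts = defaultdict(int)
        with open(path, newline="") as handle:
            for row in csv.DictReader(handle, delimiter="\t"):
                if row["gene_type"] != "protein_coding":
                    continue
                for kit in kits:
                    if float(row[kit]) > threshold:
                        counts[kit] += 1
        return dict(counts)

    # Example usage (file and kit column names are placeholders):
    # print(expressed_gene_counts("uhr_fpkm.tsv", ["kit_A", "kit_B", "kit_C"]))
    ```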
  • An automated low-volume, high-throughput library prep for studying bacterial genomes
    Jon Penterman, MIT
    In-depth genomic studies of bacterial isolates from clinical or environmental settings can be prohibitively expensive. The most significant expense for such studies is the preparation of sequencing libraries. Sequencing libraries can be prepared by ligating adaptors to end-repaired DNA (many kits) or by tagmentation and PCR enrichment (Illumina NexteraXT). For large-scale studies, the NexteraXT kit is a popular choice because the two-step, single-tube protocol uses unprocessed gDNA as the input (versus fragmented gDNA for non-tagmentation protocols). Here we describe a high-throughput, low-volume NexteraXT protocol that significantly lowers library preparation costs. Central to this protocol is the Mosquito HTS robot, a small-volume liquid handler that aspirates and dispenses in 96- or 384-well formats. We miniaturized the NexteraXT reaction to 1/12th the normal scale on the Mosquito and took advantage of a preexisting Tecan Evo robot to completely automate the remaining parts of the library prep service (normalization of DNA input for prep, library normalization, pooling). The sample dropout rate for this protocol is low, and overall sequencing coverage of both control and experimental samples is similar to that seen in NexteraXT libraries prepped at the normal scale. By reducing reagent usage and labor input on a per-sample basis, we have made large bacterial gDNA sequencing projects more financially feasible.
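    A rough, back-of-the-envelope sketch of why miniaturization matters for per-sample reagent cost; the kit price and reaction counts below are placeholder assumptions, not actual pricing.

    ```python
    # Hypothetical cost comparison for full-scale vs miniaturized tagmentation
    # library prep. kit_price_usd and reactions_per_kit are placeholder values.

    def cost_per_sample(kit_price_usd, reactions_per_kit, scale_fraction=1.0):
        """Reagent cost per library when each reaction uses scale_fraction of a full reaction."""
        effective_reactions = reactions_per_kit / scale_fraction
        return kit_price_usd / effective_reactions

    full = cost_per_sample(kit_price_usd=5000, reactions_per_kit=96, scale_fraction=1.0)
    mini = cost_per_sample(kit_price_usd=5000, reactions_per_kit=96, scale_fraction=1 / 12)
    print(f"full scale: ${full:.2f}/sample, 1/12 scale: ${mini:.2f}/sample "
          f"({full / mini:.0f}x reagent saving)")
    ```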
  • Automating CRISPR mutation detection and zygosity determination
    Kyle Luttgeharm, Advanced Analytical Technologies
    While CRISPR gene editing is rapidly becoming more economical and efficient, protocols to identify CRISPR mutations and determine the zygosity of these mutations are time consuming and often involve costly sequencing steps. To overcome this limitation, many researchers are turning to heteroduplexing and enzymatic mismatch cleavage assays to rapidly screen for mutated lines. Despite this growing interest, few studies have optimized heteroduplex cleavage assays with respect to different mutations and lengths of PCR products. Additionally, sequencing has continued to be required for zygosity determination of individual diploid cell lines/organisms. Using the Advanced Analytical Technologies Fragment Analyzer™ Automated Capillary Electrophoresis System and synthetic genes that mimic different CRISPR mutations, we developed an optimized heteroduplex cleavage assay employing T7 Endonuclease I to detect a wide variety of common CRISPR mutations, including both insertions/deletions and single nucleotide polymorphisms. To decrease the number of individual cell lines sequenced, we developed statistical models that relate heteroduplex formation to the number of mutated alleles in individual diploid cell lines/organisms. This protocol allows for accurate prediction of monoallelic, diallelic homozygous, and diallelic heterozygous events. Having a single high-throughput protocol that results in cleavage of multiple types of mutations while also determining the number of mutated alleles allows for efficient screening of CRISPR mutant populations at a level not previously feasible.
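    The relationship between zygosity and heteroduplex signal can be illustrated with simple reannealing statistics (a generic model for illustration, not necessarily the statistical model developed in this work): after denaturation and random reannealing, the expected heteroduplex fraction is 1 - sum(p_i^2) over the allele frequencies p_i in the amplicon pool, and spiking in a wild-type reference amplicon separates genotypes that look identical without the spike.

    ```python
    # Generic illustration (not the authors' model): expected heteroduplex fraction
    # under random reannealing of PCR amplicons, with and without a 1:1 spike of a
    # wild-type reference amplicon.
    from collections import Counter

    def heteroduplex_fraction(alleles):
        """Expected heteroduplex fraction under random reannealing.

        alleles: list of allele labels in the amplicon pool (duplicates allowed).
        Returns 1 - sum(p_i^2), the chance two randomly paired strands differ.
        """
        counts = Counter(alleles)
        total = len(alleles)
        return 1.0 - sum((n / total) ** 2 for n in counts.values())

    genotypes = {
        "wild type":              ["WT", "WT"],
        "monoallelic":            ["WT", "mut1"],
        "diallelic heterozygous": ["mut1", "mut2"],
        "diallelic homozygous":   ["mut1", "mut1"],
    }
    for name, alleles in genotypes.items():
        native = heteroduplex_fraction(alleles)
        spiked = heteroduplex_fraction(alleles + ["WT"] * len(alleles))  # 1:1 WT spike
        print(f"{name:24s} native: {native:.2f}   + WT spike: {spiked:.2f}")
    ```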
  • Clinical Whole Genome Sequencing: Challenges, Opportunities and Insights from the First Thousand Genomes Sequenced
    Shawn Levy, HudsonAlpha Institute For Biotechnology
    Major technological and computing advancements have allowed routine generation of whole genome sequence data on hundreds of thousands of people over the last several years. As the ability to analyze and annotate genomes has improved, so has their clinical utility and impact, greatly enhancing the ability to discover and define the genetic causes of a wide variety of human phenotypes. In December of 2015 we launched a clinical laboratory focused on whole-genome sequencing, and we have used that infrastructure to sequence over 1,000 genomes for translational and clinical projects, delivering results back to patients and secondary findings to parents. During the course of these studies, we have learned a number of valuable lessons and have more appropriately calibrated our expectations and uses for whole genome sequencing. This presentation will highlight those successes and challenges and discuss the dynamic and powerful capabilities of genomics for routine clinical use as well as for the treatment of critically ill patients.
  • The eXtreme Microbiome Project “Down Under”: Metagenomic Adventures in the Southern Hemisphere.
    Ken McGrath, Australian Genome Research Facility
    The eXtreme Microbiome Project (XMP) is a global scientific collaboration to characterize, discover, and develop new pipelines and protocols for extremophiles and novel organisms from a range of extreme environments. Two of the recent study locations are in the southern hemisphere: Lake Hillier, a bright pink hypersaline lake located on the Recherche Archipelago in Western Australia; and Lake Fryxell, a permanently frozen freshwater lake located in the Dry Valleys of Antarctica. Sampling in such remote environments comes with inherent challenges for sample collection and preservation, ranging from shark bite to frostbite. Despite these challenges, the XMP team has recovered samples of these microbial communities, and have analysed the metagenomes using a range of sequencing platforms, revealing the composition of the microbial communities that inhabit these extreme environments of our planet. Our results identify an abundance of highly specialised organisms that thrive in these locations, as well as demonstrate the utility of third-generation sequencing platforms for in situ analysis of microbial communities.
  • Current Innovations for Metagenomics used in Antarctica
    Scott Tighe, University Of Vermont
    Novel advancements in genomics by the Metagenomics Research Group have made it possible to extract, isolate, and sequence DNA recovered from ancient microbial biofilms from buried paleomats in Antarctica. These techniques include high-performance DNA extraction protocols using the multi-enzyme cocktail branded as MetaPolyZyme combined with a new hybrid mag-bead DNA extraction kit. They allow the recovery of high-molecular-weight DNA suitable for Oxford Nanopore sequencing, both in the Crary Lab at McMurdo Station and in the field in Antarctica, as well as for high-resolution sequencing on the PacBio and Illumina systems.
  • Advancements in NGS sample preparation for low input and single cells
    Andrew Farmer, Takara Bio USA Inc
    Experimental approaches involving RNA-seq and DNA-seq have led to significant advancements in fields such as developmental biology and neuroscience, and are increasingly being applied towards the development of diagnostics and novel treatments for human disease. Our SMARTer NGS portfolio for DNA and RNA sequencing enables generation of libraries from single-cell, degraded, and other low-input sample types. Together our products cater not only to generating sequencing libraries from difficult-to-obtain samples, but also to all major applications (differential gene expression analysis, immune profiling, epigenomic profiling, target enrichment, mutation detection for low-frequency alleles, and copy number variation). SMARTer methods perform consistently across a range of sample types and experimental applications, and are capable of processing low-input sample amounts. In this talk, we will present recent developments to the DNA and RNA-seq portfolio.
  • MetaSUB: Metagenomics Across the World's Cities
    Ebrahim Afshinnekoo, Weill Cornell Medicine/New York Medical College
    The Metagenomics and Metadesign of the Subways and Urban Biomes (MetaSUB) International Consortium is a novel, interdisciplinary initiative made up of experts across many fields including genomics, data analysis, engineering, public health, and design. Just as temperature, air pressure, and wind currents are measured against standards and considered in the design of the built environment, the microbial ecosystem is just as dynamic and should also be integrated into the design of cities. By developing and testing standards for the field and optimizing methods for urban sample collection, DNA/RNA isolation, taxa characterization, and data visualization, the MetaSUB International Consortium is pioneering an unprecedented study of urban mass-transit systems and cities around the world. These data will benefit city planners, public health officials, and designers, and will also enable the discovery of new species, biological systems, and biosynthetic gene clusters, thus ushering in an era of more quantified, responsive, and “smarter” cities. During this talk we will share preliminary results from pilot studies carried out across the cities in the consortium, including taxa classification, functional analysis, and antimicrobial resistance markers.
  • Perspectives for Whole Cell Microbial Reference Materials
    J. Russ Carmical, Baylor College Of Medicine
    With the exception of the Microbiome Quality Control (MBQC), very little has been published on best practices and reference standards for microbiome and metagenomic studies. As evidenced by recent publication trends, researchers are moving the field toward commercial development at a rapid pace.  If these analyses are to ever evolve into reliable assays (e.g. for clinical diagnostics), the measurement process must be regularly assessed to ensure measurement quality.  A key aspect of this validation is the routine analysis of reference materials as positive controls. A reference material (RM) is any stable, abundant, and well characterized specimen that is used to assess the quantitative and/or qualitative validity of a measurement process. The focus will be on the use of whole cell microbial reference materials to characterize metagenomic analyses from start to finish. We’ll discuss 3 key categories (environmental samples, in vitro models for microbial ecosystems, pure microbial isolates) of reference standards and the challenges in characterizing those standards.
  • Precision Metagenomics: Rapid Metagenomic Analyses for Infectious Disease Diagnostics and Public Health Surveillance
    Ebrahim Afshinnekoo, Weill Cornell Medicine/New York Medical College
    Next-generation sequencing technologies have ushered in the era of Precision Medicine, transforming the way we diagnose and treat cancer patients. Subsequently, the advent of these technologies has created a surge of microbiome and metagenomics studies over the last decade, many of which are intent upon investigating the host-gene-microbial interactions responsible for the development of chronic disorders. As we continue to discover more information about the etiology of complex chronic diseases associated with the human microbiome, the translational potential of metagenomics methods for the treatment and rapid diagnosis of infectious diseases is also becoming abundantly clear. Here, we present a robust protocol for the utilization and implementation of “precision metagenomics” across various platforms on clinical samples. Such a pipeline integrates DNA/RNA extraction, library preparation, sequencing, and bioinformatics analysis for taxa classification, antimicrobial resistance marker screening, and functional analysis. Moreover, the pipeline is built towards three tracks: STAT for rapid 24-hour processing as well as Comprehensive and Targeted tracks that take 5-7 days for less urgent samples. We present some pilot data demonstrating the applicability of these methods on cultured isolates and finally, we discuss the challenges that need to be addressed for its full integration in the clinical setting.
  • Genomics Research Group 2016-17 study: A multiplatform evaluation of single-cell RNA-seq methods.
    Sridar Chittur, University At Albany, SUNY
    The Genomics Research Group (GRG) presentation will describe the current activities of the group in applying the latest tools and technologies for single cell transcriptome analysis to determine the advantages and disadvantages of each of the platforms. This project involves the comparison of gene expression profiles of individual SUM149PT cells treated with the histone deacetylase inhibitor TSA vs. untreated controls. The goals of this project are to demonstrate RNA sequencing (RNA-seq) methods for profiling the ultra-low amounts of RNA present in individual cells, and to demonstrate the use of the various systems for cell capture and RNA amplification including Fluidigm, Wafergen, fluorescence activated cell sorting (FACS), 10x Genomics, and Illumina’s joint venture with BioRad on the ddSEQ platform. In this session, we will discuss the technical challenges, results from each of these projects, and some key experimental considerations that will help leverage optimal results from each of these technologies.
  • 3D-Printed Continuous Flow PCR Millifluidic Device for Field Monitoring of Bacterial DNA
    Elizabeth Henaff, Weill Cornell Medical College
    Environmental metagenomics – measuring bacterial species and gene content from environmental samples – is relevant in many contexts, including agriculture, land stewardship, and disease outbreak monitoring. Indeed, soil bacteria have been shown to influence crop outcome, the response to harmful algal blooms is largely dependent on response time, and pathogen mapping is relevant on the urban scale. While high-throughput sequencing provides an in-depth view of these phenomena, often the detection of indicator species or genes can be sufficient to engage in first response. Here we describe an inexpensive, scalable, and easy-to-use 3D-printed device for specific detection of DNA markers using continuous flow PCR. It can be used for detection of microbial species or functional plasmids, and is robust enough to be used by non-scientists. Citizen science can enable data collection at an unprecedented scale, and engaging citizens to monitor their environment has a number of positive impacts on the public perception of human and environmental safety. Indeed, recruiting non-scientists for sample collection has enabled projects such as Ocean Sampling Day (OSD; https://www.microb3.eu/myosd/how-join-myosd), the PathoMap study (http://www.pathomap.org), and the MetaSUB consortium (http://metasub.org), which have engaged large numbers of student volunteers across the world. The sense of agency derived from the ability to personally monitor one’s environment will enable data-driven discussions around the microbial milieu and help alleviate people’s concerns around the enforcement of best practices. The hardware plans (3D print files and circuit designs) will be made available through the Open Source Hardware Association (http://www.oshwa.org/) so that they may be replicated and implemented freely.
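    For readers unfamiliar with continuous-flow PCR, cycling time is set by channel geometry and flow rate rather than by thermal ramping; a minimal sketch of that residence-time arithmetic follows, with channel dimensions and flow rate chosen as placeholder assumptions rather than the device's actual design.

    ```python
    # Hypothetical continuous-flow PCR timing: each "cycle" is one pass of the
    # fluid through denaturation, annealing, and extension zones. All geometry
    # and flow values below are placeholder assumptions, not the published device.

    def residence_time_s(zone_length_mm, channel_area_mm2, flow_rate_ul_min):
        """Time the fluid spends in one temperature zone, in seconds."""
        zone_volume_ul = zone_length_mm * channel_area_mm2  # 1 mm^3 == 1 uL
        return zone_volume_ul / flow_rate_ul_min * 60.0

    zones_mm = {"denature": 10.0, "anneal": 15.0, "extend": 25.0}
    area_mm2 = 0.5 * 0.5          # assumed 0.5 mm x 0.5 mm square channel
    flow_ul_min = 5.0             # assumed syringe-pump flow rate

    cycle_s = sum(residence_time_s(L, area_mm2, flow_ul_min) for L in zones_mm.values())
    n_cycles = 35
    print(f"one cycle: {cycle_s:.1f} s, {n_cycles} cycles: {cycle_s * n_cycles / 60:.1f} min")
    ```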
  • Metagenomic Analysis using the MinION Nanopore Sequencer
    Ken McGrath, Australian Genome Research Facility
    Metagenomic DNA from microbial communities can be recovered from virtually anywhere on our planet, yet determining the composition of those communities remains a challenge, in part due to the bioinformatic complexity of assembling sequencing reads from a cocktail of similar genomes. We compare the Oxford Nanopore MinION platform to other sequencing platforms and demonstrate that the longer reads from the MinION can compensate for its higher error rates, resulting in comparable metagenomic profiles. The reduced sample processing time and the enhanced portability of the MinION make this platform a useful tool for studying microbial communities, particularly those in remote or extreme environments.

Imaging

  • Imaging Drosophila Brain Activity
    Jing Wang, UC San Diego
    Understanding how the action of neurons, synapses, circuits and the interaction between different brain regions underlie brain function and dysfunction is a core challenge for neuroscience. Remarkable advances in brain imaging technologies, such as two-photon microscopy and genetically encoded activity sensors, have opened new avenues for linking neural activity to behavior. However, animals from insects to mammals exhibit ever-changing patterns of brain activity in response to the same stimulation, depending on the state of the brain and the body. The action of neuromodulators – biogenic amines, neuropeptides, and hormones – mediates rapid and slow state shifts over a timescale of seconds to minutes or even hours. Thus, it is imperative to develop a non-invasive imaging system that preserves the intact neuromodulatory system. The fruit fly Drosophila melanogaster is an attractive model organism for studying the neuronal basis of behavior. However, most imaging studies in Drosophila require surgical removal of the cuticle to create a window for light penetration, which disrupts the intricate neuromodulatory system. Unfortunately, the infrared laser at the wavelengths used for two-photon excitation of currently available activity probes is absorbed by the pigmented cuticle. Here we demonstrate the ability to monitor neural activity in flies with intact cuticle at cellular and subcellular resolution. The use of three-photon excitation overcomes the heating problem associated with two-photon excitation.
  • IsoView: High-speed, Live Imaging of Large Biological Specimens with Isotropic Spatial Resolution
    Raghav Chhetri, HHMI Janelia Research Campus
    To image fast cellular dynamics at an isotropic, sub-cellular resolution across a large specimen, with high imaging speeds and minimal photo-damage, we recently developed isotropic multiview (IsoView) light-sheet microscopy. IsoView microscopy images large specimens at a high spatio-temporal resolution via simultaneous light-sheet illumination and fluorescence detection along four orthogonal directions. The four views in combination yield a system resolution of 450 nm in all three dimensions after high-throughput multiview deconvolution. IsoView enables longitudinal in vivo imaging of fast dynamic processes, such as cell movements in an entire developing embryo and neuronal activity throughout an entire brain or nervous system. Using IsoView microscopy, we performed whole-animal functional imaging of Drosophila embryos and larvae at a spatial resolution of 1.1-2.5 microns and at a temporal resolution of 2 Hz for up to 9 hours. We also performed whole-brain functional imaging in larval zebrafish and multicolor imaging of fast cellular dynamics across entire, gastrulating Drosophila embryos with isotropic, sub-cellular resolution. Compared with conventional light-sheet microscopy, IsoView microscopy improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold. Additionally, IsoView microscopy effectively doubles the penetration depth and provides sub-second temporal resolution for specimens 400-fold larger than could previously be imaged with other high-resolution light-sheet techniques.
  • Recent advances in super-resolution
    Sara Abrahamsson, The Rockefeller University
    Super-resolution microscopy is a new and dynamic field that is currently finding useful application in biological research. Advanced instruments and techniques are increasingly accessible to biological researchers at imaging facilities that are open to external users. At the same time, the super-resolution field itself is rapidly evolving. New methods - and groundbreaking new twists to existing methods - are constantly being developed. Super-resolution methodologies that improve resolution in all three spatial dimensions (3D) - such as 3D Structured Illumination Microscopy (3D SIM), 3D Stimulated Emission Depletion (3D STED) and interferometric Photoactivated Localization Microscopy (iPALM) - are particularly interesting, since structures and features in a biological specimen are most often distributed not simply next to each other but also above and below. Other crucial parameters in applied biomicroscopy are contrast and optical sectioning capability, which can often be more difficult to tackle than insufficient resolution. Finally, in the emerging field of live-cell super-resolution imaging, the major limiting factors are light dose and acquisition rate. Acquisition rate becomes especially challenging in 3D imaging, where this information is obtained by scanning. In my work I am currently addressing this issue in an imaging system that combines 3D SIM with multifocus microscopy (MFM) technology to provide 3D super-resolution live-cell imaging of dynamic biological processes.
  • Optogenetic probes to reveal and control cell signalling
    Robert E. Campbell, University Of Alberta
    Biomolecular engineering of improved fluorescent proteins (FPs) and innovative FP-based probes has been a major driving force behind advances in cell biology and neuroscience for the past two decades. Among these tools, FP-based reporters (i.e., FP-containing proteins that change their fluorescence intensity or color in response to a biochemical change) have uniquely revolutionized the ability of biologists to ‘see’ the otherwise invisible world of intracellular biology and neural signalling. In this seminar I will describe our most recent efforts to use protein engineering to make a new generation of versatile FP-based tools optimized for in vivo imaging of neural activity. Specifically, I will present our efforts to convert red and near-infrared FPs into reporters for calcium ion, membrane potential, and neurotransmitters. In addition, I will briefly describe our most recent efforts to exploit FPs for optogenetic control of protein activity and gene expression.
  • Case Studies in Modern Bioimage Analysis: 3D, Machine Learning, and Super Resolution
    Hunter Elliott, Harvard Medical School
    Recent advances in 3D imaging and the increasing popularity of 3D model systems have resulted in a proliferation of higher-dimensional microscopy data. Developments in 2D and 3D super-resolution have produced data at unprecedented length scales. Simultaneously, machine learning has become increasingly dominant within the computer vision field. The confluence of these recent trends has made it an exciting time to be a bioimage analyst. We will present several short vignettes highlighting these trends: 3D analysis ranging from millimeter-scale tissue samples to nanoscale subcellular structures; supervised deep learning for cellular classification from phase-contrast images; and, finally, unsupervised machine learning for STORM super-resolution data analysis and phenotypic profiling.
  • Emerging Life Science Applications of FTIR Imaging: Providing Spatially Resolved Molecular Information for Disease Research
    Carolina Livi, Agilent Technologies
    The era of high-throughput omics technologies has provided vast data sets of information. Big data projects have now also moved into the single cell domain, and the importance of understanding the spatial differences in tissues and the cell-type-specific profiles is clear. Fourier Transform Infrared (FTIR) chemical imaging has been an important tool in many facets of analytical chemistry for almost 15 years and in recent times has found important uses in life science research. FTIR imaging uniquely provides a rapid 2D chemical snapshot of the sample, spatially resolved to a few microns across a large field of view; the molecular chemical image is acquired nondestructively and without the need for stains. With the appropriate calibration and machine learning algorithms, various tissue types and disease states can be determined in an automated fashion, relying purely on the inherent chemical contrast within the sample. This automation removes human subjectivity in classification and provides a more quantitative assessment. Applications on FFPE samples range from tissue segmentation to disease grading research and have been conducted on various disease tissues, such as prostate, breast, colon and lung cancers. Recent results demonstrate an automated, high-throughput assessment of prostate biopsy tissue using FTIR imaging and large Tissue Micro Array (TMA) samples. This method could accurately predict various important tissue structures, such as epithelium, smooth muscle, lymphocytes, blood, and necrotic regions. These results demonstrate that FTIR imaging has advanced to a point where it is practical and feasible to adopt as part of life science workflows, where it can improve sample quality assessment and provide a new tool with which to study various biological systems.
  • LMRG Study 3: 3D QC Samples for Evaluation of Confocal Microscopes
    Erika Wee, ABIF McGill
    Here we present the third study of the Association of Biomolecular Resource Facilities (ABRF) Light Microscopy Research Group (LMRG). The LMRG's goal is to promote scientific exchange between researchers, specifically those in core facilities, in order to increase our general knowledge and experience. We seek to provide a forum for multi-site experiments exploring “standards” for the field of light microscopy. The study is aimed at creating a 3D biologically relevant test slide and imaging protocol to test for 1) system resolution and distortions in 2D and 3D, 2) the dependence of intensity quantification and image signal-to-noise on imaging depth, and 3) the dependence of microscope sensitivity on imaging depth.
  • Linking single-molecule dynamics to local cell activity
    Catherine Galbraith, OHSU, OCSSB, KCI
    Advances in single molecule microscopy have made it possible to obtain high-resolution maps of the inside of cells. However, technical difficulties still limit the ability to obtain dense fields of single molecules in live cells. Even more challenging, the disparity in spatial and temporal scales makes it difficult to explicitly connect molecular behaviors to cellular behaviors. Here we present an integrated, multi-scale live-cell imaging and data analysis framework that explicitly links transient spatiotemporal modulation of receptor density and mobility to cellular activity. Using integrin adhesion receptors, we demonstrate that variations in molecular behavior can predict localized cell protrusion. Integrins are the key receptors mediating cell-matrix connections and play a critical role in mediating linkages to the cytoskeleton essential for cell migration. Our framework uncovered an integrin spatial gradient across the entire morphologically active region of the cell, with the highest density and slowest diffusion simultaneously occurring at the cell edge. Moreover, we discovered transient increases in density and reductions in speed that indicated the onset of local cell protrusion. Despite the inherent heterogeneity of stochastically sampled molecules, our approach is capable of linking molecular behavior to cell behavior using >90% of the molecules imaged. Through the use of receptor mutants we demonstrate that the distribution and mobility of receptors rely on unique binding domains. Moreover, our imaging and analysis framework can define unique molecular signatures dependent upon receptor conformational state. Thus, our studies demonstrate an explicit coupling between individual molecular dynamics and local cellular events, providing a paradigm for dissecting the molecular behaviors that underlie the cellular functions observed with conventional microscopes.
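    As an illustration of the kind of per-molecule mobility metric such a framework depends on (a generic approach, not the authors' exact analysis), a short sketch estimating a 2D diffusion coefficient from a single-molecule track via the mean squared displacement.

    ```python
    # Generic single-molecule mobility sketch (not the authors' pipeline):
    # estimate a 2D diffusion coefficient from a track of (x, y) positions via the
    # mean squared displacement, MSD(t) = 4 D t for free 2D diffusion.
    import numpy as np

    def diffusion_coefficient(track_um, dt_s, max_lag=4):
        """track_um: (N, 2) array of positions in microns; returns D in um^2/s."""
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean(np.sum((track_um[lag:] - track_um[:-lag]) ** 2, axis=1))
                        for lag in lags])
        # Least-squares fit of MSD vs time through the origin: slope = 4 D
        slope = np.sum(msd * lags * dt_s) / np.sum((lags * dt_s) ** 2)
        return slope / 4.0

    # Simulated example: Brownian track with D = 0.1 um^2/s sampled at 50 Hz
    rng = np.random.default_rng(0)
    dt, D_true, n = 0.02, 0.1, 500
    steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n, 2))
    track = np.cumsum(steps, axis=0)
    print(f"true D = {D_true}, estimated D = {diffusion_coefficient(track, dt):.3f} um^2/s")
    ```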

Proteomics

  • The iPRG-2016 Proteome Informatics Research Group Study: Inferring Proteoforms from Bottom-up Proteomics Data
    Magnus Palmblad, Leiden University
    In 2016, the ABRF Proteome Informatics Research Group (iPRG) conducted a study on proteoform inference from bottom-up proteomics data. For this study, we acquired data from samples spiked with overlapping oligopeptides, so-called Protein Epitope Signature Tags, recombinantly expressed in E. coli into a background of E. coli proteins. This is a unique dataset in that we have ground truth on the proteoform composition of each sample, and each sample contains hundreds of different proteoforms. Tandem mass spectra were acquired on a Q-Exactive Orbitrap in data-dependent acquisition mode and made available in raw and mzML formats. Participants were asked to use a method of their choosing and report the false-discovery rate or posterior error probabilities for each proteoform in a list provided in FASTA format. In general, most participants solved the task well, but some differences were observed, suggesting possible improvements and refinements. This time we also decided to show different ways to improve and add value to such studies. We therefore created a dedicated website on a virtual private server to contain all aspects of the study, including the submission interface. We had learned from previous studies that it is difficult for some participants to submit their results in the desired format, no matter how carefully specified, which has generated substantial additional work during the evaluation phase of previous iPRG studies. In the 2016 study, we therefore built a submission parser/validator that checked whether a submission was provided in the correct format before accepting it. The validator also provided feedback to the submitter when the uploaded results were not in the correct format, which also helped in the evaluation and comparison of submissions. Another novelty in the 2016 study was the acceptance of submissions in R Markdown or IPython notebook formats, containing both an explanation of the method and an executable script to rerun and compare submissions. These methods will be anonymized and made available for reuse by researchers conducting this type of analysis. The study will therefore live on, and as new methods and software become available, these can be benchmarked against the best solutions at the time of the study. It is also possible to combine elements from several submitted methods. All code running on the VPS behind the iPRG 2016 website, including the submission validator, will be available to other ABRF Research Groups. In the presentation, we will also discuss lessons learned from the novel technical aspects of this iPRG study, including the use of R Markdown and IPython notebooks.
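    A toy sketch of the kind of check such a submission validator might perform; the expected tab-separated layout (proteoform accession plus a score in [0, 1]) is an assumption for illustration and does not reproduce the actual iPRG 2016 template.

    ```python
    # Toy submission validator in the spirit of the one described above; the
    # expected tab-separated layout (proteoform<TAB>score) is an assumption,
    # not the actual iPRG 2016 template.
    import csv

    def validate_submission(path, allowed_proteoforms):
        errors, seen = [], set()
        with open(path, newline="") as handle:
            for lineno, row in enumerate(csv.reader(handle, delimiter="\t"), start=1):
                if len(row) != 2:
                    errors.append(f"line {lineno}: expected 2 tab-separated fields, got {len(row)}")
                    continue
                proteoform, score = row
                if proteoform not in allowed_proteoforms:
                    errors.append(f"line {lineno}: unknown proteoform '{proteoform}'")
                if proteoform in seen:
                    errors.append(f"line {lineno}: duplicate entry for '{proteoform}'")
                seen.add(proteoform)
                try:
                    value = float(score)
                    if not 0.0 <= value <= 1.0:
                        errors.append(f"line {lineno}: score {value} outside [0, 1]")
                except ValueError:
                    errors.append(f"line {lineno}: score '{score}' is not a number")
        return errors

    # Usage (paths and identifiers are placeholders):
    # problems = validate_submission("submission.tsv", allowed_proteoforms={"PEST0001", "PEST0002"})
    # print("\n".join(problems) or "submission OK")
    ```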
  • An ‘Omics Renaissance or Stuck in the Dark Ages? Monitoring and Improving Data Quality in Clinical Proteomics and Metabolomics Studies
    J. Will Thompson, Duke Proteomics And Metabolomics Shared Resource
    Analysis of proteins and metabolites by mass spectrometry is currently enjoying a renaissance in many ways. Identification of unknown proteins, and even localization of post-translational modifications with high precision and on a grand scale, is routine. It is possible to qualitatively and quantitatively profile thousands of protein groups, or metabolite features, in a single sample with a single analysis. Multiplexed targeted LC-MS/MS can quantify hundreds of analytes in only a few minutes. Exciting new sample preparation, mass spectrometry, and data analysis techniques are emerging every day. However, significant challenges still exist for our field. Compared to genomic approaches, coverage is still limited. Compared to the longitudinal precision and accuracy required in the clinical diagnostic use of mass spectrometry, ‘omics techniques lag far behind. And in terms of the throughput required for addressing the Precision Medicine Initiative, most proteomics and metabolomics analyses take far too long and are far too expensive. This presentation will focus on practical techniques implemented in our laboratory and others to try and address some of these shortcomings, including efforts to improve the throughput of unbiased proteomics profiling, track precision and bias in proteomic and metabolomic experiments using the Study Pool QC, and the use of reference pools for longitudinal performance monitoring in targeted quantitative proteomics and metabolomics studies.
  • Chemical Proteomics Reveals the Target Space of Hundreds of Clinical Kinase Inhibitors
    Bernhard Kuster, Technical University Of Munich
    Kinase inhibitors have developed into important cancer drugs because de-regulated protein kinases often drive the disease. Efforts in biotech and pharma have resulted in more than 30 such molecules being approved for use in humans, and several hundred are undergoing clinical trials. As most kinase inhibitors target the ATP binding pocket, selectivity among the roughly 500 human kinases is a recurring question. Polypharmacology can be beneficial as well as detrimental in clinical practice; hence, knowing the full target profile of a drug is important but rarely available. We have used a chemical proteomics approach termed kinobeads to profile 240 clinical kinase inhibitors in a dose-dependent fashion against a total of 320 protein kinases and some 2,000 other kinobead-binding proteins. In this presentation, I will outline how this information can be used to identify molecular targets of toxicity, to re-purpose existing drugs or combinations for new indications, or to provide starting points for new drug discovery campaigns.
  • Patching Holes in Your Bottom-up Label and Label-free Quantitative Proteomic Workflows
    Tony Herren, University Of California, Davis
    Reliable quantitation of label and label-free mass spectrometry (MS) data remains a significant challenge despite much progress in the field on both the hardware and software fronts. In particular, proper quantitation and control within the workflow prior to mass spectrometry data dependent acquisition (DDA) is of crucial and often overlooked importance. Here, we will explore several of these “pre-mass spec” topics in greater detail, including quality control and optimization of sample preparation and liquid chromatography. Specifically, the importance of accurate quantitation of protein and peptide inputs and outputs during sample processing steps (extraction, digestion, C18 cleanup, enrichment/depletion) for both label and label-free workflows will be addressed. Protein recovery data from our lab suggest that significant losses occur throughout the sample preparation process and that recoveries must be empirically determined at each step. Missed cleavages during enzymatic digestion, poor labeling efficiency, and the addition of enrichment and/or depletion steps to sample workflows are all confounding factors for label and label-free quantitation and will also be discussed. Additionally, the influence of liquid chromatography performance on DDA quantitation across multiple sample runs in label and label-free workflows will be examined, including the effects of retention time drift, ambient environmental conditions, gradient length, peak capacity, and instrument duty cycle. These issues will be discussed in the context of a typical core facility bottom-up proteomics workflow, with practical tools and strategies for addressing them. Finally, a comparison of quantitative sensitivity will be made between sample data acquired using label-free MS and tandem mass tag (TMT 10-plex) data acquired using different MS parameters on a Thermo Fusion Lumos.
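    To make the point about empirically tracking losses concrete, a small bookkeeping sketch of step-wise and cumulative recovery; the step names and microgram amounts are invented example numbers.

    ```python
    # Illustrative bookkeeping of protein/peptide recovery across prep steps;
    # the steps and microgram amounts below are invented example numbers.

    steps = [  # (step name, input ug, measured output ug)
        ("extraction",  100.0, 82.0),
        ("digestion",    82.0, 74.0),
        ("C18 cleanup",  74.0, 58.0),
        ("enrichment",   58.0, 21.0),
    ]

    cumulative = 1.0
    for name, amount_in, amount_out in steps:
        step_recovery = amount_out / amount_in
        cumulative *= step_recovery
        print(f"{name:12s} {step_recovery:6.1%} step recovery, {cumulative:6.1%} cumulative")
    ```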
  • Chemical Isotope Labeling LC-MS for Routine Quantitative Metabolomic Profiling with High Coverage
    Liang Li, University Of Alberta
    A key step in metabolomics is to perform relative quantification of metabolomic changes among different samples. High-coverage metabolomic profiling will benefit metabolomics research in systems biology and disease biomarker discovery. To increase coverage, multiple analytical tools are often used to generate a combined metabolomic data set. The objective of our research is to develop and apply an analytical platform for in-depth metabolomic profiling based on chemical isotope labeling (CIL) LC-MS. It uses differential isotope mass tags to label a metabolite in two comparative samples (e.g., 12C-labeling of an individual sample and 13C-labeling of a pooled sample or control), followed by mixing the light-labeled sample and the heavy-labeled control and LC-MS analysis of the resultant mixture. Individual metabolites are detected as peak pairs in MS. The MS or chromatographic intensity ratio of a peak pair can be used to measure the relative concentration of the same metabolite in the sample vs. the control. For a metabolomics study involving the analysis of many different samples, the same heavy-labeled control is spiked to all the light-labeled individual samples. Thus, the intensity ratios of a given peak pair from LC-MS analyses of all the light-/heavy-labeled mixtures reflect the relative concentration changes of a metabolite in these samples. CIL LC-MS can overcome the technical problems such as matrix effects, ion suppression and instrument drifts to generate more precise and accurate quantitative results, compared to conventional LC-MS. CIL LC-MS can also significantly increase the detectability of metabolites by rationally designing the labeling reagents to target a group of metabolites (e.g., all amines) to improve both LC separation and MS sensitivity. In this presentation, recent advances in CIL LC-MS for quantitative metabolomics will be described and some recent applications of the technique for disease biomarker discovery research as well as biological studies will be discussed.
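    A minimal sketch of the peak-pair logic described above: locate light/heavy partner peaks separated by the labeling mass difference and take their intensity ratio. The per-tag mass shift and tolerances are placeholder assumptions, since they depend on the specific labeling chemistry.

    ```python
    # Hypothetical light/heavy peak-pair finder for chemical isotope labeling data.
    # DELTA_PER_TAG and the ppm tolerance are placeholder assumptions that depend
    # on the actual labeling reagent; peaks are (m/z, intensity) tuples.

    DELTA_PER_TAG = 2.00671  # assumed mass shift per heavy tag (e.g. two 13C), Da

    def find_peak_pairs(peaks, n_tags=1, charge=1, ppm_tol=10.0):
        pairs = []
        expected_shift = n_tags * DELTA_PER_TAG / charge
        for mz_light, int_light in peaks:
            target = mz_light + expected_shift
            tol = target * ppm_tol * 1e-6
            for mz_heavy, int_heavy in peaks:
                if abs(mz_heavy - target) <= tol and int_heavy > 0:
                    pairs.append((mz_light, mz_heavy, int_light / int_heavy))
        return pairs

    peaks = [(230.1134, 5.2e6), (232.1201, 4.9e6), (310.2011, 1.1e6)]
    for mz_l, mz_h, ratio in find_peak_pairs(peaks):
        print(f"pair {mz_l:.4f}/{mz_h:.4f}  light/heavy ratio = {ratio:.2f}")
    ```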
  • Workflow Interest Network Research Project Presentation
    Emily Chen, Columbia University Medical Center
    A new research group, the Workflow Interest Network (WIN), was established in 2016. Our current focuses are 1) to collaborate with other ABRF members and mass spectrometry-based research groups to identify key factors that contribute to poor reproducibility and inter-laboratory variability, and 2) to propose benchmarks for MS-based proteomics analysis as well as quality control procedures to improve reproducibility. In 2016, we launched a test study to examine LC-MS/MS performance among 10 MS-based core laboratories using two sets of peptide standards and complex lysates. Preliminary analysis of the test study will be presented in the concurrent workshop. We are highly encouraged by the results of our test study and believe it should be possible to promote scientific reproducibility by comparing different analytic platforms and providing benchmarks for instrument performance based on the observed capabilities across a large number of datasets. Bioinformatics tools that allow rapid analysis and evaluation of LC and MS performance will also be discussed. Finally, we will announce an expansion of our test study to the broader community, inviting laboratories to participate and contribute to this benchmarking study. The study is designed to require only a very reasonable time commitment, and participating laboratories will gain essential information on their instruments’ performance in the course of helping to build a valuable benchmarking and QC resource.
  • Characterization of the myometrial proteome in disparate states of pregnancy using SPS MS3 workflows.
    David Quilici, University Of Nevada Reno
    Reliable quantitative analysis of global protein expression changes is integral to understanding mechanisms of disease. Global expression of myometrial proteins involved in premature induction of labor, compared with normal induction of labor, was analyzed by isobaric labeling with a TMT 10-plex using MultiNotch MS3. In this study we examined technical and biological reproducibility in addition to comparing pre-term labor and term labor myometrial tissue samples. We found very low levels of variation in the technical (<0.01%) and biological (0.05%) replicates. Within the comparative study we identified over 4,000 protein groups with high confidence (FDR < 0.05), and ~400 of these showed a significant change between the two groups. Affected pathways were then identified using Ingenuity Pathway Analysis (IPA) software. Further analysis was performed using the targeted TMT approach known as TOMAHAQ (Triggered by Offset, Multiplexed, Accurate-Mass, High-Resolution, and Absolute Quantification) on 38 peptides and phosphopeptides corresponding to proteins within the identified pathways affected by premature induction, in an effort to determine the role of phospho-signalling.
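    As a generic illustration of the kind of filter behind "significant change between the two groups" (not necessarily the exact statistics used in this study), a short sketch applying a per-protein Welch t-test with Benjamini-Hochberg FDR control to log2 reporter intensities.

    ```python
    # Generic significance filter for TMT-style quantitative proteomics
    # (illustrative only; not necessarily the statistics used in this study):
    # Welch's t-test per protein followed by Benjamini-Hochberg FDR control.
    import numpy as np
    from scipy import stats

    def significant_proteins(group_a, group_b, fdr=0.05, min_fold_change=1.5):
        """group_a/group_b: dicts mapping protein -> array of log2 reporter intensities."""
        proteins = sorted(group_a)
        pvals = np.array([stats.ttest_ind(group_a[p], group_b[p], equal_var=False).pvalue
                          for p in proteins])
        order = np.argsort(pvals)
        m = len(pvals)
        raw = pvals[order] * m / np.arange(1, m + 1)
        adjusted = np.clip(np.minimum.accumulate(raw[::-1])[::-1], 0, 1)  # BH step-up
        adj_by_protein = {proteins[i]: adjusted[rank] for rank, i in enumerate(order)}
        hits = []
        for p in proteins:
            log2fc = float(np.mean(group_a[p]) - np.mean(group_b[p]))
            if adj_by_protein[p] < fdr and abs(log2fc) >= np.log2(min_fold_change):
                hits.append((p, log2fc, float(adj_by_protein[p])))
        return hits

    # Invented example data (5 channels per group):
    rng = np.random.default_rng(1)
    preterm = {"PROT_A": rng.normal(1.0, 0.1, 5), "PROT_B": rng.normal(0.0, 0.1, 5)}
    term    = {"PROT_A": rng.normal(0.0, 0.1, 5), "PROT_B": rng.normal(0.0, 0.1, 5)}
    print(significant_proteins(preterm, term))
    ```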
  • MS1-based quantification of low abundance proteins that were not identified using a MS/MS database search approach
    Yan Wang, University Of Maryland
    With developments in instrumentation and informatics tools, it is becoming routine to identify more than 2000 proteins in whole cell lysates via a shotgun proteomics approach. A remaining challenge is to assess changes in abundance of proteins that are at the limit of detection. In the data-dependent acquisition (DDA) approach the same peptide is often identified in one sample but missed in another. Presumably the missing data are due to the precursor not being isolated for fragmentation. In such cases, relative quantification could be determined from the MS1 peak intensity after using features such as accurate mass and retention time to identify the correct MS1 signal. In this study, we evaluated four different bioinformatic approaches for their ability to perform this analysis. In the PRG 2016 study, 4 non-mammalian proteins were spiked into 25 µg of whole HeLa cell lysate at 4 different levels: 0, 20, 100, and 500 fmol. Six non-fractionated datasets, each encompassing 4 separate analytical runs and including analyses on Orbitrap Fusion, Q Exactive, and Orbitrap Velos instruments, were selected for further study. In each set at least 1 peptide from each of the 4 spiked-in proteins was identified in at least one sample. Peptides from spiked-in proteins were quantified after using retention time and accurate mass information for identification. Programs used were: PEAKS (Bioinformatics Solutions, Inc.); Progenesis (Waters Corp.); MaxQuant (Max Planck Institute of Biochemistry); and Skyline (University of Washington). All evaluated software packages extracted quantitative information from MS1 spectra that did not yield peptide spectrum matches in samples with low concentrations of spike-in proteins. False quantification of peptides in the zero spike-in sample was observed; this is attributed to carry-over between runs and mis-assignment of noise in the signal.
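    A minimal sketch of the matching step common to all four tools: assigning an identified peptide to an MS1 feature by accurate mass and retention time. The tolerances and record layout are assumptions for illustration, not any specific program's algorithm.

    ```python
    # Illustrative match of an identified peptide to an MS1 feature by accurate
    # mass and retention time; tolerances and the feature layout are assumptions,
    # not how PEAKS, Progenesis, MaxQuant or Skyline implement it.

    def match_feature(peptide_mz, peptide_rt_min, features, ppm_tol=5.0, rt_tol_min=1.0):
        """features: list of dicts with 'mz', 'rt', 'intensity'. Returns best match or None."""
        best, best_err = None, None
        for feat in features:
            ppm_err = abs(feat["mz"] - peptide_mz) / peptide_mz * 1e6
            if ppm_err <= ppm_tol and abs(feat["rt"] - peptide_rt_min) <= rt_tol_min:
                if best is None or ppm_err < best_err:
                    best, best_err = feat, ppm_err
        return best

    features = [
        {"mz": 523.7752, "rt": 42.1, "intensity": 3.1e5},
        {"mz": 523.7781, "rt": 55.0, "intensity": 8.0e4},
    ]
    hit = match_feature(peptide_mz=523.7760, peptide_rt_min=42.3, features=features)
    print(hit["intensity"] if hit else "no MS1 feature within tolerance")
    ```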
  • sPRG-ABRF 2016-2017: Development and Characterization of a Stable-Isotope Labeled Phosphopeptide Standard
    Antonius Koller, Columbia University
    The mission of the ABRF proteomics Standards Research Group (sPRG) is to design and develop standards and resources for mass-spectrometry-based proteomics experiments. Recent advances in methodology have made phosphopeptide analysis a tractable problem for core facilities. Here we report on the development of a two-year sPRG study designed to target various issues encountered in phosphopeptide experiments. We have constructed a pool of heavy-labeled phosphopeptides that will enable core facilities to rapidly develop assays. Our pool contains over 150 phosphopeptides that have previously been observed in mass spectrometry data sets. The specific peptides have been chosen to cover as many known biologically interesting phosphosites as possible, from seven different signaling pathways: AMPK signaling, death and apoptosis signaling, ErbB signaling, insulin/IGF-1 signaling, mTOR signaling, PI3K/AKT signaling, and stress (p38/SAPK/JNK) signaling. We feel this pool will enable researchers to test the effectiveness of their enrichment workflows and to provide a benchmark for a cross-lab study. Currently, the standard is being tested in the sPRG members' laboratories to establish its properties. Later this year we will invite ABRF members and non-members to participate in the second half of our study, using this controlled standard in a HeLa S3 background to evaluate their phosphoproteomic data acquisition and analysis workflows. We hope this standard is helpful in a number of ways, including enabling phosphopeptide sample workflow development, as an internal enrichment and chromatography calibrant, and as a pre-built biological assay for a wide variety of signaling pathways.

Trending Topics

  • Highly multiplexed simultaneous measurement of cell-surface proteins and the transcriptome in single cells.
    Marlon Stoeckius, New York Genome Center
    Large-scale, unbiased identification of distinct cell types in complex cell mixtures has been enabled by recent advances in high-throughput single-cell transcriptomics. However, these methods are unable to provide additional phenotypic information, such as the protein levels of well-established cell surface markers. Current approaches to simultaneously detect and/or measure transcripts and proteins in single cells are based on 1) indexed cell sorting in combination with RNA sequencing or 2) proximity ligation/extension assays in combination with digital PCR. These assays are limited in scale and/or can only profile a few genes and proteins in parallel. To overcome these limitations, we have devised a method, Cellular Indexing of Transcriptome and Epitopes by sequencing (CITE-seq), that combines unbiased genome-wide expression profiling with the measurement of specific protein markers in thousands of single cells using droplet microfluidics. We conjugate monoclonal antibodies to oligonucleotides containing unique antibody identifier sequences. We then label a cell suspension with DNA-barcoded antibodies, and single cells are subsequently encapsulated into nanoliter-sized aqueous droplets in a microfluidic apparatus. In each droplet, antibody and cDNA molecules are indexed with the same unique barcode and are converted into libraries that are amplified independently and mixed in appropriate proportions for sequencing in the same lane. In proof-of-principle experiments using a suspension of mixed human and mouse cells and established high-throughput single cell sequencing protocols, we unambiguously identify human and mouse cells based on their species-specific cell surface proteins and independently on their transcriptomes. We then use CITE-seq to classify cells in the immune system, which has been extensively characterized at the level of cell surface marker expression. We show that we are able to achieve better resolution of cell types by adding an extra dimension to the data. CITE-seq allows in-depth characterization of single cells by simultaneous measurement of gene expression and cell-surface proteins; it is highly scalable and limited only by the number of specific antibodies available.
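    A toy sketch of the core counting step: collapsing antibody-derived reads into per-cell, per-antibody UMI counts. The read structure, barcode whitelists, and antibody tags below are illustrative assumptions, not the protocol's actual sequences.

    ```python
    # Toy ADT (antibody-derived tag) counting for CITE-seq-style data. The read
    # layout (cell barcode, UMI, antibody barcode) and whitelists below are
    # illustrative assumptions, not the published protocol's exact structure.
    from collections import defaultdict

    ANTIBODY_BARCODES = {"ACGTACGT": "CD4", "TTGGCCAA": "CD8a"}  # hypothetical tags

    def count_adts(reads, cell_whitelist):
        """reads: iterable of (cell_barcode, umi, antibody_barcode) tuples."""
        umis_seen = defaultdict(set)            # (cell, antibody) -> set of UMIs
        for cell_bc, umi, ab_bc in reads:
            if cell_bc in cell_whitelist and ab_bc in ANTIBODY_BARCODES:
                umis_seen[(cell_bc, ANTIBODY_BARCODES[ab_bc])].add(umi)
        return {key: len(umis) for key, umis in umis_seen.items()}  # deduplicated counts

    reads = [
        ("AAACCTGA", "GTCA", "ACGTACGT"),
        ("AAACCTGA", "GTCA", "ACGTACGT"),   # PCR duplicate of the same UMI
        ("AAACCTGA", "TTAC", "TTGGCCAA"),
    ]
    print(count_adts(reads, cell_whitelist={"AAACCTGA"}))
    ```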
  • Channel-free dead cell exclusion in FACS/flow cytometry (or "n+1 into n" does go)
    Roy Edward, BioStatus Limited
    Dead cell exclusion is a common requirement for flow cytometry on fresh, unfixed samples. In multi-colour phenotyping this occupies one or more channels, reducing dimensionality and the options for antibody-chromophore pairings. This limitation is obvious, but it is further exacerbated if dead cell exclusion is identified or demanded retrospectively; the panel may then need to be re-formatted into two tubes (linking antigens increases cost) or transferred to a higher-dimensionality platform. Importantly, additional demands are then placed on core facility capacity and throughput. Meanwhile, dead cell exclusion with DAPI occupies channels valuable for the new violet-excited chromophores, and the alternative, propidium iodide, occludes R-PE, a bright and widely conjugated chromophore. A novel yet simple remedy is achieved using DRAQ7, an anthraquinone-based, far-red-fluorescing viability probe validated in flow cytometry and fluorescence microscopy. Its absorbance spectrum, and experience in practice, show that DRAQ7 can be detected using excitation wavelengths from blue to red lasers, and potentially by two lasers in one analysis. This property locates DRAQ7+ events in a distinct region of bivariate plots for the available far-red/NIR channels. A “live” gate is then set on the preferred bivariate plot for that experimental set-up, excluding dead cells in all channels. In one example, a high-throughput flow cytometry platform limited to 4 channels was hampered by the use of propidium iodide: antigen phenotyping was reduced to 3 channels, there was bleed-through into other channels, and the number of phenotypes that could be elicited from the antigen analysis fell from 16 to 8. Substituting DRAQ7 re-enabled use of all 4 channels for antigen expression analysis. It is noteworthy that no compensation is needed and that this simple method can be applied retrospectively (e.g., at reviewers' request) without disturbing antibody panel design. Because of DRAQ7's previously demonstrated ultra-low toxicity, this method can also be used in cell sorting. (A minimal gating sketch appears after this list.)
  • iABRF Roundtable Discussion: Advancing Core Technology Sciences and Communication through Establishment of a Worldwide Core Federation
    Timothy Hunter, University Of Vermont
    Core facility laboratories are critical to the research mission of the institutes and communities they serve. Many core-focused associations, societies, and workshops have been established worldwide with similar goals of advancing the core sciences. ABRF has an interest in identifying ways that it could best encourage the establishment and growth of core facility organizations and meetings around the world, and in how it could best interact and coordinate with such organizations and meetings. Toward this end, the International ABRF Committee supports working collaboratively with existing and emerging worldwide groups toward the establishment of a worldwide federation of core facility interest groups, in which the core facility interest groups from different countries are all treated and represented as equals. The mission of this federation would be to ensure international communication and collaboration around core facility matters. The benefits to the ABRF would be (1) increased participation in efforts to advance core science and administration worldwide, and (2) coordination and collaboration with core facility interest groups around the world. This session will have representation from numerous other core interest groups to help ascertain common goals, their mandates, and how we might form and interact under such a worldwide federation. The session will encourage attendee interaction and will be presented in a roundtable format.
  • The Road NOT Taken – A MoFlo Astrios Sort Logic Path to Discovery
    Lora Barsky, Beckman Coulter
    It is widely known that cell sorters provide rapid quantification and cell purification by incorporating multi-parametric approaches using fluorescent dyes and labels. Contemporary researchers incorporate transgenic fluorescent proteins for identification and use cell sorting as a pass-through technology to capture cells for downstream studies. We would all agree that knowing how to identify the target and learning how the cell sorter prioritizes cell capture are critical to a researcher's success, yet empirical testing of how sort logic impacts outcome is rarely a specific aim. As such, many investigators continue with the status quo at the flow core, using the same sorter and the same sort conditions; the stress of getting to a triplicate is far too great to spur exploration. In this presentation, I will share a story of how an investigator improved his RNAseq data by utilizing MoFlo Astrios sort modes and gate logic.
  • 3D Printing for the Core
    C. Jeff Morgan, University Of Georgia
    Three-dimensional (3D) printing is a disruptive technology that puts the prototyping and manufacturing process directly into the hands of the inventor. Rapid prototyping and additive manufacturing promise to reshape the laboratory and shift the learning paradigm in the classroom. We will explore the basic concepts of 3D printing, rapid prototyping, and 3D replication as core assets. We will also discuss specialized materials, where to find resources, and innovations in the field of 3D printing. This workshop will be a visual presentation with a step-by-step walkthrough of the 3D printing process and a demonstration.
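
As a companion to the CITE-seq abstract above, the following is a minimal sketch of the species-mixing classification step: each droplet is called human, mouse, or a likely doublet from the fraction of UMIs assigned to each genome. The column names, toy counts, and the 0.9 purity threshold are illustrative assumptions, not part of the published CITE-seq workflow.

```python
# Minimal sketch (assumed input layout): one row per cell barcode with UMI counts
# assigned to the human and mouse references from a species-mixing experiment.
import pandas as pd

def classify_species(umi_counts: pd.DataFrame, purity: float = 0.9) -> pd.Series:
    """Label each barcode 'human', 'mouse', or 'mixed/doublet' by species purity."""
    total = umi_counts["human_umis"] + umi_counts["mouse_umis"]
    human_frac = umi_counts["human_umis"] / total
    labels = pd.Series("mixed/doublet", index=umi_counts.index)
    labels[human_frac >= purity] = "human"
    labels[human_frac <= 1 - purity] = "mouse"
    return labels

# Toy usage: three droplets, the last one a likely human/mouse doublet
df = pd.DataFrame(
    {"human_umis": [5200, 40, 2600], "mouse_umis": [35, 4100, 2400]},
    index=["AAACCTG", "AAACGGG", "AAAGATG"],
)
print(classify_species(df))
```

In practice the purity cutoff would be chosen from the observed, typically bimodal, distribution of species fractions rather than fixed in advance.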

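Relating to the DRAQ7 abstract above, this is a minimal sketch of a channel-free “live” gate: DRAQ7-positive (dead) events are excluded with a threshold on an existing far-red/NIR detector, leaving every phenotyping channel free. The event-matrix layout, channel index, and threshold value are illustrative assumptions rather than a vendor-specified procedure.

```python
# Minimal sketch: drop dead (DRAQ7-positive) events using a far-red/NIR detector
# that is already part of the panel, so no phenotyping channel is sacrificed.
import numpy as np

def live_gate(events: np.ndarray, draq7_channel: int, threshold: float) -> np.ndarray:
    """events: (n_events, n_channels) fluorescence intensities.
    Returns only the events below the DRAQ7 positivity threshold."""
    is_live = events[:, draq7_channel] < threshold
    return events[is_live]

# Toy usage: 1000 synthetic events across 4 detectors, DRAQ7 read on the last one
rng = np.random.default_rng(0)
events = rng.uniform(0, 10_000, size=(1000, 4))
live_events = live_gate(events, draq7_channel=3, threshold=2_000)
print(f"kept {live_events.shape[0]} of {events.shape[0]} events")
```
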
Core Administration

  • Core Operations Reporting: Generating Operational Tools for Specific Stakeholder Groups
    Jay Fox, University Of Virginia School Of Medicine
    There is a critical need to understand core operations to ensure successful and efficient delivery of value to stakeholders. Different stakeholders, such as staff, core directors, departmental chairs, and deans, have different perspectives on core operations and different interpretations of how those operations impact their individual domains. Most cores have access to ample data that, when harvested and presented appropriately, offer a clear assessment of core operations tailored to the specific needs and viewpoints of various stakeholder groups. At the University of Virginia Office of Research Core Operations (ORCA) we have compiled a number of reporting tools tailored to the needs of core staff, clients, and those with institutional oversight of core operations. In this presentation we will present examples of these reporting tools, describe how they are compiled, and explain how the information in the tools is of value to specific stakeholders.
  • Realizing Commercial Value from Product Opportunities that Arise from Academic Research: An Introduction to the SBIR/STTR Programs
    Ron Orlando, UGA And GlycoScientific
    An objective of basic scientific research is the acquisition of knowledge, typically without the obligation to apply it to practical ends. Often during this endeavor, scientists make a discovery with inherent commercial worth. The value of this work is difficult to ascertain since the technology may be unvalidated, may lack intellectual property protection, and has an unknown market potential. These and other vulnerabilities often cause the technology to be undervalued and make it too risky to attract funding from typical early-stage investment sources, particularly angel investors and venture capitalists. Hence, the potential value is often not realized. The US government has established two grant/contract programs that can be used to bridge this gap between academic concept and commercial product: the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, which provide over $3 billion in funding annually. These initiatives are intended to help small businesses conduct research and development and aim to increase private-sector commercialization of innovations derived from federal research expenditures. NIH also provides market insight, through the Niche Assessment Program, that can help small businesses strategically position their technology in the marketplace. These and other programs are put in place to invigorate small business growth. This presentation will discuss how to leverage these opportunities to commercialize academic research. Particular attention will be paid to how to obtain capital from these funding mechanisms, how the SBIR and STTR programs compare, and how to decide which program to utilize at each stage of company and idea development. The presenter will draw on his experience to provide examples of how SBIR/STTR funding can be used to create commercial hardware, software, and research reagents. This presentation will also describe a strategy for extracting value from product opportunities that arose from academic studies.
  • Rigor and Reproducibility
    Amy Wilkerson, Session Organizer, The Rockefeller University
    Independent verification of research results is essential to scientific progress and to public trust in the scientific method. Growing concern over increasing reports of a lack of transparency and reproducibility in biomedical research has led to increased scrutiny of scientific publications and a general call for the development of best practices for generating and reporting research findings. Specific examples of how the scientific community is responding include the NIH guidelines for addressing rigor and reproducibility in grant applications and progress reports, the consensus “Principles and Guidelines in Reporting Preclinical Research,” and efforts to further strengthen quality controls and data reporting and retention practices in core facilities. During this session, experts in sponsored program administration, scientific publishing, and core facility management will describe how each sector is addressing the call for increased rigor and reproducibility in biomedical science. In addition, each presenter will provide suggestions on how core facilities can support researchers in addressing new funding agency requirements and in enhancing rigor and reproducibility overall.
  • Informatics Core: Adding value to Data Produced in Core Facilities
    Claudia Neuhauser, University Of Minnesota
    In 2014, the University of Minnesota Informatics Institute (UMII) was founded to foster and accelerate data-intensive research across the University system in agriculture, arts, design, engineering, environment, health, humanities, and social sciences through informatics services, competitive grants, and consultation. From listening to researchers across the University and from design-thinking workshops, it became clear that data management and analysis are increasingly a major bottleneck in research labs. Core facilities in genomics, proteomics, and imaging produce large amounts of data that are usually delivered to users without much prior processing. With bioinformatics being a fast-moving field, it is increasingly difficult for labs to stay on top of the latest analysis tools. Furthermore, quality control of data and basic analysis need to become more standardized. To aid researchers in the management and analysis of their data, UMII hired analysts to run workflows and a data wrangler to help with data management. Over the past year, UMII became part of Research Computing, which includes the Minnesota Supercomputing Institute and U-Spatial. These three institutes provide a full range of services that transform raw data into usable products through standardized workflows and research collaborations.
  • Communicating Science to the Public: Break on through to the other side
    Hudson Freeze, FASEB And SBP Medical Discovery Institute
    That important topic and The Doors' rhythmic answer may not be so far-fetched. Practicing is not simple: it is time-consuming, and there is a foreign-language requirement. A few perspectives will be offered, but solutions will come from your own efforts outside the laboratory.
  • Bioinformatics of TMT Multiplex Data, UNR-Style
    Karen Schlauch, University Of Nevada Reno
    Recent advances in mass spectrometry using the Thermo Scientific Orbitrap Fusion Tribrid Mass Spectrometer allow for the identification and multiplexed quantification of many proteins across larger and larger sample sizes. Managing variation across sample protein abundances is not (yet) as standard as it is for microarray or RNA-seq platforms. The Nevada INBRE Bioinformatics Core at the University of Nevada, Reno offers several specific methods for analyzing TMT multiplex data run on the Orbitrap Fusion and extracted using Proteome Discoverer at our adjoining Nevada Proteomics Core. A pilot study run in our Cores measured the biological variability in protein abundance of 3000 identified proteins across ten human tissue samples; our Core uses these measures as baseline variability for quality control in human tissue experiments. To further manage intra-cohort variability of protein abundances, our Core developed a technique to identify outliers per protein and per cohort that controls variation without excluding data from entire proteins or samples. We show that our method notably decreases the coefficient of variation of abundance measures (by 2-fold or more) within cohorts and thus leads to more statistically powerful hypothesis tests across cohorts. Our Core performs statistically sound hypothesis tests on these processed data to identify differentially expressed proteins or peptides at either the protein level or the peptide level; tests are chosen based on the data distribution and the experimental hypotheses. For studies involving post-translational modifications, statistical tests are applied only to unique modified peptides of each post-translationally modified protein, to account for the fact that abundances of modified peptides make up only a portion of each modified protein's abundance (sometimes less than 10%). We show that our technique offers stronger statistical tests across cohorts for post-translational modifications. In conclusion, our Bioinformatics and Proteomics Cores offer novel and specific techniques for TMT data from one or many multiplex experiments. (A minimal outlier-masking sketch appears after this list.)
  • Combining Tools to Enhance Scientific Research
    Cheryl Kim, La Jolla Institute For Allergy And Immunology (LJI)
    Core facilities at the La Jolla Institute (LJI) provide scientists with powerful technologies necessary to understand more about the cells of the immune system and their function in a wide range of diseases. While each technique has its own advantages, combining these tools can provide comprehensive details about the immune system as a whole. In this brief talk, we will present several projects that involve multiple core facilities and techniques, including Sequencing, Microscopy and Cytometry, to enhance scientific research. In the first project, we will discuss the methods used to improve cell sorting of rare immune cell populations, such as antigen-specific T cells, and subsequent global gene expression analysis by RNASeq. In another project, we will discuss deeper profiling and characterization of immune cell subsets by combining imaging, cytometry, and transcriptomics.
  • Research Development & Core Facilities Infrastructure
    Karin Scarpinato, Florida Atlantic University
    Core facilities are laboratories in academic institutions that provide state-of-the-art instrumentation and world-class technical expertise whose costs are shared by researchers on a fee-for-service basis and/or supported by the institution. As such, core facilities give researchers access to services and instrumentation that would otherwise be too expensive to maintain in their own labs. To that end, core facilities extend the scope of research programs and accelerate scientific discovery. Historically, core facilities have not taken a key role in research development (RD). Over time, however, academic institutions and federal funding agencies have recognized the importance of shared resources and are re-evaluating best practices for operations and efficiency. In the systematic evaluation of core facility operations and infrastructure, many principles of research development are recognized, which suggests that RD offices can add value to the strategic development of individual core facilities and of the core facility infrastructure institutionally, regionally, and nationally. Principles that govern core facilities and that can be enhanced by RD include, but are not limited to, strategic planning, interdisciplinary team building, grant writing, seed funding programs, and limited-submissions management. This panel discussion will provide examples of how RD principles apply to the development of core facility infrastructure and of how RD offices can assist in improving the operations of such facilities. These examples will be followed by an open discussion and exchange of ideas with the audience. By supporting core facilities, RD offices expand their impact in promoting research beyond single-investigator research teams. In turn, these services enhance the office's outputs: new initiatives, collaborations, grant applications, and outreach.
  • Management for Scientists, Part 2: Core Business Problem Solving
    Robert Carnahan, Vanderbilt University
    How will Kurt handle his BIG core getting BIGGeR? That is the focus of this hands-on problem-solving workshop. Come prepared to work! This session will begin with an overview of problem-solving strategies for organizations. Attendees will then be divided into teams and be guided through a case study that incorporates the types of pressures and decisions that commonly face core facilities. The goal is to use this exercise to develop and refine systematic approaches that can be applied to nearly any problem.
  • A Balanced Approach to Return on Investment (ROI) for Research Core Facilities
    Justine Karungi, MBA, FACHE, Hoglund Brain Imaging Center, University Of Kansas Medical Center
    In these times of economic constraint and increasing research costs, shared resource cores have become a cost-effective and essential platform for researchers who seek to investigate complex translational research questions. Cores produce significant value that cannot be captured using traditional financial metrics. Benchmarking studies conducted by ABRF and other organizations indicate that most research cores do not fully recover operating expenses. As such, these “operational losses” represent institutional investment which, if well planned and managed, produces future returns for the institution's research community that extend far beyond subsidized pricing. Current literature indicates that no single measure can provide an accurate representation of the full picture of the returns on research investments. This presentation provides instruction and examples using the Balanced Scorecard (BSC; Kaplan and Norton) as a tool for assessing the return on investment (ROI) of research core facilities. The BSC supplements traditional financial measures with criteria for measuring performance in three additional areas: customers, internal business processes, and learning and growth. The presenters will also discuss and share their experiences and good practices in using these ROI approaches to streamline their core operations and to make sound investment decisions and strategies that further the mission of their institutions and meet the expectations of their various investors and key stakeholders.
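
As referenced in the TMT bioinformatics abstract above, the sketch below illustrates per-protein, per-cohort outlier masking and a per-cohort coefficient of variation, in the spirit of the workflow described there. The long-format column names and the median plus-or-minus 3 MAD rule are illustrative assumptions; the abstract does not specify the Core's exact outlier criterion.

```python
# Minimal sketch: mask outlying TMT abundances per protein and per cohort without
# discarding whole proteins or samples, then summarize within-cohort variability.
import pandas as pd

def mask_outliers(df: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """df: long format with columns ['protein', 'cohort', 'sample', 'abundance'].
    Sets an abundance to NaN when it deviates from its protein/cohort median by
    more than k median absolute deviations (an assumed rule for illustration)."""
    def _mask(group: pd.Series) -> pd.Series:
        med = group.median()
        mad = (group - med).abs().median()
        if mad == 0:  # all values identical; nothing to mask
            return group
        return group.where((group - med).abs() <= k * mad)

    out = df.copy()
    out["abundance"] = out.groupby(["protein", "cohort"])["abundance"].transform(_mask)
    return out

def cohort_cv(df: pd.DataFrame) -> pd.Series:
    """Coefficient of variation per protein and cohort, ignoring masked values."""
    g = df.groupby(["protein", "cohort"])["abundance"]
    return g.std() / g.mean()
```

Comparing cohort_cv before and after mask_outliers is one way to quantify the kind of within-cohort CV reduction the abstract reports, without excluding any protein or sample outright.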

Other

  • Untargeted Global Lipidomics in the Systems Biology Tri-ome Era
    John Asara, Beth Israel Deaconess Medical Center / Harvard Medical School
    Global profiling of lipids is emerging as a go-to -omics technology as high-resolution instrumentation and software improve. Systems biology approaches to understanding the workings of a cell or tumor as it becomes dysregulated by diseases such as cancer are becoming possible with advancements in proteomics, polar metabolomics, and non-polar lipidomics. I will focus primarily on our untargeted lipidomics platform, which uses a data-dependent acquisition (DDA) strategy with positive/negative polarity switching on a Q Exactive HF Orbitrap and commercial software (LipidSearch and Elements) for identifying a broad range of lipids and performing alignment and relative quantification (Breitkopf et al., Metabolomics, 2017). The platform relies on MS/MS fragmentation spectra to verify lipid identifications from a 30 min high-throughput RP-LC-MS/MS experiment. We will demonstrate the lipidomics platform and its place in systems biology using a serial-omics liquid-liquid extraction from a single cell or tumor sample in breast cancer, multiple myeloma, or urine samples, whereby the other extraction partitions can be used for other -omics technologies.
  • ABRF-MRG2016 Metabolomics Research Group Data Analysis Study
    Amrita Cheema, Georgetown University
    Metabolomics is an evolving field. One of the major bottlenecks in the field is the varied application of bioinformatics and statistical approaches for pre- and post-processing of global metabolomic profiling data sets collected on high-resolution mass spectrometry platforms. Several publications now recognize that variability in data analysis outcomes is caused by different data treatment approaches, yet there is a lack of inter-laboratory reproducibility studies that have examined the contribution of data analysis techniques to the variability and overlap of results. Our study design therefore recapitulates a typical metabolomics experiment in which the goal is to detect features that differ between two groups. The goal of the MRG 2016 study is to identify the contribution of data pre- and post-processing methodologies to the outcome. Specifically, for this study we used urine samples (a commonly used matrix for metabolomics-based biomarker studies) from mice exposed to 5 Gray of external beam gamma rays and from mice exposed to sham irradiation (control group). The data files were made available to study participants for comparative analysis using the bioinformatics and/or biostatistics approaches commonly used in their laboratories. The participants were asked to report back the top 50 metabolites contributing significantly to the group differences. We have received several responses, and the findings from the study are being consolidated for the MRG presentation at the ABRF 2017 meeting. (A minimal sketch of one such two-group analysis appears after this list.)
  • Bridging Topic III: Management for Scientists, Part 1: Core Business & Graduate Education Partnership
    Robert Carnahan, Vanderbilt University
    Management for Scientists is a unique educational collaboration between core facilities and the Vanderbilt University career development office for biomedical trainees. This twelve-week course targets both trainees and core personnel, providing a distillation of the tools and skills needed to be successful in business, whether your business is managing a core lab, directing a startup, applying for grants, or all of the above. After a didactic learning phase, teams of trainees paired with a core facility leader work on defining a solution to an actual business-related problem currently being experienced in the core lab. This session will feature both a trainee and a core lab director participant discussing the impact of the course on their personal development and on the functioning of the host facility. It will also present lessons learned on building and sustaining an educational partnership involving core labs.
  • Accelerating Research with 3D Biology™: Simultaneous Single-molecule Quantification of DNA, RNA, and Protein Using Molecular Barcodes.
    Niro Ramachandran, NanoString Technologies
    NanoString is pioneering the field of 3D Biology™ technology to accelerate the rate of research and maximize the amount of information that can be generated from a given sample. 3D Biology is the ability to analyze combinations of DNA (SNV and InDel detection), RNA (gene expression or fusion transcript detection), and protein (abundance and post-translational modifications) simultaneously on a NanoString nCounter® system. We will highlight the use of SNV technology in detecting cancer driver mutations and the utility of multiplexed DNA-labeled antibody approaches to quantify protein expression levels from small amounts of sample (both lysate and FFPE), and we will demonstrate the utility of multi-analyte analysis and the novel insights this approach can uncover.
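
As referenced in the ABRF-MRG2016 abstract above, the sketch below shows one common post-processing route for the described two-group design: per-feature Welch t-tests with Benjamini-Hochberg correction, reporting the top 50 features. The input layout and group labels are illustrative assumptions; study participants were free to apply whatever pre- and post-processing approaches their laboratories routinely use.

```python
# Minimal sketch: rank metabolomic features by a two-group test (irradiated vs. sham)
# and report the top 50; this is one of many reasonable difference-detection approaches.
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def top_features(intensities: pd.DataFrame, groups: pd.Series, n_top: int = 50) -> pd.DataFrame:
    """intensities: samples x features matrix of processed peak intensities;
    groups: per-sample label, 'irradiated' or 'sham' (assumed labels)."""
    a = intensities.loc[groups == "irradiated"]
    b = intensities.loc[groups == "sham"]
    t, p = stats.ttest_ind(a, b, equal_var=False)       # Welch's t-test per feature
    _, p_adj, _, _ = multipletests(p, method="fdr_bh")   # Benjamini-Hochberg FDR
    res = pd.DataFrame({"feature": intensities.columns, "t": t, "p": p, "p_adj": p_adj})
    return res.sort_values("p_adj").head(n_top)
```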
