Poster Session I and Wine & Cheese Reception
(313) LipidQuan–A Plug and Play Solution for Targeted Lipid Profiling
Lipid metabolism is complex, involving a large number of metabolic reactions that give rise to an enormous number and variety of lipid species within living cells. LIPID MAPS currently stores more than 40,000 lipid structures. The dynamic range of lipid concentrations in biological systems can span 10⁶ or more (from nanomolar fatty acids to attomolar eicosanoid lipid mediators). The precision of most systems-wide measurements is not yet sufficient to resolve the specific levels or concentrations of cellular components.
Comparing lipidomic data across laboratories requires absolute quantification, since relative values can vary widely between laboratories and between instruments due to various factors, including analyst error, differences in sample preparation (e.g. extraction methods), and ion suppression when using ESI-MS. The lack of accurate characterisation of lipid species also severely hinders interpretation of the lipid metabolism associated with disease and physiological states.
The proposed platform is an integrated, high-throughput analytical tool for accurate and robust measurement (>1500 injections) of lipids, from sample preparation through to data handling and pathway elucidation. The platform can also be used for more in-depth targeting of specific lipid classes of interest. Validation of the chromatographic method is performed at multiple sites by different analysts to demonstrate robustness and ease of method transfer. A rapid total lipid extraction method for plasma using IPA and MTBE shows promising results. This will give researchers more flexibility depending on their specific needs and requirements.
The calibration, system suitability, and QC standards used in this platform are sourced pre-mixed from commercial vendors (Avanti Lipids). Symphony™ Software is used to automate the entire workflow and is integrated with Skyline for data processing to enhance efficiency and flexibility. Once quantitative data have been generated and processed, pathway mapping tools can be used to determine the biological relevance of concentration changes and to compare data between laboratories.
(143) MagMAX DNA Multi-Sample Ultra 2.0 genomic DNA purification from DBS cards on the KingFisher instrument
The MagMAX™ DNA Multi-Sample Ultra 2.0 Kit (MagMAX™ Ultra 2.0) was developed for high-throughput purification of genomic DNA (gDNA) from a variety of sample matrices, including buccal swabs, buffy coat, saliva, and whole blood, using the KingFisher automated instrument platform. However, with the rise of direct-to-consumer (DTC) genetic testing companies, at-home collection of whole blood stored on dried blood spot (DBS) cards has become more commonplace. DBS cards such as Whatman 903™ offer many advantages over other matrices: as nucleic acids are immobilized and bound on the filter paper, drying excludes water, which renders proteases and nucleases inactive. Whatman FTA Classic DBS cards offer an added level of protection, with chemical treatment of the filter paper resulting in lysed cells, denatured proteins, and inactivated pathogens. For these reasons, DBS cards can be shipped and stored at room temperature and transported via regular mail. This study demonstrates workflow improvements to the Proteinase K (PK) digest in the MagMAX™ Ultra 2.0 workflow, enabling gDNA isolation from DBS cards spotted with either fresh blood or venous blood treated with anticoagulants such as K2EDTA.
(161) Streamlining DNA Sequencing and Bioinformatics Analysis Using Software Containers
Advances in software containerization are revolutionizing the way applications are distributed and executed. Containers are stand-alone software environments that encapsulate all dependencies an application may need, are built from well-defined recipes, and are immutable and portable, ensuring reliability and reproducibility of results.
The Bioinformatics Core of the Interdisciplinary Center for Biotechnology Research (ICBR) is using containers to streamline the management of Next-Gen Sequencing (NGS) data generated by the center’s Sequencing Core. NGS data analysis usually begins with a sequence of quality-control and cleanup steps that are common to most applications. These include trimming reads on the basis of quality, generating reports, and producing basic statistics on the sequencing run output (e.g. number of reads per sample, fraction of low-quality reads, etc.). These initial steps have been containerized and are now executed automatically after each sequencing run, before the datasets are handed over to the Bioinformatics Core for analysis. This strategy offers three advantages. First, QC reports are immediately available after the sequencing run is complete and can be delivered to the customer right away. Second, any problems with the data can be detected, and if necessary addressed, before starting the analysis, saving precious time. Third, Bioinformatics Core staff are freed from having to perform these routine tasks and are able to focus on the actual analysis of the data.
We describe the implementation of the containers, and how they were integrated into the standard workflow of the sequencing core. Examples include generation of QC reports via FASTQC and MULTIQC as well as read trimming via Trimmomatic or fastp. We also report on a preliminary evaluation of the benefits in terms of faster project turnaround and customer feedback. Future plans include integration with CrossLabs, using custom forms to select the specific pre-processing steps to be performed after each sequencing run.
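As a minimal sketch of the kind of basic run statistics mentioned above (reads per sample, fraction of low-quality reads), the following Python fragment computes both from in-memory FASTQ records. The quality cutoff and sample names are illustrative assumptions, not part of the ICBR pipeline:

```python
# Illustrative sketch (not the ICBR pipeline): compute reads per sample
# and the fraction of low-quality reads from FASTQ records. The mean-Q
# cutoff of 20 is an assumed threshold.
from statistics import mean

def phred_scores(qual_line):
    """Decode a Phred+33 quality string into integer scores."""
    return [ord(c) - 33 for c in qual_line]

def run_stats(fastq_by_sample, min_mean_q=20):
    """Per-sample read counts and the fraction of reads whose mean
    base quality falls below `min_mean_q`."""
    stats = {}
    for sample, records in fastq_by_sample.items():
        # FASTQ records come in groups of 4 lines; line 4 is the quality.
        quals = [records[i + 3] for i in range(0, len(records), 4)]
        low = sum(1 for q in quals if mean(phred_scores(q)) < min_mean_q)
        stats[sample] = {"reads": len(quals),
                         "low_quality_fraction": low / len(quals)}
    return stats

# Toy example: 'IIII' encodes Q40 bases, '!!!!' encodes Q0 bases.
demo = {
    "sampleA": ["@r1", "ACGT", "+", "IIII",
                "@r2", "ACGT", "+", "!!!!"],
    "sampleB": ["@r1", "ACGT", "+", "IIII"],
}
print(run_stats(demo))
```

Tools such as FASTQC compute far richer per-base metrics; this sketch only shows how the two headline numbers in a run report arise.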
(167) Design, Analysis, and Validation of Error-Correcting Internal Spike-In Controls for Metagenomics
Robust next-generation sequencing (NGS) metagenomics assays require defined detection limits and process traceability from sample collection to bioinformatic analysis. DNA sequence spike-ins can serve as qualitative controls, barcode and track samples, and provide absolute concentration data to address these challenges. Here we describe the design, analysis, and validation of error-correcting internal spike-in controls for metagenomics.
We developed software to design synthetic sequences and use a Hamming (3, 1) code to encode a sequence descriptor, barcode, and manufacturing lot information. This error-correcting encoding makes the spike-ins’ detection robust to DNA base substitutions, insertions, and deletions. These errors can occur either during manufacturing or as part of sequencing. Designed sequences are not homologous to known reference genomes and contain no homopolymer runs. Finally, we added support in the One Codex Platform for automatically detecting and analyzing these spike-in sequences.
We tested this software and spike-in control design by generating synthetic sequences, synthesizing them, and adding them at multiple concentrations as part of a clinical NGS workflow. Initial results across a range of sample types and library preparations are presented.
(315) MetaboQuan-R: A Series of Rapid, Targeted UPLC-MS/MS Methods for Metabolomics Research in a Core Laboratory
A series of rapid UPLC-MS/MS methods has been developed on a single platform, with an identical analysis workflow, for high-throughput measurement of derivatized amino acids, acylcarnitines, bile acids, and free fatty acids in human serum to support metabolomics research. CORTECS UPLC Column technology enables a generic chromatographic approach for the separation and measurement of various metabolites without compromising throughput. Separation of isomers (amino acids and bile acids) is achieved in analytical run times of <3 min, making these methods powerful and well suited to a core laboratory. Furthermore, these methods have been demonstrated to be suitable for the analysis of physiologically relevant levels of metabolites in humans during targeted multi-omics analysis.
Human serum sample preparation: A generic sample preparation method involving protein precipitation with methanol (1:4 serum:methanol) suffices for the extraction of acylcarnitines, bile acids, and free fatty acids from human serum. For amino acid analysis, serum samples were prepared using the Waters™ AccQTag Kit following the kit protocol.
LC conditions: UPLC separation was performed with an ACQUITY UPLC I-Class System (fixed loop), equipped with a CORTECS T3 2.7 µm (2.1 × 30 mm) analytical column. A sample of 2 µL was injected at a flow rate of 1.3 mL/min. Mobile phase A was 0.01% formic acid (aq) and Mobile phase B was 50% isopropanol in acetonitrile containing 0.01% formic acid. The LC gradient and column equilibration times were optimized for each class of metabolites. The analytical column temperature was maintained at 60°C.
MS conditions: Multiple Reaction Monitoring (MRM) analyses were performed using a Xevo TQ-S micro mass spectrometer. All experiments were performed in electrospray ionization mode.
Informatics: Method information was imported onto the LC-MS system using the Quanpedia functionality within MassLynx. This extendable and searchable database produces LC and MS methods as well as processing methods for use in TargetLynx for compound quantification.
(155) RipTide™ High Throughput NGS Library Prep for Genotyping in Populations
High throughput genotyping technologies are required for large-scale population genetics. Evolutionary biology studies, human disease research and large-scale agricultural breeding programs all lend themselves to technologies that are able to provide more information at lower cost. Over the past decade, genotyping technology has transitioned from PCR-based SNP assays to microarrays, and is now shifting toward high-throughput genotyping by sequencing (GBS). The RipTide High Throughput Rapid DNA Library Prep allows for the preparation of NGS libraries from up to 960 individually barcoded samples in a few hours with automation. When combined with low coverage sequencing and imputation-based genotype analysis, the result is an order of magnitude greater information at a significantly reduced cost. Here we present data on 96 Zea mays (maize) samples consisting of 4 parent populations and 92 recombinant inbred lines (RILs). For each sample, hundreds of thousands to millions of haplotype markers, including SNVs and structural variants, are accurately detected. A minimum of 95% complete coverage of direct and imputed markers is obtained for each RIL. The approach can be applied to any species, regardless of genome size or GC content. In this study, a median of >1 million markers were genotyped by sequencing on an Illumina HiSeq 4000 instrument for an estimated cost of library construction and sequencing of < $25 per sample.
(301) Analysis of Human Nuclear Protein Complexes by Quantitative Mass Spectrometry Profiling
Analysis of protein complexes provides insight into how the expressed proteome is organized into functional units. While there have been advances in techniques for proteome-wide profiling of cytoplasmic protein complexes, information about human nuclear protein complexes is very limited. To close this gap, we combined native size exclusion chromatography (SEC) with label-free quantitative MS profiling to characterize hundreds of nuclear protein complexes isolated from human glioblastoma multiforme T98G cells. We identified 1,794 proteins that overlapped between two biological replicates, of which 1,244 were characterized as existing within stably associated putative complexes. Co-IP experiments confirmed the interaction of PARP1 with the Ku70/Ku80 proteins, HDAC1 (histone deacetylase 1), and CHD4. HDAC1/2 also co-migrated in SEC fractionation with various SIN3A and nucleosome remodeling and deacetylase components, including SIN3A, SAP30, RBBP4, RBBP7, and NCOR1. Co-elution of HDAC1/2/3 with both KDM1A and RCOR1 further confirmed that these proteins are integral components of human deacetylase complexes. Our approach also demonstrated the ability to identify potential moonlighting complexes and novel complexes containing uncharacterized proteins. In our presentation, we will discuss the utility of SEC fractionation coupled with label-free LC–MS profiling to determine novel protein complexes.
(327) Preliminary Study on Protein Profiling from Raw Honey
Honey is known for its medicinal value. Several components, including flavonoids, polyphenolic compounds, alkaloids, and glycosides, have been reported from honey and are responsible for its antimicrobial, anti-inflammatory, and antioxidant activities. The major component of honey is carbohydrate, but other macro- and micronutrients, including proteins, are also present in low amounts. Proteins identified in honey originate from either plants or honey bees. The honey proteome has not been fully characterized and has untapped potential for identifying proteins with diverse functions. In this preliminary study, we aimed to extract and profile proteins from honey. Unpasteurized raw organic honey was purchased from a local market and mixed 1:1 with Tris-HCl buffer, pH 8.0. Proteins were extracted by ammonium sulphate precipitation followed by dialysis (3.5 kDa cutoff). Gel filtration chromatography was performed on a Superdex 200 16/600 column, and the collected fractions were further purified by RP-HPLC on a Vydac C4 (2.1 × 250 mm) column. The crude protein and eluted fractions were analyzed by electrophoresis on 12% Tris/tricine gels. The crude honey protein was carbamidomethylated and digested with TPCK-treated trypsin. Mass spectrometry analysis was performed using an Impact II QTOF MS/MS instrument, and the data generated were searched against the Swiss-Prot database. The electrophoretic profile showed proteins in the range of 25-75 kDa. The mass spectrometry database search revealed proteins such as the five major royal jelly proteins I-V and a chymotrypsin inhibitor from Apis mellifera, as well as the neuropeptide SIFamide receptor from the fruit fly Drosophila melanogaster. The mass data were also searched using PEAKS software against the Swiss-Prot database, and germin-like proteins from Arabidopsis thaliana were also identified in honey, emphasizing the presence of characteristic plant proteins.
(317) Purification, Characterization and Cytotoxic Activity of Peptides and Proteins from Bitter Melon (Momordica charantia) Seeds
Medicinal plants are a rich source of pharmaceutically active peptides and proteins. Momordica charantia, a traditional medicinal plant commonly known as bitter melon, is a member of the family Cucurbitaceae and has been explored for various human diseases such as peptic ulcer, malaria, diabetes, infectious diseases, and cancer. However, to date few studies of the bitter melon proteome have been reported, despite rapid advancement in proteomics. Therefore, studies aiming to explore novel biologically active proteins and the whole proteome of bitter melon are needed for better understanding. In the present study, a novel trypsin inhibitor peptide was purified by 2D-LC and its amino acid sequence established by Edman protein sequencing. Purified peptides and proteins were also explored for their anticancer effects. Crude seed proteins were extracted in phosphate buffered saline (PBS) and separated on a gel filtration chromatography column (HiLoad 16/600 Superdex 200). The collected fractions were resolved by SDS-PAGE and screened against the MCF-7 human breast cancer cell line. Active fraction 12, with an IC50 value of 100 μg/mL, was further purified by RP-HPLC and analyzed by MALDI for purity. The purified 3.2 kDa trypsin inhibitor peptide was modified with 4-vinylpyridine and sequenced using an Edman PPSQ sequencer. BLAST results showed 96% sequence similarity with MCTI, 94% with MCTIII, and 69% similarity with other reported trypsin inhibitors from the Momordica genus. Further, to identify seed proteins, fractions 5-13 were digested with trypsin and analyzed by LC-MS/MS. The peak mass list was searched against the SwissProt Viridiplantae database using the Mascot search engine. A total of 275 proteins spanning a wide molecular weight and pI range were identified. These findings expand our knowledge of the bitter melon seed proteome and add a new member to the existing family of trypsin inhibitor peptides.
(131) Exploring the benefits of using commercially available pre-plated DNA-seq reagents for high throughput NGS library prep
The breadth of clinical genomics testing research has been steadily increasing as the cost of Next Generation Sequencing (NGS) decreases. Currently, a major bottleneck in a lab’s NGS workflow is library preparation which, when performed manually at the bench, can be labor-intensive, time-consuming, error-prone, and operator-dependent. As throughput demands grow, automation of library preparation helps reduce some of these issues, though not entirely, since reagent preparation and plate setup steps are still typically performed manually. Here we highlight the benefits of a commercially available pre-plated NGS library prep kit versus the standard, tube-based version of the same kit that requires upstream preparation prior to automation. To demonstrate these benefits, library prep was performed using gDNA and FFPE samples on the PerkinElmer® Sciclone® NGSx workstation using the NEXTFLEX® Rapid XP DNA-Seq reagents. Setup times of (1) manual mixing and plating of reagents from tubes and (2) pre-plated reagents were compared, as well as the reproducibility and quality of both methods. We find a 5- to 10-fold decrease in robot setup time, depending on the technician. Additionally, libraries generated using the pre-plated reagents were more reproducible and showed no failures or reagent plating errors. These results highlight the ability of pre-plated library prep reagents to save time and minimize cost, all while providing reliable results from run to run. These features would be especially beneficial to any laboratory that needs a robust high-throughput DNA-seq solution for its unique genomic clinical research testing applications.
(127) Enzymatic Methyl-Seq: Next Generation Methylomes
DNA methylation is important for gene regulation. The ability to accurately identify 5-methylcytosine (5mC) and 5-hydroxymethylcytosine (5hmC) gives greater insight into potential gene regulatory mechanisms. Bisulfite sequencing (BS) is traditionally used to detect methylated cytosines; however, BS has its drawbacks. DNA is commonly damaged and degraded by the chemical bisulfite reaction, resulting in libraries that demonstrate high GC bias and are enriched for methylated regions. To overcome these limitations, we developed an enzymatic approach to methylation detection, NEBNext® Enzymatic Methyl-Seq (EM-Seq™), that minimizes DNA damage, resulting in longer fragments and minimal GC bias.
Human NA12878 Illumina libraries were prepared using bisulfite and EM-Seq methods. Libraries generated with DNA inputs ranging from 10 ng to 200 ng were sequenced on an Illumina NovaSeq 6000. Reads were adapter-trimmed (trimadap) and aligned to GRCh38 using bwa-meth. Aggregate metrics such as GC bias and insert size distribution (Picard) were assessed before evaluating the methylation status of individual cytosines (MethylDackel). methylKit was used for correlation analysis. EM-Seq libraries have longer inserts, lower duplication rates, a higher percentage of mapped reads, and less GC bias than bisulfite-converted libraries. Global methylation levels are similar between EM-Seq and whole-genome bisulfite (WGBS) libraries, indicating that overall detection of methylated cytosines is similar. However, CpG correlation plots showed higher correlation coefficients, indicating that EM-Seq libraries are more consistent than WGBS across replicates and input amounts. GC bias and dinucleotide distributions showed that EM-Seq has more even dinucleotide representation than the AT-rich representation observed for WGBS. EM-Seq’s more even coverage allows a higher percentage of CpGs to be assessed, leading to more consistent evaluation of methylation across key genomic features (TSSs, CpG islands, etc.). EM-Seq is more robust than WGBS, works over a wide range of DNA input amounts, has superior sequencing metrics, and detects more CpGs.
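The per-CpG correlation analysis described above can be sketched in a few lines: the methylation level at each CpG is the methylated-read fraction, and replicate agreement is summarized by a Pearson correlation coefficient. The counts below are hypothetical, not the study's data:

```python
# Illustrative sketch (hypothetical counts) of per-CpG correlation
# between two replicates, in the spirit of methylKit's correlation plots.
import math

def methylation_levels(counts):
    """counts: {cpg_site: (methylated_reads, total_reads)} -> fractions."""
    return {site: m / t for site, (m, t) in counts.items() if t > 0}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical replicates measured at the same three CpG sites.
rep1 = methylation_levels({"chr1:100": (18, 20), "chr1:250": (2, 20),
                           "chr1:400": (10, 20)})
rep2 = methylation_levels({"chr1:100": (19, 20), "chr1:250": (1, 20),
                           "chr1:400": (11, 20)})
shared = sorted(set(rep1) & set(rep2))
r = pearson([rep1[s] for s in shared], [rep2[s] for s in shared])
print(f"Pearson r across {len(shared)} shared CpGs: {r:.3f}")
```

Higher r across replicates, at matched inputs, is what distinguishes the EM-Seq libraries from WGBS in the comparison above.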
(211) MetaMax: An easy-to-use calibration tool to maximize the value of fluorescence microscopy data
Fluorescence microscopy continues to become a more sensitive and versatile tool for many branches of science, thanks to advances in fluorescent labeling as well as in microscope technology and image processing. As we continue to push the limits of what is technically possible, the quality of data obtained through fluorescence microscopy is increasingly determined by factors that are often not readily visible in the image: the image acquisition settings, microscope properties, and data-processing steps often contribute significantly to the experimental outcome and therefore need to be known and understood for proper interpretation and comparison.
Accurate metadata collection and optical calibration of the microscope go a long way towards allowing imaging data to be properly evaluated and compared; however, certain crucial pieces of information are simply not captured by even the most rigorous and precise routines for record-keeping and calibration, as they cannot be measured without the aid of (often costly, cumbersome, and complicated) external devices. Here, we present an inexpensive, easy-to-use calibration device that, among other things, allows the user to measure excitation power and perform basic detector calibration routines. In doing so, the MetaMax tool provides crucial metadata for evaluating potential phototoxicity and allows current and future model-based data-processing tools to extract as much quantitative information as possible from the images.
(325) Maximize the output of routine proteome analyses by using micro pillar array column technology
As an alternative to the conventional packed-bed nano-LC columns frequently used in bottom-up proteomics research, PharmaFluidics offers micromachined nano-LC chip columns known as micro pillar array columns (μPAC™). The inherently high permeability and low on-column dispersion produced by the perfect order of the separation bed make μPAC™-based chromatography unique. Peak dispersion originating from heterogeneous flow paths in the separation bed is eliminated (no A-term contributions), so components remain much more concentrated during separation, resulting in unprecedented separation performance. The freestanding nature of the pillars also leads to much lower backpressure, allowing high operational flow-rate flexibility with exceptional peak capacities.
Complementary to its landmark 200 cm column, which is ideally suited to comprehensive proteome research, a 50 cm μPAC™ column is now available for more routine research settings. With an internal volume of 3 μL, this column is perfectly suited to high-throughput analyses with shorter solvent gradients (30, 60, and 90 minutes) and can be used over a wide range of flow rates, between 100 and 2000 nL/min. Recent experiments with 500 ng of HeLa cell digest indicate that an increase of up to 50% in protein identifications and a gain of 70% in peptide identifications can be achieved when comparing the 50 cm μPAC™ column to the current state of the art in packed-bed columns. The conventional packed-bed columns (from two different vendors) used for this benchmarking experiment were 15 cm long and packed with sub-2 μm porous silica particles. The LC pump pressures needed to operate these classical columns at a flow rate of 300 nL/min range between 200 and 300 bar, whereas only 40 bar was needed to operate the 50 cm μPAC™ column under the same conditions.
(139) Infectious Disease Metagenomics – Error Mitigation And Best Practices For The Clinical Routine Use Of Metagenomic Sequencing
Shotgun metagenomic sequencing is increasingly adopted by the biomedical community for clinical infection diagnosis and for surveillance applications. Benefits include highly accurate, unbiased, and culture-independent characterization of microbial communities. As a consequence, metagenomics is complementing traditional infectious disease tools such as culture, antimicrobial susceptibility testing (AST), and PCR.
Despite its potential for clinical microbiology, many laboratories are challenged by the method’s disruptive effect on traditional lab workflows and by the complexities inherent to establishing a robust, standardized, and validated workflow in the clinical lab. Metagenomics is uniquely sensitive to the introduction of contamination and bias along almost every step of the workflow which can impact accuracy, precision, and a timely and actionable diagnosis. Therefore, the optimization and standardization of pre-sequencing, sequencing, and post-sequencing steps have to be carefully considered.
In this presentation we shed light on failure-modes and present mitigation strategies employed at the CosmosID CLIA-certified NGS Service Laboratory.
We address the optimization and validation of laboratory methods designed to avoid laboratory contamination and to control for the introduction of bias or contamination. The use of internal standards, including positive and negative controls, is an important part of quality control.
Also, the bioinformatic analysis of metagenomic data remains a challenge for many laboratories. A myriad of published algorithms scientifically explore different approaches for deconvoluting the valuable biological signal from bias and error introduced during the pre-sequencing and sequencing phases. While the clinically informative and actionable unit in microbiology is a strain, not a genus or species, most available methods fail to taxonomically classify detected microbes with sub-species level resolution. We present data from independent validations demonstrating that CosmosID algorithms and proprietary databases enable classification of microbes with strain-level resolution and industry-leading sensitivity and precision.
(309) Impact of enrichment strategy on observed expression in fresh frozen tissues
Human tissues obtained during clinical and surgical procedures are an invaluable resource for determining diagnosis, treatment response, and disease progression. In many cases, these biological specimens are preserved for future analyses, most commonly by formaldehyde fixation and paraffin embedding (FFPE). FFPE has the advantage that such specimens are stable at room temperature for years, but it introduces additional complexity in terms of sample preparation, protein modifications, and degradation. Frozen tissues are costly to store, but are preferred for post-translational modification analyses even though they suffer some degradation over long timescales. Significant interest remains in optimizing recovery from both storage and sample processing conditions for these precious specimens. Here, we compare fresh frozen samples of different biological origin and complexity, such as heart, brain, and lung tissues, and assess the differences between peptides enriched by hydrophilic interaction chromatography (HILIC) on MagReSyn polymer particles and by standard reversed-phase C18 enrichment. For example, we lysed and extracted proteins from ~18 mg of frozen normal autopsy heart tissue, digested them with Trypsin/Lys-C, and analyzed them by Orbitrap mass spectrometry. From a 500 ng injection, we observed 1,749 proteins from the C18 preparation and 1,678 proteins from the HILIC preparation, with 20k peptide-spectrum matches in both cases. The proteins identified from both preparations are broadly similar in terms of protein families and function. However, more detailed analysis of the particular proteins and pathways observed, using tools such as GeneTrail2, shows that each preparation favors particular pathway members: HILIC preparation favors identification of proteins involved in glycolysis, while C18 preparation identified proteins involved in GTP hydrolysis and ribosomal assembly.
We will further analyze additional tissues, including lung tissue with adenocarcinoma and brain tissue with glioblastoma, with an eye to understanding the advantages or disadvantages of particular preparations based on the biological hypothesis being tested.
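The C18-versus-HILIC comparison described above reduces, at the identification level, to set arithmetic over the protein lists from each preparation. The sketch below uses hypothetical protein names purely for illustration:

```python
# Illustrative sketch (hypothetical protein IDs) of comparing the
# identifications from two enrichment strategies via set arithmetic.
def compare_preps(c18_ids, hilic_ids):
    """Return the proteins shared by and unique to each preparation."""
    c18, hilic = set(c18_ids), set(hilic_ids)
    return {"shared": c18 & hilic,
            "c18_only": c18 - hilic,
            "hilic_only": hilic - c18}

result = compare_preps(["ALDOA", "GAPDH", "RPL3"],   # hypothetical C18 IDs
                       ["ALDOA", "GAPDH", "PKM"])    # hypothetical HILIC IDs
print(sorted(result["shared"]))
```

The prep-specific sets are what feed the pathway enrichment step (e.g. with GeneTrail2) that reveals each method's biases.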
(177) Cross Site Evaluation of Sanger Sequencing Dye Chemistries
Sanger sequencing remains an essential tool for researchers, and despite competition from commercial providers, many sequencing core facilities continue to offer Sanger sequencing services to their customer base. By reducing costs and providing rapid turnaround times, in-house Sanger sequencing remains a viable core service, often helping to subsidize more costly services such as next generation sequencing. While Applied Biosystems’ BigDye™ Terminator chemistry was once the only solution available for Sanger DNA sequencing, several new products employing novel dye chemistries and reaction configurations have entered the market. Currently, it is unclear how these new chemistries perform on various DNA templates, including difficult templates, or how amenable they are to commonly employed cost-saving measures such as dye dilution and reaction miniaturization. To address this, we compared the quality of Sanger sequencing data produced by kits available from several vendors using control and difficult-to-sequence DNA templates under various reaction conditions. This study will serve as a valuable resource for core facilities conducting Sanger sequencing, providing guidelines on the appropriate protocols to use with each kit and identifying the most cost-effective solutions for Sanger sequencing while maintaining high-quality results.
(113) ampPD – An Automated Primer Design Tool for Highly Multiplexed, Single-tube Tiled Amplicon PCR for Resequencing of Tumor Samples
Introduction: PCR-based target enrichment is widely used to prepare libraries for next generation sequencing (NGS). Amplicon tiling is often used for assay design but can result in amplicon overlap that reduces assay sensitivity. Pillar Biosciences’ SLIMamp® enrichment technology enables efficient amplicon tiling even with overlapping amplicons. Here, we present an automated primer design tool (ampPD) for amplicon tiling and evaluate the performance of an assay designed against the TP53 tumor suppressor gene.
The ampPD workflow includes target preparation, candidate primer generation, primer selection, and pooling. All 11 TP53 coding exons were targeted for amplification, and the performance of the assay design was evaluated by synthesizing and pooling the ampPD output primers to prepare SLIMamp libraries from 10 ng, 5 ng, and 1 ng of input DNA from FFPE tumor samples. The resulting libraries were normalized, pooled, and sequenced on an Illumina® MiSeq®.
Automated TP53 primer design generated an initial pool of 627 compatible primers that was reduced to an optimized pool of 19 amplicons, of which 16 were overlapping. The total design took roughly 10 seconds to complete. The assay displayed high coverage uniformity, with 100% of targeted bases covered at >0.2× the mean coverage for all samples. For the 5 ng and 10 ng samples, variant detection was highly reproducible and 100% concordant with known outcomes, with a median on-target rate of 86%. For the 1 ng samples, all positive variants were detected, but with increased background noise and a lower median on-target rate.
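The uniformity figure quoted above (fraction of targeted bases covered above 0.2× the mean depth) can be sketched as follows; the exact definition used in the assay and the per-base depths below are assumptions for illustration:

```python
# Sketch of a coverage-uniformity metric (assumed definition): the
# fraction of targeted positions whose depth exceeds `threshold` x the
# mean depth over the target.
def coverage_uniformity(depths, threshold=0.2):
    """Fraction of positions with depth above threshold * mean depth."""
    mean_depth = sum(depths) / len(depths)
    cutoff = threshold * mean_depth
    return sum(1 for d in depths if d > cutoff) / len(depths)

# Toy per-base depths over a small target: mean = 100, cutoff = 20,
# and every base exceeds it, giving 100% uniformity.
depths = [120, 95, 110, 80, 95]
print(coverage_uniformity(depths))
```

A value of 1.0 corresponds to the "100% of targeted bases" result reported for all sample inputs.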
The ampPD tool is a rapid and robust primer design pipeline for amplicon tiling assays. Using TP53 as a model, the resulting primers provided uniform coverage and sensitive variant detection at FFPE-derived DNA inputs >1ng. Future work will extend the design pipeline to larger and more complex panels, enabling rapid creation of single-tube targeted sequencing assays for a variety of applications.
(163) The Complete Microbiome Workflow: Deciphering the Secrets of the Human Microbiome for Future Diagnostic Solutions
The human microbiome is an exciting and rapidly expanding field of research aimed at studying the bacteria, fungi and viruses (beneficial as well as pathogenic) that densely populate our bodies. The Human Microbiome Project demonstrated that microbiome dysbiosis in the gastrointestinal tract may eventually cause various diseases including Crohn's disease, C. difficile infection, ulcerative colitis, diabetes and cancer. Correlating microbiome compositions with these chronic diseases holds great promise for the development of innovative diagnostics and therapeutics. We developed a new automated high-throughput microbiome kit for purification of fecal total nucleic acid (DNA and RNA) with superior yields and quality. This magnetic bead-based kit enables processing of 96 stool samples on the KingFisher Flex platform within just 1 h. Gene expression analysis of these total nucleic acid extractions, using TaqMan assays for diverse bacterial species, confirmed superior depletion of inhibitors of enzymatic reactions and the lowest Ct values versus other commercial kits. This presentation will highlight several case studies that utilized the Ion Torrent platform for 16S sequencing to characterize microbiome compositions in fecal samples from various donors. One case study explored the impact of diet on the gut microbiome profile; the NGS data indicated that the gut microbiome can respond to dietary perturbations, as we observed several shifts in the abundances of certain bacterial families in the GI tract of a person on a nutritionist-recommended diet. Another case study was performed with fecal samples collected from a person taking probiotics, to explore whether there are corresponding changes in the gut microbiome profile. 16S sequencing results detected a significant shift in Firmicutes/Bacteroidetes levels with the intake of probiotics.
In conclusion, the workflow we developed to harness the power of the microbiome enables fast generation of metagenomic data for bacterial communities residing within the human gut, which can be utilized as diagnostic biomarkers for certain diseases and potentially pave the way toward future microbiome therapeutics.
(307) High Sensitivity PTM Characterization in Complex Cell Lysates Using Trapped Ion Mobility
Post-translational modification of proteins represents an essential mechanism that regulates the function and abundance of proteins and is critical to a wide variety of cell processes such as signal transduction, cell development and mitosis. Post-translationally modified peptides are often present in low abundance, and isobaric peptides that differ only by the site of the modification represent a significant analytical challenge. Using a trapped ion mobility spectrometry (TIMS)-equipped QTOF, mouse samples enriched for phosphorylated tyrosine, acetylated lysine, K-ε-GG, and symmetric and asymmetric dimethylated arginine yielded 1095, 8804, 7199, 300 and 147 unique modified peptides, respectively, from 15 mg of starting material using a 90 min gradient. More than 3450 unique peptides with acetylated lysines were identified from only 150 µg of starting material. We have previously shown the unique ability of the TIMS analyzer to separate isobaric, coeluting phosphopeptides that differ only by the position of the site of phosphorylation on the peptide backbone. This will be extended to the localization of additional isobaric positional isomers using the unique ability of the TIMS process to increase the ion mobility resolution by varying the trapping range and ramp time of the second TIMS analyzer.
(107) MULTI-seq: Scalable Single-Cell RNA-seq Multiplexing Using Lipid-Tagged Indices
MULTI-seq is a rapid, modular, and universal scRNA-seq sample multiplexing strategy using lipid-tagged indices. MULTI-seq reagents can barcode any cell type from any species with an accessible plasma membrane in 10 minutes, and also function on nuclei. The method is compatible with enzymatic tissue dissociation and downstream FACS enrichment, and preserves viability and endogenous gene expression patterns. We leverage these features to multiplex the analysis of multiple solid tissues comprising human and mouse cells isolated from patient-derived xenograft mouse models. We also utilize MULTI-seq's modular design to perform a 96-plex perturbation experiment with human mammary epithelial cells. MULTI-seq also enables robust doublet identification, which improves data quality and increases scRNA-seq cell throughput by minimizing the negative effects of Poisson loading. We anticipate that the sample throughput and reagent savings enabled by MULTI-seq will expand the purview of scRNA-seq and democratize the application of these technologies within the scientific community.
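The doublet-identification idea above can be illustrated with a toy classifier: a cell whose reads carry two different sample barcodes above a count threshold is flagged as a doublet. The function, barcode names and threshold below are hypothetical illustrations, not the published MULTI-seq classification algorithm.

```python
# Toy sketch of barcode-based doublet calling: one cell's lipid-tag barcode
# UMI counts determine whether it is a singlet, a doublet, or unlabeled.

def classify_cell(barcode_counts, threshold=10):
    """barcode_counts: dict of sample_barcode -> UMI count for one cell."""
    positives = [b for b, n in barcode_counts.items() if n >= threshold]
    if len(positives) == 0:
        return "negative"          # no confident sample label
    if len(positives) == 1:
        return positives[0]        # singlet, assigned to its sample
    return "doublet"               # two samples claimed -> cross-sample doublet

print(classify_cell({"BC1": 85, "BC2": 2}))    # BC1
print(classify_cell({"BC1": 40, "BC2": 33}))   # doublet
```

Real implementations fit per-barcode count distributions rather than using a fixed threshold, but the decision structure is the same.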
(303) Absolute Quantitation of the N-Linked Glycans from Biotherapeutic IgGs
The ability to accurately quantitate the glycan chains attached to glycoproteins has wide-ranging implications. Numerous studies over the past 40 years have demonstrated that abnormal glycosylation occurs in virtually all types of human cancers, pointing to the potential of using glycan markers in either a diagnostic or a prognostic manner. The glycosylation on recombinant protein therapeutics is also known to have profound effects, one of the better-known examples being the increased serum half-life of erythropoietin (EPO) resulting from glycoengineering. Hence, the quantification of glycoprotein glycans plays important roles from the discovery of new diagnostic/prognostic markers to the development of therapeutic agents.
The focus of this presentation is the evaluation of an isotopically labeled IgG as an internal standard for the relative and absolute quantitation of N-linked glycans attached to human IgGs. We have developed a HILIC-MRM protocol that permits us to monitor 36 glycoforms attached to the conserved N-linked glycosylation site in the Fc region of human IgGs. This procedure can be applied to the analysis of IgGs in cell culture media without the need for extensive sample cleanup and involves minimal sample processing. Essentially, the sample is spiked with an internal standard consisting of an isotopically labeled IgG. The material is then reduced, alkylated, digested with trypsin, and analyzed by HILIC-MRM. The internal standard allows for both absolute and relative quantitation across multiple samples and reduces the experimental error to <10%. To demonstrate the utility of our HILIC-MRM approach, we performed a time course experiment to evaluate how glycosylation changes over the course of an expression run and compared glycosylation profiles obtained from IgGs expressed under different conditions.
(503) Leveraging the versatility of the chicken embryo chorioallantoic membrane (CAM) model as a tool in the fight against cancer.
Patient-derived xenograft (PDX) models have become the gold standard tool for pre-clinical evaluation of promising anti-cancer therapeutics. However, most PDX models involve the use of immune-deficient mouse models that are becoming increasingly cost-prohibitive; their use is limited to certain cancer types, and experiments are often time- and labor-intensive. The chicken embryo chorioallantoic membrane (CAM) represents a rapid, scalable, and cost-effective alternative in vivo 3D culture and PDX platform that streamlines the drug discovery process. Primary or tumorigenic established cell lines are engrafted as a mixture of cells and basement membrane extract on the CAM of embryonic day 7 SPF-certified eggs (White Leghorn). 3D organoids or tumors are allowed to grow on the CAM for up to 10 days before embryos are sacrificed and tumors are collected. Organoid or tumor growth is imaged, quantified, and analyzed under different experimental conditions. Furthermore, angiogenesis, cell invasion into the CAM, and metastasis can also be reliably quantitated between different groups. Our results support the establishment of over 40 combined cell lines and PDX models, the ability to assess angiogenesis, and the measurement of micrometastasis into the chick embryo visceral organs. In conclusion, the CAM model serves as a strong alternative to, and complement of, the mouse PDX model: a versatile, more permissive and rapid platform for applications such as therapeutic agent testing and functional mechanistic studies that more closely resemble the in vivo setting.
(117) Automation of IDT probe-based targeted enrichment workflow using xGen® Lockdown Probes in a high-throughput lab
The Center for Inherited Disease Research (CIDR) provides high quality next-generation sequencing (NGS), genotyping and statistical genetics consultation to investigators working to discover genes that contribute to disease. CIDR began production-level automated genotyping in 1996 with human STRP linkage panels and continues to automate its production workflows as technologies advance. Automated protocols now exist for Illumina GWAS arrays and whole exome and targeted sequencing in research and clinical settings. CIDR continually seeks new laboratory and informatics approaches to improve its workflow and reduce human error while maintaining the highest quality of data production. Here we have created an automated workflow to process IDT® (Integrated DNA Technologies) probe-based target enrichment for NGS, using the Perkin Elmer Janus® to normalize and pool samples prior to capture, and the Agilent Bravo® for hybridization, capture, wash, and amplification master mix distribution. HapMap- and FFPE-generated libraries were enriched for the IDT® PanCancer Panel using both manual and automated processing and were subsequently sequenced on the Illumina MiSeq® platform. Flow cell data were processed using the CIDRSeqSuite analysis pipeline to generate QC metrics for both processing types to confirm that the addition of automated steps maintained or increased the quality of the sample data. QC data analysis showed that automated processing increased the percent selection of both HapMap and FFPE sample types by over 5%. The reproducibility rate between automated and manual processing was 99.5% for HapMap samples.
(119) Comparison of three libraries for 10x Genomics single cell immune TCR repertoire profiling
10x Genomics single cell sequencing provides a comprehensive and scalable solution for cell characterization and gene expression profiling of hundreds to tens of thousands of cells. We tested three libraries for 10x Genomics single cell immune TCR repertoire profiling: (1) 5' gene expression, (2) direct enrichment for TCR, and (3) post-cDNA-amplification enrichment for TCR, on two invariant natural killer T (iNKT) cell samples, B240 and B241, with iNKT purity of 90% and 45%, respectively. We sequenced about 1100 cells for B240 and 580 cells for B241. For B240, we identified 165, 284, and 332 productive clonotypes with the three libraries, respectively, of which 47 clonotypes overlapped among all three libraries. For B241, we identified 88, 152 and 148 productive clonotypes, respectively, of which 35 overlapped among all three libraries. We also correlated gene expression data with clonotypes using the library 3 data. Since each sample is a mixture of iNKT cells and classic (non-invariant) NKT cells, PCA/tSNE plots of the 5' gene expression data show two separate clusters for each sample, identifiable by TCR clonotype usage. One cluster contains unique or very few alpha/beta clones, while the other contains diverse alpha/beta clones. Based upon our data, the number of clonotypes identified is proportional to the number of cells sequenced, regardless of library. Library 1 has low sensitivity in detecting clonotypes because the TCR genes are not enriched, and only the beta chain was detected for clonotypes. The other two libraries identified high and similar numbers of clonotypes. However, library 3 can associate gene expression patterns with TCR clonotype usage, which is a significant advantage over the other two libraries for 10x Genomics single cell immune TCR repertoire profiling.
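The cross-library overlap figures above reduce to set operations over productive clonotype identifiers. The clonotype IDs below are illustrative placeholders, not data from the study.

```python
# Sketch: overlap of productive clonotype calls across three library preps.
# Each library's output is modeled as a set of clonotype identifiers.

lib1 = {"TRAV1-TRBV2", "TRAV3-TRBV7", "TRAV9-TRBV5"}
lib2 = {"TRAV1-TRBV2", "TRAV9-TRBV5", "TRAV4-TRBV1"}
lib3 = {"TRAV1-TRBV2", "TRAV9-TRBV5", "TRAV6-TRBV9"}

shared_all = lib1 & lib2 & lib3   # clonotypes detected by all three libraries
union_all = lib1 | lib2 | lib3    # every clonotype detected at least once
print(len(shared_all), len(union_all))  # 2 5
```

In the study's data, this intersection held 47 clonotypes for B240 and 35 for B241.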
(145) Multiplex miRNA Profiling for Biomarker Discovery and Verification Studies Using the FirePlex® Platform
We have developed the FirePlex® Technology Platform to address the need for rapid and sensitive biomarker quantitation. Utilizing patented FirePlex hydrogel particles and a three-region encoding design, FirePlex assays allow for true, in-well multiplexing, providing flexible and customizable analyte quantification.
To facilitate miRNA biomarker discovery studies, we offer our standard FirePlex miRNA assays for quantitation of 5-400 miRNA targets per sample, with data acquisition on standard flow cytometers. For miRNA screening studies requiring faster workflows, we offer our high-throughput miRNA assays (miRNA-HT). The high-throughput assays allow for quantitation of 5-36 miRNA targets per sample, with assay readout conducted rapidly on high-content imagers.
The FirePlex miRNA assay combines particle-based multiplexing with single-step RT-PCR signal amplification using universal primers. Thus, these assays leverage PCR sensitivity while eliminating the need for separate reverse-transcription reactions and mitigating amplification biases introduced by target-specific qPCR. Assay sensitivity is ~1000 miRNA copies per sample, with a linear dynamic range of ~5 logs. Assays can be performed without the need for RNA purification, making FirePlex ideally suited for profiling in serum, plasma, exosomes, cell culture supernatants, urine, and directly from FFPE and tissue samples. The ability to multiplex targets in each well eliminates the need to split valuable samples into multiple reactions. Results are displayed and interpreted using the integrated, free-of-charge FirePlex Analysis Workbench.
Panels are available for biomarker discovery studies, as well as for specific research areas of interest. We also provide the option to design fully customizable miRNA panels for any sequence, from any species, at no additional cost.
Here we present the data from several studies investigating circulating miRNA profiles, as well as miRNA profiles obtained directly from FFPE tissues, using the FirePlex miRNA Assay Panels. Together, this novel combination of bioinformatics tools and multiplexed, high-sensitivity assays enables rapid discovery and verification of miRNA biomarker signatures from biofluid samples.
(151) QIAseq FastSelect: One-step, rapid removal of rRNA during whole transcriptome NGS library prep
Whole transcriptome NGS enables the characterization of both coding mRNAs and long noncoding RNAs (lncRNAs) from biological samples, including FFPE samples. However, before ultra-sensitive RNAseq can be performed on FFPE samples, cytoplasmic and mitochondrial rRNA should be removed to increase sensitivity and decrease the cost per sample.
Various methodologies exist to deplete rRNA, including hybridization/capture methods applied either as a pre-treatment or post-library construction, and methods which utilize enzymatic removal with target-specific probes. However, these methods are arduous, not ideally suited for fragmented samples, and may cause sample loss or distortion of transcriptomic profiles.
To remedy the complexity and the time necessary for rRNA removal in RNAseq applications, we have developed the QIAseq FastSelect RNA Removal kits which utilize a novel, one-step rRNA depletion technology. QIAseq FastSelect is compatible with both fresh and FFPE-fragmented RNA and stranded RNAseq libraries that utilize the dUTP or selective ligation method. Globin depletion kits are also available.
Lung cancer is the second most common cancer in both men and women. In 2018 alone, there were more than 200,000 new cases and over 150,000 deaths. One of the keys to developing new strategies for lung cancer treatment is understanding the molecular pathology responsible for growth, metastasis and treatment failure. To this end, whole transcriptome analysis, particularly by next-generation sequencing (NGS), becomes a crucial technique for identifying druggable pathways and new biomarkers for patient stratification.
Here we utilized QIAseq FastSelect and strand-specific RNAseq to analyze the whole transcriptome of matched normal and tumor lung cancer FFPE samples. The resulting differentially expressed RNA signatures are being utilized for pathway analysis and biomarker evaluation for sample stratification. QIAseq FastSelect rapidly eliminates rRNA to enable the discovery of gene signatures locked away in FFPE samples. It is faster, more efficient, and more cost effective than existing solutions.
(401) Data Portal: Experiences Managing and Delivering Data to End Users at the University of Illinois at Chicago
Managing project data files and ultimately delivering these files to end users can be a major challenge for core facilities. Data management tasks for a core facility can include transferring data from instruments or providers to storage servers/appliances, transferring to and from computational clusters for processing, organizing the data into projects, and sharing the files with the end users. We addressed this challenge by leveraging Arvados to act as the central data management system for the core facilities at UIC. Arvados, an existing platform to store and organize genomic and big data, provided a web-accessible platform for transferring data to and from storage repositories as well as organizing it into different projects. We then developed a web-based data portal application to give end users managed, more user-friendly access to the data in Arvados. The data portal application allows core facility personnel to share select files, with descriptors, from a project with the end users and to bundle these files into releases that can represent separate instrument runs or processing and analysis tasks within a given project. The data portal application also allows selected end users to be designated as “owners,” who can then add and remove other end users from a project. Furthermore, we developed an SFTP connector for the data portal that allows end users to download their data, in a self-service fashion, using a public key authentication scheme that does not require sharing passwords with the end user. Compiled and source code for the data portal application and associated SFTP connector will be available online at https://github.com/chlige/data-portal and https://github.com/chlige/data-portal-sftp, respectively.
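The public-key scheme mentioned above amounts to storing each end user's public keys server-side and matching the key presented at SFTP login, so no passwords are ever exchanged. The class and function names below are illustrative, not the portal's actual code.

```python
# Toy sketch of key-based SFTP authentication: the server keeps registered
# public keys per user and accepts a login only when the presented key matches.

class User:
    def __init__(self, name, public_keys):
        self.name = name
        self.public_keys = set(public_keys)  # registered authorized keys

def authenticate(users, username, presented_key):
    """Return True only if the user exists and presented a registered key."""
    user = users.get(username)
    return user is not None and presented_key in user.public_keys

users = {"alice": User("alice", ["ssh-ed25519 AAAA...alice"])}
print(authenticate(users, "alice", "ssh-ed25519 AAAA...alice"))  # True
print(authenticate(users, "alice", "ssh-rsa BBBB...other"))      # False
```

A production connector would verify a cryptographic signature made with the matching private key (as the SSH protocol does) rather than comparing key strings directly.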
(329) Quantitative Profiling of Post-Translational Modifications in E. coli
Protein post-translational modification (PTM) serves to regulate nearly every function of cellular biology including growth, development, and disease. Antibody-based enrichment coupled with liquid chromatography-tandem mass spectrometry (LC-MS/MS) has long been used to study PTM changes and associated cellular signaling in mammalian cells and tissues. Application of these methods to prokaryotic systems is critical to understanding the underlying signaling networks that cause phenotypic changes. In this study we compare changes in the abundance of PTMs in E. coli strain MG1655 grown in low nitrogen or low phosphate as compared to complete minimal media. Cells were grown in the appropriate media, harvested in 9M urea buffer, and subjected to western blotting with a panel of PTM-specific antibodies to determine, from band pattern/intensity changes, which PTMs were good candidates for mass spectrometry-based analysis. Samples were reduced, alkylated, digested with trypsin, and subjected to immuno-affinity enrichment with the antibodies that showed changes in the western blotting screen (Phospho-S/T/Y, Acetyl-K, Succinyl-K). Enriched peptides were run on an Orbitrap Q Exactive mass spectrometer in data-dependent mode. MS/MS data were matched to peptide sequences, score-filtered, and the relative amount of each peptide was quantified across samples. This analysis identified over 200 phosphorylated peptides, over 7,500 acetylated peptides, and 7,800 succinylated peptides across the samples. These PTM peptides were from proteins representing all aspects of cellular biology. Among these, thousands of peptides were identified that changed between complete media and low nitrogen or low phosphate conditions. Many proteins in cellular metabolic pathways changed between samples, providing potential insights into how E. coli cellular signaling is regulated in response to low nutrient conditions.
The method can also be applied to other prokaryotic systems to study aspects of bacterial cell signaling, disease biology, or microbiome research.
(123) Deconstructed PCR: A novel method for reducing PCR bias
Despite substantial effort invested into correcting amplification bias, PCR-based studies continue to generate data that distort underlying template ratios. A major source of PCR bias is primer-template interactions, which lead to PCR selection favoring certain templates. The motives of this study were to better understand the causes of selection bias in PCRs with complex templates and complex degenerate primer pools, and to develop novel strategies to decrease bias. An experimental system was developed to reduce PCR bias by separating linear copying of templates from exponential amplification of amplicons (Deconstructed PCR, or ‘DePCR’) and by limiting opportunities for primer-template interactions. Furthermore, the DePCR system provides a mechanism to quantify primer-template interactions (primer utilization profiles, or ‘PUPs’). DePCR was used to interrogate mock DNA communities and complex environmental samples, and all reactions were compared to standard PCR workflows. Experiments with annealing temperature gradients demonstrated a strong negative correlation between annealing temperature and the evenness of primer utilization in complex pools of degenerate primers. Critically, shifting primer utilization patterns mirrored shifts in observed microbial community structure. In experiments with mock DNA templates, DePCR demonstrates that although perfect-match primer-template interactions are abundant, the dominant type of primer-template interaction is the mismatch interaction, and mismatch amplification starts immediately during the first cycle of PCR. Furthermore, in DePCR reactions involving multiple mismatches, no strong effect on template profiles was observed. DePCR allows improved representation of templates, greater tolerance for mismatches between primers and templates, and greater success in amplifying complex templates with low complexity primer pools.
In addition, PUPs are empirical quantitative data derived from primer interactions with genomic DNA templates, and are a novel form of biological information that can be acquired only with DePCR. The DePCR method is simple to perform, requires only PCR mixes and cleanup steps, and is applicable to amplicon-based microbiome studies.
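The "evenness of primer utilization" quantified by a PUP can be sketched as follows: count which variant of a degenerate primer pool each read used, then score evenness as Shannon entropy normalized by its maximum (Pielou's index). This is an illustration of the evenness metric only, with made-up primer variant labels; it is not the DePCR analysis code.

```python
# Sketch of a primer utilization profile (PUP) evenness score:
# entropy of per-variant usage counts, normalized to [0, 1].

import math
from collections import Counter

def primer_utilization_evenness(primer_assignments):
    """primer_assignments: one primer-variant label per read."""
    counts = Counter(primer_assignments)
    total = sum(counts.values())
    props = [n / total for n in counts.values()]
    shannon = -sum(p * math.log(p) for p in props)       # Shannon entropy
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    return shannon / max_entropy                          # 1.0 = perfectly even

even = ["515F-a", "515F-b", "515F-c", "515F-d"] * 25       # uniform usage
skewed = ["515F-a"] * 97 + ["515F-b", "515F-c", "515F-d"]  # one variant dominates
print(round(primer_utilization_evenness(even), 3))    # 1.0
print(round(primer_utilization_evenness(skewed), 3))  # 0.121
```

Under this metric, the negative correlation reported in the abstract would appear as evenness scores falling toward zero as annealing temperature rises.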
(501) Evaluation of a New Gradient Elution Protein Sequencer
Edman protein sequencing is perhaps the most established method for determining the N-terminal sequence of proteins. The N-terminal sequence is often involved in cell signaling, protein folding and the physiological function of proteins. For this reason, the FDA requires the N-terminal sequence to be verified for each protein or peptide drug product. To show the utility of Edman sequencing, Brain Natriuretic Peptide (BNP), a forty-five-residue cyclic hormone peptide with diuretic and angiectatic effects, was sequenced on an upgraded Edman sequencer that utilizes technologically advanced LC pumps and a photodiode array detector for enhanced sensitivity. The Edman sequencing results from this new unit are compared to in-source decay sequencing from a MALDI-TOF instrument. Although sequencing by mass spectrometry is much faster, it is challenging to obtain N-terminal sequence information by mass spectrometry, whereas Edman sequencing provides conclusive N-terminal sequence information. The results show that despite the good correlation between the Edman sequencing data and the MALDI-TOF sequencing data, the MALDI-TOF does not provide the N-terminal sequence of the first six residues of BNP; those residues are identified only by Edman. This implies that there are practical limitations to sequencing by MS alone and that Edman sequencing remains the gold standard for obtaining N-terminal sequence information from peptides and proteins.
(169) Automated Co-extraction of High-quality DNA and RNA from Single Clinical FFPE Samples
Formalin-fixed, paraffin-embedded (FFPE) preservation is the preferred method to archive clinical tissue biopsy samples for histopathological diagnosis. As advances in clinical molecular pathology continue, reliable methods of extraction from FFPE tissue specimens become vital to ensure that patients receive timely and accurate reports. However, nucleic acid extraction from FFPE samples can be challenging and labor intensive, often resulting in degraded and fragmented DNA and RNA. Given the precious and limited nature of these clinical samples, the ability to differentially co-extract high-yield and high-quality DNA and RNA from a single sample input provides a tremendous advantage. Coupling the Covaris LE220R-plus Focused-ultrasonicator with liquid handling automation and the truXTRAC® FFPE kits for high-yield co-extraction, in this poster we demonstrate a standardized clinical FFPE extraction workflow providing downstream result confidence (higher yields and correspondingly higher DV200 scores), increased efficiency, decreased sample variability, and a reduction of manual “touch points” throughout the process. Furthermore, we show that the automated DNA and RNA workflows yield results similar to manual methods using our truXTRAC FFPE kits.
(311) Integrating database search and de novo sequencing for immunopeptidomics with DIA approach
Identification of tumor-specific antigens (neoantigens) is needed for the development of effective cancer immunotherapy, and a good source of such antigens is the pool of HLA-bound peptides presented exclusively by tumor cells. Mass spectrometry (MS) has evolved as the method of choice for the exploration of the human immunopeptidome (HLA class-I and class-II peptides). The key challenge is to deal with the low abundance of these peptides. Data-independent acquisition (DIA) technology promises to capture the low abundance data. However, the high number of fragment ions generated from multiple peptide precursors contained in the same selection window complicates the data analysis in a classical database search strategy. This problem is circumvented by the use of a peptide reference spectral library, which is generated beforehand by an extensive analysis of similar samples by DDA. An alternative is to create a pseudo-DDA dataset from the DIA data for a subsequent search similar to the classical DDA strategy. Both approaches have a shortcoming: peptides in the samples that are not present in a spectral library or sequence database cannot, in principle, be identified. To circumvent this limitation, de novo peptide sequencing is essential for immunopeptidomics. We recently reported that deep learning enables de novo sequencing with DIA data. In this work, we have developed a new integrative peptide identification method which integrates de novo sequencing more efficiently into protein sequence database searching or peptide spectral library searching. Evaluated on large real datasets, our method outperforms current identification methods.
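One simple way such an integration can be framed, per spectrum, is: accept the database hit when it scores competitively, otherwise fall back to the de novo call. The function, scoring scale and `margin` parameter below are hypothetical illustrations of this decision rule, not the method reported in the abstract.

```python
# Toy sketch of integrating database-search and de novo results per spectrum.
# Each hit is (peptide_sequence, confidence_score); higher score = better.

def integrate(db_hits, denovo_hits, margin=0.9):
    """Prefer a database hit unless the de novo hit scores clearly higher."""
    results = {}
    for spec in set(db_hits) | set(denovo_hits):
        db = db_hits.get(spec)
        dn = denovo_hits.get(spec)
        if db and (dn is None or db[1] >= margin * dn[1]):
            results[spec] = ("database", db[0])
        elif dn:
            results[spec] = ("denovo", dn[0])   # peptide absent from database
    return results

db = {"s1": ("SIINFEKL", 0.95), "s2": ("GILGFVFTL", 0.40)}
dn = {"s2": ("GILGFVFTI", 0.80), "s3": ("KVAELVHFL", 0.70)}
print(sorted(integrate(db, dn).items()))
```

Here spectrum s3 is rescued by de novo sequencing alone, which mirrors why de novo calls matter for neoantigens that, by definition, are absent from reference databases.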
(101) Accelerating high-throughput screening with FirePlex®-HT
In patients and animal models, molecular biomarkers are used as indicators of normal and pathogenic processes. In drug discovery and screening pipelines, molecular biomarkers are used to assess the mechanism of action, efficacy, and toxicity of lead compounds. To address the need for rapid and sensitive quantitation of protein biomarkers, we have developed the FirePlex®-HT Immunoassays, which enable multiplex quantitation of up to 10 protein analytes in 384-well plate format.
Utilizing patented FirePlex hydrogel particles and a three-region encoding design, our assays allow for true, in-well multiplexing, providing flexible and customizable quantification of analytes. FirePlex-HT immunoassays use high-performance recombinant matched antibody pairs that reduce cross-reactivity between individual analytes, provide up to 3-4 logs of dynamic range, and demonstrate 1-100 pg/ml sensitivity. Assays require only 12.5 µl of biofluid sample per well, and have been validated in serum, plasma, and cell culture supernatant. The two-step workflow and no-wash assay format limit hands-on time and are amenable to automation, thus making FirePlex-HT ideally suited for high-throughput screening studies. Assay readout is conducted on high-content imagers and data analysis is performed with the integrated, free-of-charge FirePlex® Analysis Workbench software, thus bypassing the need for dedicated instrumentation or expensive software licenses.
Here we introduce the simplified workflow of the FirePlex®-HT immunoassays, with data demonstrating their performance in quantifying key cytokines, in multiplex, in biological samples. In addition, we show comparison data generated using multiple different high-content imagers, which demonstrate comparable assay performance. Together, this novel combination of multiplexed, high-sensitivity assays and bioinformatics tools enables rapid quantitation of protein biomarker signatures in biofluid specimens.
(207) Whole organ vascular imaging with cellular resolution
Understanding spatially complex biological systems like entire organ vasculature requires high-resolution information on their tissue architecture, as provided by light sheet microscopy. The aim of our study was to determine whether image quality can be improved using optimized light sheet parameters and objective lenses with a planar focal plane, high NA and long working distance.
A mouse kidney was analyzed as follows. Staining was performed using anti-PECAM1 primary antibodies and Alexa dye–coupled secondary antibodies. Samples were optically cleared in a benzyl alcohol/benzyl benzoate solution (BABB). Cleared samples were stored in BABB and imaged with an UltraMicroscope II light sheet microscope, using either the zoom body setup or the infinity corrected optics setup together with objective lenses of the MI PLAN series (LaVision BioTec, a Miltenyi Biotec Company). 3D volumes were rendered using Voreen (Department of Computer Science at the University of Münster, Germany).
This study reveals improved image quality in data acquired with the objective lenses of the MI PLAN series compared to data acquired with the zoom body setup. The higher NA and the dedicated light sheet microscopy design of the objective lens both contribute to this improvement. Besides the improved detection optics, we show that optimized light sheet illumination leads to further improvements in the imaging results.
Infinity-corrected optics and optimized light sheet parameters improve image quality, thereby making light sheet microscopy a valuable tool for imaging entire organ vasculature.
(111) A metagenomic analysis of environmental and clinical samples using a secure hybrid cloud solution
The number and types of studies about the human microbiome, metagenomics and personalized medicine, and clinical genomics are increasing at an unprecedented rate, leading to computational challenges. For example, the analysis of patient/clinical samples requires methods capable of (i) accurately detecting pathogenic organisms, (ii) running with high speed to allow short response-time and diagnosis, and (iii) scaling to ever growing databases of reference genomes. While cloud-computing has the potential to offer low-cost solutions to these needs, serious concerns regarding the protection of genomic data exist due to the lack of control and security in remote genomic databases.
We present a novel metagenomic analysis system called "Virgile" that is capable of performing privacy-preserving queries on databases hosted on outsourced servers (e.g., public or cloud-based). This method takes as input the sequenced data produced by any modern sequencing instrument (e.g., Illumina, PacBio, Oxford Nanopore) and outputs the microbial profile using a database of whole genome sequences (e.g., the RefSeq database from NCBI). The algorithm aims to estimate, without bias, the abundance of the microorganisms present using a genome-centric approach.
Results: Using an extensive set of 65 simulated datasets, negative and positive controls, real clinical samples, and mock communities, we show that Virgile identifies and estimates the abundance of organisms present in environmental or clinical samples with high accuracy compared to state-of-the-art methods, including MetaPhlAn2 and KrakenUniq. Virgile is also fast and can be run on a standard laptop with 8 GB of RAM.
Virgile is a novel privacy-preserving abundance estimation system that can efficiently and rapidly determine the abundance and taxonomic identity of organisms present in a metagenomic sample, including samples from medical environments. To the best of our knowledge, Virgile is the only metagenome analysis system that leverages cloud computing in a secure manner.
(133) High throughput, reduction transcriptomics using Ultraplex RNA sequencing
Elucidating molecular mechanisms and understanding genetic heterogeneity in cancer biology can be achieved through high throughput gene expression analysis and pathway assignment. Recent advances in RNAseq methodologies have enabled accurate gene expression profiling from single cells, but analyzing hundreds or thousands of cells is an arduous exercise in library prep.
Here we describe a high throughput 3’ RNAseq library prep methodology termed Ultraplex (UPX). With QIAseq UPX, reverse transcription is performed directly on lysed cells, simultaneously assigning a unique ‘cell index’ to all cDNA synthesized from each cell. All subsequent transcriptome or targeted panel library steps occur in a single pooled sample, with up to 384 samples assigned per sequencing index. Each 384-plex library is assigned a standard sample index, such that up to 384 x 48 transcriptomes or 384 x 384 targeted panel libraries can be sequenced together. With this methodology, thousands of transcriptomes or targeted RNA panels can be prepared and sequenced together.
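The dual-index arithmetic above is simple enough to sanity-check directly. The sketch below is illustrative only: the function name is hypothetical, and the numbers are taken from the abstract.

```python
# Hypothetical sketch of UPX-style combinatorial indexing capacity.
# Each library carries one of `cell_indexes` cell barcodes and one of
# `sample_indexes` standard sample indexes; every (cell, sample) pair
# is unique, so the capacities multiply.

def multiplex_capacity(cell_indexes: int, sample_indexes: int) -> int:
    """Total number of uniquely indexed libraries per sequencing run."""
    return cell_indexes * sample_indexes

# Transcriptome runs: 384 cell indexes x 48 sample indexes
print(multiplex_capacity(384, 48))    # 18432 transcriptomes
# Targeted panel runs: 384 cell indexes x 384 sample indexes
print(multiplex_capacity(384, 384))   # 147456 panel libraries
```

Because the two index tiers are read independently during demultiplexing, capacity scales as the product of the two index counts, which is what lets "thousands" of libraries share one run.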
Here, we analyze the heterogeneity of an ostensibly uniform population of cancer cells, identifying unique signatures of genes that drive cellular clustering. With QIAseq UPX, high throughput transcriptomic analysis enables the identification and characterization of gene signatures that divide cells into discrete sub-populations based on vectorial gene expression components.
(129) Evaluation of Cell Fixations for Downstream RNA Isolation
Increasingly, Flow Cytometry Shared Resource Facilities are asked to sort cells for RNA isolation, either in bulk or at the single cell level. In many cases, the ability to fix the cells prior to sorting is desirable. With so many fixation methods in the literature, the Flow Cytometry Research Group (FCRG) decided to perform a systematic evaluation of the reported fixation methods to assess how the different fixatives affect the quality of RNA isolated from sorted cells. Based on the literature, five different common chemical fixatives were analyzed using the HL-60 cell line. The assessment included paraformaldehyde fixation, alcohol fixation (methanol and ethanol), formaldehyde fixation, zinc fixation, and two commercial fixatives, BD Cytoperm/Cytofix (cat #554715) and eBioscience Intracellular Fixation and Permeabilization Buffer (cat #88-8824-00). Each method was tested at two separate shared facilities, and for each method different variations of the fixation procedure (i.e., time, temperature, dilution) were also tested. The protocol involved fixing the cells first, then sorting them into lysis buffer (RLT) and measuring the amount, quality, and purity of the RNA. Four samples were used for each fixation condition: unfixed not sorted, unfixed sorted, fixed not sorted, and fixed sorted. A Nanodrop was used for purity, Ribogreen for yield, and a Bioanalyzer for quality. Results will be presented that will aid researchers and shared facilities in determining optimal fixation processes for experimental design involving cell sorting.
(179) Hierarchical model for integrative analysis of mRNA-seq and miRNA-seq data
The newly discovered importance of miRNAs in homeostasis and disease has made it essential to incorporate miRNAs into gene regulatory networks. These findings call for investigations aimed at identifying disease-associated miRNA-mRNA pairs. A hierarchical model offers the opportunity to associate molecules measured in multiple omic studies across several levels to uncover novel relationships pertaining to disease status. The hierarchical model can be specified with two levels: (1) a mechanistic submodel relating mRNA to miRNA markers, and (2) a clinical submodel relating disease status to mRNA and miRNA while accounting for the mechanistic relationships in the first level. To determine which mRNAs enter the hierarchical model, we first use random forests to identify mRNAs associated with disease status. The mechanistic submodel fits a penalized regression model of each of these mRNAs on all miRNAs. The clinical submodel uses a penalized logistic regression model to relate disease status to the linear predictors from the mechanistic submodel, as well as to the miRNAs and mRNAs not considered in the mechanistic submodel (i.e., those that did not show evidence of association). The performance of the hierarchical model is evaluated using TCGA mRNA-seq and miRNA-seq data generated from tumor and adjacent normal liver tissues acquired from 49 HCC patients. We found 25 miRNAs associated with 9 mRNAs, forming 40 unique miRNA-mRNA associations. Of these, three associations have been reported in the literature as experimentally verified. Network and pathway analysis further reflects the role of these molecules in the pathogenesis of HCC at a fundamental level, as relevant to changes involving key oncogenes and tumor suppressors. Areas under the receiver operating characteristic (ROC) curves show that the pairs selected by the hierarchical model perform comparably to the top 30 mRNAs selected by random forests.
(115) Analysis of Complex Microbial Samples Using High Definition Mapping
Complex microbial communities play a critical role in a wide variety of biological systems in the environment and throughout the human body. Characterization of these communities has historically been limited to one or a small number of known genetic markers, such as 16S rRNA genes. While the advent of inexpensive shotgun sequencing has enabled a more accurate measure of biodiversity than marker typing, short read lengths prevent accurate analysis of related strains within a mixture, as well as consistent characterization of large-scale structural variation that can distinguish highly related strains and significantly impact pathogenicity.
To address these issues, we have applied the Nabsys HD-Mapping™ platform to strain-level identification of microbes in the context of complex mixtures. HD-Mapping employs electronic detection of tagged single DNA molecules, hundreds of kilobases in length, at a resolution superior to existing mapping approaches. The combination of long read lengths and high information density means that individual HD-Mapping reads tend to be much more specific to the genomes from which they derive than do NGS reads. As a result, differences between closely related strains of the same species become clear with minimal bioinformatics analysis.
Here we describe strain-level characterization of the ZymoBIOMICS Microbial Community Standard using Nabsys HD-Mapping. DNA was extracted using a standard solution phase, kit-based isolation procedure. Single-molecule reads derived from the mixture were mapped to the NCBI database of all ~10,500 completed bacterial references, including ~1,700 references for species present in the mixture. Through analysis of unique read mapping characteristics, the correct reference was identified for each of the 8 bacterial strains present in the mixture, and relative strain abundances were determined.
(141) Liquid biopsy quality control – the importance of plasma quality, sample preparation, and library input for next generation sequencing analysis
Liquid biopsy is emerging as a non-invasive companion to traditional solid tumor biopsies. As next-generation sequencing (NGS) of circulating cell-free nucleic acids (cfNA = cfDNA and cfRNA) becomes common, it’s important to understand the impact of sample preparation on quality, specificity, and sensitivity of liquid biopsy tests. Plasma samples are often limited and may have undesirable characteristics such as lipemia or hemolysis that contribute unwanted genomic DNA (gDNA) to the sample. Low cfDNA concentration can also limit the amount available for NGS library prep. In this study, we explore the effects of suboptimal plasma and low library input on liquid biopsy NGS, and discuss various techniques for in-process quality control of cfNA samples isolated from plasma.
(109) A Fluorescence-based Method for Accurate Quantification of NGS libraries in Minutes
Accurate quantification of NGS libraries is critical for a successful sequencing run. Currently used methods of quantification are time-consuming, costly, and can be highly variable. We have developed NuQuant, a novel method to accurately quantify NGS libraries with a simple fluorescence measurement. NuQuant uses the common red fluorescence excitation/emission filter set (650/670 nm), making it compatible with a wide range of benchtop fluorometers and fluorescent plate readers. We have developed a custom library quantification application for the Qubit fluorometer that directly provides the molar concentration of a library. Utilizing this application, we have demonstrated that NuQuant has excellent reproducibility across users from multiple sites. We now demonstrate the compatibility of NuQuant with standard fluorescence plate readers, enabling quantification of libraries in a high-throughput fashion. We have tested NuQuant on a variety of commonly used plate readers, such as the Tecan Infinite 200 Pro and the Promega GloMax. Libraries in a 96-well format can be measured in a matter of minutes, without the need for sample dilution. Molar concentration of libraries was easily determined by utilizing a standard curve. We tested libraries with inputs from 10 ng to 500 ng and insert sizes from 200 bp to 500 bp, and found good agreement between NuQuant values and a qPCR-based quantification method. Most importantly, we observe good correlation between NuQuant library concentration and the total number of sequenced reads. In conclusion, scientists with access to commonly used fluorescent plate readers can now use NuQuant to achieve rapid and cost-effective quantification of NGS libraries, generating highly uniform sequence reads in multiplex runs.
(171) Automating Genomic DNA Extraction from Whole Blood and Serum with GenFind V3 on the Biomek i7 Hybrid Liquid Handling System
The isolation of high quality genomic DNA (gDNA) is the precursor to many molecular biology assays. The new GenFind V3 Blood and Serum DNA Isolation Kit from Beckman Coulter uses the patented SPRI (Solid Phase Reverse Immobilization) paramagnetic bead technology to isolate genomic DNA from fresh or frozen whole blood and serum containing citrate, EDTA, or heparin anticoagulants, as well as from cultured cells. GenFind V3 uses an improved cell lysis buffer and Proteinase K treatment to rupture cell membranes and digest proteins, giving consistently high quality gDNA with improved yields and purities. The GenFind V3 kit supports up to 400 µL sample input volumes from blood or serum, or two million cultured cells, and can be performed in either a 96-well plate or tube-based format. Additionally, the need for large volume whole blood extractions continues to grow, along with the need to easily automate the process to alleviate user errors. To address this need, a protocol that supports >400 µL and up to 2 mL of whole blood is available and can be performed in an automated fashion using a 24-well plate. Volumes greater than 2 mL can be processed in a tube-based format.
Here, we demonstrate a walk-away automated solution for the GenFind V3 kit that purifies gDNA from up to 400 µL of whole blood using the Beckman Coulter Biomek i7 Hybrid liquid handling system. The Biomek i-Series method is a high yielding and robust nucleic acid purification process and can process up to 96 samples in a 96-well format in less than 3.5 hours with minimal user interaction and no off-deck centrifugation or vacuum filtration. The method is also compatible with the Biomek i-Series NGS workstation, facilitating the installation process for existing Biomek i-Series users.
(157) Sample quality control of cell-free DNA
Quality control of nucleic acid starting material is essential to ensure the success of downstream experiments. In particular, Next Generation Sequencing (NGS) has developed into a powerful tool in almost all genetic research and diagnostic areas. With the establishment of low input library protocols for NGS workflows, sequencing of cell-free DNA (cfDNA) has become possible. Since the downstream applications are often time-consuming and expensive, tight QC steps are required to ensure that samples are “fit for purpose”. These QC steps can be performed with automated electrophoresis systems.
Different cell-free DNA samples were evaluated for sample quality with an Agilent 4200 TapeStation system and the Agilent Cell-free DNA ScreenTape assay.
Depending on the preanalytical sample treatment or extraction method, the quality of cfDNA can vary. The results include a score that qualifies cfDNA samples according to their level of contamination with high molecular weight material. This allows a threshold to be defined for objective sample qualification prior to library preparation.
Moreover, accurate quantification of cfDNA samples is essential to determine suitable input amounts for cfDNA library preparation prior to sequencing.
Quality control of cfDNA is essential to ensure the success of downstream experiments. Automated electrophoresis systems standardize sample quality control and enable objective sample integrity assessment as well as the establishment of quality thresholds.
(121) Comprehensive structural analysis of cancer genomes by genome mapping
Tumors often comprise heterogeneous populations of cells, with certain cancer-driving mutations present at low allele fractions in early stages of cancer development. Effective detection of such variants is critical for diagnosis and targeted treatment. However, typical short sequence reads are limited in their ability to span repetitive regions of the genome and to facilitate structural variant (SV) analysis. Based on specific labeling and mapping of ultra-high molecular weight (UHMW) DNA, we developed a single-molecule platform that has the potential to detect disease-relevant SVs and give a high-resolution view of tumor heterogeneity.
We developed a DNA isolation and sample preparation workflow that preserves DNA integrity and conserves structural variation information from blood, cells, and preserved tissue. Single molecules are labeled at specific motifs and analyzed in massively parallel nanochannels. The single-molecule maps are used in a bioinformatics pipeline that effectively detects structural variants at low allele fractions, including single-molecule based SV calling and fractional copy number analysis. Preliminary analyses using simulated and well-characterized cancer samples showed high sensitivity for variants of different types at allele fractions as low as 5%. The candidate variants are then annotated and further prioritized based on control data and publicly available annotations. The data are imported into a graphical user interface tool that includes new visualization tools (such as Circos diagrams) for real-time interactive visualization and curation.
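A back-of-the-envelope calculation shows why single-molecule depth can support calling at the 5% allele fractions mentioned above: the number of variant-supporting molecules at a locus is approximately binomial in the effective molecule coverage. The sketch below is illustrative only; the coverage and support threshold are assumed numbers, not Bionano's actual caller parameters.

```python
# Binomial sketch of variant support at low allele fractions.
from math import comb

def p_at_least(k: int, n: int, f: float) -> float:
    """P(X >= k) for X ~ Binomial(n, f): probability that at least k of
    n sampled molecules carry a variant present at allele fraction f."""
    return sum(comb(n, i) * f**i * (1 - f)**(n - i) for i in range(k, n + 1))

# With 300x effective molecule coverage and a 5% allele fraction, the
# probability of observing at least 5 supporting molecules:
print(round(p_at_least(5, 300, 0.05), 3))  # 0.999
```

At 300x the expected support is 15 molecules, so even a conservative 5-molecule threshold almost never misses the variant; the same calculation at low coverage makes clear why short-read pipelines struggle at these fractions.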
Bionano offers sample preparation, DNA imaging, and genomic data analysis technologies combined into one streamlined workflow that enables high-throughput genome mapping on the Bionano Saphyr system. Together, these components allow for efficient analysis of any genome of interest.
(203) Molecular-targeted optical coherence microscopy as an optical biopsy tool for early detection of cancer
Optical coherence tomography (OCT) is a powerful tool for assessing tissue architectural morphology. It enables 3D imaging with resolution comparable to traditional histopathology (a few microns), but it can be performed in vivo and in real-time without tissue removal and specimen processing. Optical coherence microscopy (OCM) combines coherence-gated detection with confocal microscopy in order to achieve high transverse resolutions, thus enabling 3D visualization of cellular features. However, current OCT/OCM imaging technologies have not been able to leverage the recent advances in molecular-targeted contrast agents that are revolutionizing biomedicine. New techniques that enable molecular contrast for 3D OCT/OCM have been developed and validated in this research. Both the structural and pathological information of tissue has been imaged with our OCT/OCM in 3D, in vivo, and in real time with micron-level spatial resolution at multiple scales. This work will lay the foundation for a wide range of fundamental research, small animal imaging, and future clinical applications in humans. This work will also serve as a starting point for OCT/OCM studies of other pathologies associated with abnormal protein expression levels, such as neurodegenerative and cardiovascular diseases.
(173) SMART-Seq® Stranded Kit performance with ovarian cancer cells
Single-cell RNA sequencing (scRNA-seq) approaches are increasingly being used to characterize the abundance and functional state of tumor-associated cell types and have provided unprecedented detail on cellular heterogeneity. Extracting meaningful biological information from the small amount of RNA in single cells requires a library preparation method with exceptional sensitivity and reproducibility. The SMART-Seq v4 Ultra® Low Input RNA Kit for Sequencing (SMART-Seq v4) is an extremely sensitive scRNA-seq library preparation method, in part because it retrieves information from full-length mRNA and not just the 3’ end. However, this method can only capture polyadenylated mRNA. To address this, we have modified our SMART® RNA-seq technology to create the SMART-Seq Stranded Kit, a single-cell RNA-seq library preparation method that relies on random priming instead of oligo-dT priming. The SMART-Seq Stranded Kit captures any RNA regardless of polyadenylation status and preserves strand-of-origin information, making it better suited for distinguishing overlapping genes and for comprehensive annotation and quantification of lncRNAs. To show the applicability of the SMART-Seq Stranded Kit for characterizing tumor heterogeneity, we analyzed single cells dissociated from a solid tumor in stage IV ovarian cancer (serous carcinoma). CD45+ leukocytes and EpCAM+ tumor cells were sorted into 96-well plates. After library preparation, sequencing, and analysis, we detected an average of 4,717 genes in the CD45+ cells and 8,039 genes in the EpCAM+ tumor cells. This analysis enabled identification of well-accepted markers of tumor-infiltrating lymphocytes and of ovarian carcinoma.
(149) New Developments in Nucleic Acid Sample Quality Control
Quality control (QC) of RNA and DNA samples is key for the success of any downstream experiment. In particular, Next Generation Sequencing (NGS) has developed into a powerful tool in almost all genetic research and diagnostic areas. Since the downstream applications are often time-consuming and expensive, tight QC steps are required to avoid a “garbage in, garbage out” situation.
The ideal QC solution is easy to use and economical, and provides fast and unambiguous results even for samples with very low concentrations. Nucleic acid quality assessment can be standardized using automated electrophoresis systems to ensure that samples are “fit for purpose”.
This poster presents the latest developments in nucleic acid sample QC and gives application examples – from RNA to cell-free DNA (cfDNA) – evaluated with an Agilent 4150 TapeStation system.
Cell-free DNA (cfDNA) is gaining more and more importance in the context of cancer research. Accurate quantification of cfDNA samples is essential to determine suitable input amounts for cfDNA library preparation prior to sequencing. Depending on the preanalytical sample treatment or extraction method, cfDNA samples may contain high molecular weight DNA fragments, e.g., genomic DNA contamination. High molecular weight material can negatively influence library preparation and subsequently result in lower sequencing depth. For the objective quality evaluation of gDNA and RNA, the quality scores DNA integrity number (DIN) for gDNA and RNA integrity number equivalent (RINe) for RNA can be assessed, providing numerical values from 1 (degraded) to 10 (intact) for the classification of samples.
(137) Immuno-biotechnology and bioinformatics in Community Colleges
Immuno-biotechnology is one of the fastest growing areas in the field of biotechnology. Digital World Biology’s Biotech-Careers.org database of biotechnology employers (>6800) has nearly 700 organizations that are involved with immunology in some way. With the advent of next generation DNA sequencing, and other technologies, immuno-biotechnology has significantly increased the use of computing technologies to decipher the meaning of large datasets and predict interactions between immune receptors (antibodies / T-Cell receptors / MHC) and their targets.
The use of new technologies like immune-profiling - where large numbers of immune receptors are sequenced en masse - and targeted cancer therapies - where researchers create, engineer, and grow modified T cells to attack tumors - are leading to job growth and demands for new skills and knowledge in biomanufacturing, quality systems, immuno-bioinformatics, and cancer biology. In response to these new demands, Shoreline Community College (Shoreline, WA) has begun developing an immuno-biotechnology certificate. Part of this certificate includes a five-week course (30 hours hands-on computer lab) on immuno-bioinformatics.
The immuno-bioinformatics course includes exercises in immune profiling, vaccine development, and operating bioinformatics programs using a command line interface. In immune profiling, students explore T-cell receptor datasets from early stage breast cancer samples using Adaptive Biotechnologies’ (Seattle, WA) immunoSEQ Analyzer public server to learn how T-cells differ between normal tissue, blood, and tumors. Next, they use the IEDB (Immune Epitope Database) in conjunction with Molecule World™ (Digital World Biology®) to predict antigens from sequences and verify the results, learning the differences between the continuous and discontinuous epitopes that are recognized by T-cell receptors and antibodies. Finally, to get hands-on experience with bioinformatics programs, students use cloud computing (CyVerse) and IgBLAST (NCBI) to explore data from an immune profiling experiment.
(403) Exploring Core Facility Employee Job Satisfaction: Influence of Institutional Support and Career Development.
This exploratory study evaluates the influence of institutional support on the job satisfaction of Core Facility employees, using geographic, career experience, and demographic data. We distributed a web-based survey to Core Facility Directors and Managers, leveraging ABRF membership as the primary surveyed population. Three hundred respondents participated in the survey, of whom 256 completed it, qualified, and were used in our analysis. The survey incorporated questions on career experience, job training, mentorship, institutional support (financial and non-financial), and current job satisfaction. Our findings are presented.
(175) Elucidating acrylamide adverse effects on zebrafish using a multi-omics approach
Acrylamide (AA) neurotoxicity has drawn much attention since the occupational poisoning of workers injecting an AA-based grouting agent during tunnel construction in Sweden and Norway in the 1990s. A chemical model for acute AA neurotoxicity has been developed in adult zebrafish. In order to elucidate the precise mechanism by which AA elicits its neurotoxicity, we performed a multi-omic analysis (metabolomics, proteomics, transcriptomics) to describe the molecular effects of exposure to moderate levels of AA in the zebrafish brain. We observed a cascade of molecular adverse events linked to the ability of AA to form adducts with thiol groups. These events include the depletion of glutathione, the inactivation of key components of the thioredoxin system, and the dysregulation of microtubule-related genes. As these effects are interconnected, we propose that they represent a perfect storm, blocking the normal functioning of nervous cells and explaining most, if not all, AA neurotoxic effects. Our results also suggest that neurotoxicity should be regarded as the major damage after AA exposure, and that it should be the main target for new efficient countermeasures against this toxidrome.
(205) Validation of antibody panels for high-plex immunohistochemistry applications
Introduction: Characterization of the spatially-resolved expression of key proteins within tissues enables a deep understanding of biological systems. There has been significant progress in developing technologies with expanded capabilities to analyze higher numbers of proteins; however, the validation of these technologies and their associated affinity reagents remains a significant barrier to adoption. We have developed a validation pipeline that ensures optimal sensitivity and specificity for high-plex antibody panels for the analysis of FFPE sections using the NanoString GeoMx® Digital Spatial Profiling (DSP) platform. The DSP is designed to simultaneously analyze up to 96 proteins by detecting oligos conjugated to antibodies that can be released via a UV-cleavable linker.
Methods: Oligo-conjugated antibodies were tested for specificity by immunohistochemistry on FFPE human tissues. The sensitivity and dynamic range were tested using FFPE cell pellets containing target-specific positive and negative cells at different ratios. An interaction screen was performed to evaluate potential deleterious effects of multiplexing antibodies, and a human tissue microarray (TMA) containing normal and cancer tissues was employed to assess assay robustness. The reproducibility of the panel on DSP was tested on serial FFPE tumor specimens.
Results: Immunohistochemical analysis of unconjugated and oligo-conjugated antibodies displayed indistinguishable staining patterns on control tissues and cell lines. Mixed cell pellet assays revealed strong correlations between observed counts and positive cell numbers. Antibody interaction studies showed similar count values for antibodies alone or in combination, and TMA analysis demonstrated expected patterns of expression across tissue types. Counts for all markers across 24 registered regions of interest on serial FFPE sections were highly correlated. Spatial analysis of lymphoid tissue revealed high levels of biological heterogeneity across multiple germinal centers.
Conclusion: These results demonstrate the validation and application of high-plex protein panels to accurately interrogate the immune biology within FFPE tissue using the NanoString DSP platform.
(305) Evaluation of different sample preparation workflows for reproducible, quantitative, and in-depth analysis of urine proteomics
The growing field of urinary proteomics has provided a promising opportunity to identify biomarkers that can be used for the diagnosis of a number of diseases. Urine offers great potential for clinical studies because it is abundant and readily collected in a non-invasive manner. With the advent of improved sample processing and separation methods, along with newer technologies to analyze tandem mass spectrometry data, we now have the tools to identify a constellation of markers from urine samples. Although attempts to study the urine proteome are not new, there is a need for better sample processing workflows to generate fast, reproducible, and more in-depth proteomics data. Some of the current standard workflows in this field require multiple steps (e.g. prefractionation) or generate a high percentage of peptides with missed tryptic cleavages. Here, we have evaluated the performance of several sample preparation workflows for urine samples: MStern blotting, PreOmics iST, suspension trapping (S-Trap), and in-solution digestion. MStern blotting is a sample preparation method based on a polyvinylidene fluoride (PVDF) membrane. PreOmics iST and S-Trap are more recently developed methods that have mostly been applied to cellular proteomics studies. The in-solution digestion method used standard 8 M urea as the denaturing buffer. Data Dependent Acquisition (DDA) mode on a QE-HF was used for single-shot label-free data collection. The raw data were processed in MaxQuant, followed by downstream analysis in Perseus. Our results reveal a high degree of reproducibility within each workflow. PreOmics iST displayed the best digestion efficiency with the lowest percentage of missed-cleavage peptides. The S-Trap workflow outperformed the other methods with the greatest number of peptide and protein identifications. Using the S-Trap workflow, with less than 0.5 mL of urine as starting material, we identified ~1,500 protein groups and ~17,700 peptides from DDA analysis.
To the best of our knowledge, this is the highest number of peptides and proteins reported for a single injection of non-depleted urine samples.
(201) Imaging in Developmental Biology: An Essential Tool with No Instructions
Imaging is a fundamental tool in biomedical disciplines. A critical aspect in the acquisition and evaluation of imaging data is a detailed and accurate description of the technology used in the literature. In our work at a major imaging core, we are often met with the situation in which the experiments our clients want to recreate are poorly described, making analysis and replication of the published literature difficult.
In order to evaluate the extent and severity of this problem, we have analyzed Developmental Biology publications. Research articles in three leading journals were analyzed for the importance of imaging (fraction of figure panels that contained original images) and compared with the detail given to the experimental specifics of image acquisition (fraction of the materials and methods section devoted to image acquisition and analysis). Finally, the quality of the imaging information given was evaluated for its completeness with a simple pass/fail grade.
Results indicate that imaging is an essential tool in Developmental Biology, with over 80% of the figures being images, largely microscopy. However, less than 5% of the text in the methods section of the analyzed articles is devoted to experimental details of image acquisition and analysis (on average 57 words). Furthermore, the overall quality of the information provided is dismal, with a large majority of publications obtaining a failing grade (83%), and many examples containing no usable information (10%).
The lack of information on the imaging methodologies used in published articles makes it impossible to accurately replicate the reported data. This is a serious problem that requires immediate attention. Imaging shared resources have a key role to play in ensuring accurate reporting of critical imaging parameters. This role includes providing off-the-shelf descriptions for the methods sections of manuscripts and educating clients on the importance of reporting that information.
(323) SysMet: A Tool for Integrative Systems Metabolomics
Metabolomics plays an indispensable role in the growing use of systems biology approaches to identify biomarkers for complex diseases such as cancer. Liquid chromatography coupled to mass spectrometry (LC-MS) and gas chromatography coupled to mass spectrometry (GC-MS) have been used extensively for high-throughput comparison of the levels of thousands of metabolites among biological samples. However, the potential value of many disease-associated analytes discovered on these platforms has been inadequately explored in systems biology research due to a lack of computational tools. Partly because of these limitations, poor reproducibility of previously identified metabolite biomarker candidates has been observed, especially when they are evaluated through independent platforms and validation sets. Our goal is to provide metabolomics core facilities and research scientists with bioinformatics platforms and expertise that enable them to search for disease-associated metabolites at the systems level through integrative systems metabolomics. To this end, we developed a new browser-friendly, cloud-based tool (SysMet) to help uncover relationships between diseases and metabolites by investigating rewired and conserved interactions among metabolites and through integrative analysis of multi-omic data. Developed with a modular design and a user-friendly graphical user interface (GUI), SysMet allows users to: (1) import preprocessed metabolomic data for differential analysis of metabolite profiles using a network-based method; (2) import other preprocessed omic data for selection of disease-associated metabolites based on network-based integrative analysis; and (3) visually evaluate the outcome of network-based differential analysis and multi-omic data integration through high-quality figures. We believe SysMet will improve the ability of researchers to discover disease-associated metabolites by enhancing the role of metabolomics in systems biology research.
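The abstract does not specify SysMet's algorithm, but one common form of network-based differential analysis is differential correlation: flag metabolite pairs whose co-abundance correlation changes sharply between conditions ("rewired" edges) versus pairs that stay similar ("conserved"). A minimal, illustrative sketch under that assumption (all names hypothetical):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length abundance vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rewired_edges(case, control, delta=1.5):
    """Metabolite pairs whose correlation shifts by >= delta between
    conditions -- a crude proxy for network 'rewiring'. Inputs map
    metabolite name -> abundance vector across samples."""
    edges = []
    for m1, m2 in combinations(case, 2):
        shift = abs(pearson(case[m1], case[m2]) - pearson(control[m1], control[m2]))
        if shift >= delta:
            edges.append((m1, m2))
    return edges

# Toy data: A and B are perfectly correlated in cases, anti-correlated
# in controls, so the A-B edge is rewired; C is stable in both.
case = {"A": [1, 2, 3, 4], "B": [1, 2, 3, 4], "C": [2, 1, 4, 3]}
control = {"A": [1, 2, 3, 4], "B": [4, 3, 2, 1], "C": [2, 1, 4, 3]}
```

A real tool would add significance testing and multiple-testing correction; this only shows the shape of the edge-level comparison.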
(153) Raw Signal Analysis of Low Coverage Oxford Nanopore Reads Produces Highly Accurate Clinically Relevant Genotypes
In a clinical context, DNA sequencing is often used to identify variants from a reference genome. For many applications, a clinician will have a list of variants that they suspect may be present in a patient and only want to know the genotype at those positions: the goal of the analysis is confident identification from a known panel rather than de novo variant discovery. The intermediate data produced by the Oxford Nanopore Technologies sequencing platforms carry richer information about possible variants and alternative calls than the final 'fastq' base calls do. We reasoned that genotyping would be more accurate and powerful directly from the intermediate data, rather than first base-calling and then comparing to a reference. We applied a novel algorithm to NA12878 reads, looking for artificial SNPs that we placed in the reference genome, and show that single-read SNP detection accuracy can exceed 99%, with confidence increasing rapidly with coverage. Applying this novel algorithm to look for variants from the COSMIC cancer mutation database in reads from an SKBR3 sample run on an Oxford Nanopore PromethION, we show that clinically relevant SNPs can be detected with high confidence from only a few reads. This approach paves the way for using Nanopore sequencing for clinical SNP calling.
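The abstract's core idea, scoring a candidate variant against the raw signal rather than against base calls, can be illustrated with a toy likelihood comparison. This is not the authors' algorithm; it is a minimal sketch assuming a Gaussian model of expected current level per position, with illustrative names throughout:

```python
from math import log, pi

def gauss_loglik(signal, levels, sd=1.0):
    """Log-likelihood of observed current samples under a sequence of
    expected per-position signal levels (one Gaussian per position)."""
    return sum(
        -0.5 * log(2 * pi * sd ** 2) - (s - mu) ** 2 / (2 * sd ** 2)
        for s, mu in zip(signal, levels)
    )

def genotype_read(signal, ref_levels, alt_levels):
    """Fit a read's raw signal against the expected level sequences for
    the reference and alternate alleles; return the better-fitting
    allele plus the log-likelihood ratio as a confidence score."""
    ll_ref = gauss_loglik(signal, ref_levels)
    ll_alt = gauss_loglik(signal, alt_levels)
    if ll_ref >= ll_alt:
        return "ref", ll_ref - ll_alt
    return "alt", ll_alt - ll_ref

# Toy read: the signal matches the reference level at the variant site.
call, score = genotype_read(
    signal=[1.0, 5.0, 2.0],
    ref_levels=[1.0, 5.0, 2.0],
    alt_levels=[1.0, 3.0, 2.0],
)
```

Per-read scores like this can then be summed across reads, which is consistent with the abstract's observation that confidence grows rapidly with coverage.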
(159) Simple and scalable genome analysis with Transposase Enzyme Linked Long-read Sequencing (TELL-Seq): from haplotype phasing to de novo assembly in a tube
Haplotype phasing of genomes and de novo assembly of novel genomes are major hurdles for short-read next-generation sequencing platforms. Long sequence reads are essential to resolve the significant sequence homology in some regions of the genome. Several recent breakthroughs in NGS library technology have demonstrated that barcoded linked-read sequencing methods can effectively generate long-read-like information, and these have been applied successfully to human genome phasing, structural variation detection, and de novo assembly of other genomes. However, these methods either require expensive capital expenditure on a specialized instrument or are not yet scalable for commercial adoption due to sophisticated barcode generation. We have developed a simple and scalable NGS library technology, Transposase Enzyme Linked Long-read Sequencing (TELL-SeqTM), that uses short NGS reads for genome-scale haplotype phasing and/or de novo genome assembly. Several million uniquely barcoded beads generate linked reads, spanning fragments as long as a hundred kilobases, through transposase-mediated strand transfer reactions in a PCR tube in a standard NGS laboratory setting. The TELL-Seq library procedure takes approximately 3 hours, and multiple samples can easily be processed in parallel in a 96-well format when needed. The library protocol can be adjusted for genomes of various sizes, from bacteria to human. Using TELL-Seq, we generated excellent haplotype phasing results, comparable to existing methods, on an NA12878 human sample, and successfully performed de novo assembly of E. coli and Arabidopsis thaliana genomes. More applications and analysis solutions are being developed for the TELL-Seq library technology.
(105) Genotyping by sequencing of Canis familiaris using iGenomX RipTide DNA library preparation
Dogs have been living with humans for approximately 15,000 years. Selective breeding has created a multitude of dog breeds with distinct characteristics. Great interest exists in understanding how selection has affected the modern dog genome and which variants are linked to specific canine breed characteristics. Dogs are also susceptible to a number of diseases that have counterparts in humans. Their unique population structure, relatively limited heterogeneity within breeds, greater genome sequence identity to humans than that of mice, and their sharing of a common environment with humans make them an excellent model organism for certain human diseases.
The iGenomX RipTide library prep is a high-throughput DNA library preparation method for next-generation sequencing that has been used to prepare libraries for a variety of applications in which large numbers of samples require library preparation at low cost. One such application is genotyping by sequencing. Here we show the use of the RipTide library prep in a case-control GWAS, generating over 30 million biallelic SNPs per sample in a cohort of West Highland White Terriers. After filtering, more than 5.2 million SNPs were identified with a minor allele frequency of >5%. Principal component analysis (PCA) showed that the variants permitted accurate identification of breeds. The data also revealed a novel genetic association with Westie lung disease, the canine equivalent of chronic obstructive pulmonary disease in humans.
The iGenomX RipTide library prep combined with Illumina sequencing generated more variants in less time and at lower cost than the standard microarray-based genotyping experiment.
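The minor allele frequency filter described above (keeping SNPs with MAF >5%) is a standard GWAS quality-control step and can be sketched in a few lines. This is an illustration, not the study's pipeline; genotypes are assumed to be coded as diploid alternate-allele counts (0/1/2), and all names are hypothetical:

```python
def minor_allele_freq(genotypes):
    """Minor allele frequency from diploid genotypes coded 0/1/2
    (number of alternate-allele copies per individual)."""
    p = sum(genotypes) / (2 * len(genotypes))  # alternate allele freq
    return min(p, 1 - p)

def filter_by_maf(snps, threshold=0.05):
    """Keep SNPs whose minor allele frequency exceeds the threshold,
    mirroring the >5% MAF filter described in the abstract."""
    return {
        snp: gts for snp, gts in snps.items()
        if minor_allele_freq(gts) > threshold
    }

# Toy cohort of four dogs: snp2 is monomorphic and gets dropped.
snps = {
    "snp1": [0, 1, 2, 1],  # MAF 0.5
    "snp2": [0, 0, 0, 0],  # MAF 0.0
    "snp3": [0, 0, 0, 1],  # MAF 0.125
}
kept = filter_by_maf(snps)
```

The retained genotype matrix would then feed the PCA and association tests mentioned in the abstract.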
(165) Genomic and RNA Profiling Core (GARP)
The Baylor College of Medicine Genomic and RNA Profiling (GARP) Core’s mission is to facilitate cutting-edge genomics research by providing state-of-the-art equipment and expert assistance in strategic and experimental planning.
GARP technologies include the NanoString Technologies nCounter platform, which allows highly multiplexed, direct digital counting of hundreds of targets without the use of enzymes. This PCR-free assay accommodates multiple applications, is well suited for gene expression studies, and can handle challenging sample types. The GARP library prep automation system, the SMARTer Apollo (Takara), handles up to 96 samples per run to generate NGS libraries for low- and high-throughput projects with high reproducibility. Both user-prepped samples and in-house library preparation are accepted for applications such as mRNA-seq, Total RNA-seq (intact and FFPE), limited-input RNA-seq, microRNA-seq, ChIP-seq, amplicon sequencing, whole-exome capture, whole-genome sequencing (amplified and PCR-free) and whole-genome bisulfite sequencing.
Here, we present data illustrating the proficiency of the Swift ACCEL-NGS Methyl-Seq Library Prep kit in generating WGBS libraries from genomic DNA inputs as low as 10-100 ng for sequencing on the Illumina NextSeq 500 and NovaSeq 6000.