Concurrent Scientific Session (Genomics): Open Mic
JAX survey of bioinformatics and data science cores reveals that the 'collaborative and embedded' model is rare but a desirable path forward
The service and support model has dominated the function of bioinformatics cores: providing a broad range of data analysis services to faculty labs that lack such expertise. This model served well in the early days of genomics research, which was many-fold less demanding than today's. However, it may not remain effective for current and future biomedical research, which involves complex, integrative multi-omic data analysis and interpretation. At The Jackson Laboratory, against the backdrop of the recent transformation of Computational Sciences (our bioinformatics core), we held discussions with bioinformatics cores at universities and research institutes, as well as data science cores in industry, across the USA. In these discussions, we paid particular attention to the functions and models in use, and to how a 'collaborative and embedded' model could resolve a number of challenges, including relationships with faculty labs, scope management, staff recruitment, and the complexity of modern genomics research. Such a model is rare in the biomedical academic space, but it is commonly employed by data science cores in industry, where it appears to yield significant positive results and address major challenges. Hence, we believe the time has come for a collaborative and embedded model in bioinformatics cores in the biomedical academic space as well.
GBIRG bioinformatics core survey highlights the challenges facing data analysis facilities.
Over the last decade, the cost of -omics data creation has decreased, while the need for analytical support for those data has increased exponentially. Integration of -omics data sets of differing size and scale challenges bioinformaticians computationally and statistically, requiring more sophisticated pipelines, and thus more time and money. Nonetheless, bioinformatics cores are often asked to operate under various cost-recovery models with limited institutional support. How widespread is this model? Does it serve the scientific community successfully? What other administrative challenges do bioinformatics cores face? Seeing the need to assess bioinformatics core operations, GBIRG conducted a survey to answer these questions and to better understand how bioinformatics cores should evolve going forward.
Miniaturization of Illumina library preparation for high-throughput, plate-based next-generation sequencing studies
The high cost of Illumina library production is a critical factor driving underpowered experiments. Several studies have demonstrated that increasing biological replicates, even without increasing the total amount of sequencing, significantly improves experimental power. Automated liquid handlers can address this issue through increased throughput and miniaturization of reaction volumes. However, integrating these robots into centralized facilities can be challenging due to the diverse projects, sample types, and sample qualities submitted for analysis. Here we demonstrate the miniaturization of a diverse set of Illumina library protocols on the TTP Mosquito HV and their application to a broad variety of biological sample types. Methods for both RNA and DNA library production, from both intact and fragmented samples, have been successfully implemented and are in routine use with either Illumina or NEB reagent sets. The miniaturization of these protocols has resulted in significant savings in library production cost and a notable increase in the average number of replicates performed by scientists.