Concurrent Scientific Session (Mass Spectrometry): Making a Proteomics Core Nimble and Efficient

Abstracts

Implementation and application of data-independent acquisition workflows in a mass spectrometry facility.

Proteomics workflows in mass spectrometric core facilities require great flexibility and adaptation to individual projects, providing high-quality data sets for a diverse range of collaborations. We demonstrate the use of data-independent acquisition (DIA) workflows in our proteomics core across a wide variety of collaborative projects, ranging from small studies to larger-scale efforts. Depending on the project or species, our DIA data processing often draws on published large spectral libraries, such as a pan-human spectral library (Rosenberger et al.) or large mouse spectral libraries (Biognosys). For other projects, we perform data-dependent acquisitions (DDA) prior to the DIA acquisitions to build our own spectral libraries, which are subsequently used to process the quantitative DIA data sets; this will be demonstrated for a C. elegans proteostasis project analyzing protein aggregates of young and old worms. Specific challenges and opportunities also arise when using DIA for post-translational modifications. All acquisitions incorporate retention time standards (iRT) for chromatographic alignment. The presentation discusses practical aspects of using DIA in a proteomics core facility, including challenges and solutions, the use of new technologies, and dissemination of data sets to collaborators and core users.
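To illustrate how iRT retention time standards can support chromatographic alignment, the following minimal Python sketch fits a simple linear calibration between reference iRT values and the retention times observed for spiked standard peptides in a run, and then predicts where library peptides should elute. This is not taken from the presentation; the iRT values and retention times are made up for illustration, and production DIA software performs this calibration internally.

    import numpy as np

    # Hypothetical observed retention times (min) for spiked iRT standard peptides,
    # paired with their reference iRT values from the standard kit (illustrative values).
    irt_reference = np.array([-24.9, 0.0, 12.4, 42.3, 70.5, 100.0])
    observed_rt_min = np.array([12.1, 24.6, 30.8, 45.7, 59.9, 74.6])

    # Fit observed RT = slope * iRT + intercept by least squares.
    slope, intercept = np.polyfit(irt_reference, observed_rt_min, 1)

    def irt_to_rt(irt_value: float) -> float:
        """Predict the expected retention time (min) for a library iRT value."""
        return slope * irt_value + intercept

    # Example: predict where a library peptide with iRT = 55.0 should elute,
    # e.g. to center its DIA extraction window on that retention time.
    print(f"Predicted RT for iRT 55.0: {irt_to_rt(55.0):.1f} min")

The same fitted line can be applied per run, so quantitative DIA extraction windows stay aligned even when chromatography drifts between acquisitions.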

 

Authors:
  • Birgit Schilling
    Email: BSCHILLING@BUCKINSTITUTE.ORG
    Institution: Buck Institute for Research on Aging

Can you deal with sample and experimental diversity in a Proteomics Core while being nimble and efficient? Answer: not really

Proteomics has changed drastically in the last 5 years. Gone are the days when you could run a gel, ID a protein, and make people happy. Now everything is quantitative, and the experimental complexity has gone up 5-10 fold. We need to quantify thousands of proteins (and their PTMs) as accurately as we can from the wide range of samples and organisms that come through the door. In addition, today's proteomics cores have many different mandates. These differ across institutions, but generally fall along the lines of: generate meaningful scientific data for as many people as possible, do it as quickly as you can, and do not lose money. These mandates generally require some degree of efficiency. So how can you be efficient when every new sample in the door requires different methods, and each sample requires different sample preparation techniques? The short answer is that you can't. Yes, you need to get the work done, but pushing the envelope in terms of good quantitative proteomics data in a proteomics core is inherently inefficient. However, there are things you can streamline, and I will present several strategies for doing that, with varying degrees of success.

Authors:
  • Brett S. Phinney
    Email: bsphinney@ucdavis.edu
    Institution: UC Davis