From pinhole to panorama

Seer’s Proteograph™ technology offers richer access to the proteome

Keep scrolling to learn how an unbiased view of the proteome can reveal biological discoveries you never thought possible.

Seer technology for proteomics research

For decades, a limiting factor in proteomics research has been the inability to access the proteome in ways necessary to survey and understand its diversity. Today, deep proteomics workflows are costly, complex, and time-consuming, and researchers continue to face an ongoing tradeoff between the scale of a study and the depth of its coverage.

So, Seer created a new technology and approach to provide a transformative lens on the proteome.


Unlock critical, novel biological insights

Seer’s Proteograph technology platform enables you to resolve biology at a more fundamental level, from peptide differential expression to the identification of different biologically relevant proteoforms, by offering the four attributes proteomics studies need today.

Proteograph Key Attributes:

  • Unbiased coverage: Discover new biology, not just what you can capture
  • Deep access: Survey across the dynamic range of proteins in complex samples without depleting or fractionating
  • Rapid workflow: Rely on an optimized, robust workflow to complete your projects with minimal hands-on time
  • Scalable technology: Power longitudinal studies with scalable assay designs

Researchers are making important discoveries with the Proteograph

Being able to catalogue the full set of proteins in any sample, or identify the ones associated with disease, could potentially revolutionize science and healthcare. At Seer, our mission is to pioneer new ways to decode the secrets of the proteome to improve human health.

Ready to level up your proteomics research?

The Proteograph technology enables deep proteome coverage within different biofluids

Seer’s Proteograph workflow

A simplified pathway to get an unlimited, unbiased view of the proteome at scale.

Proteomic Technologies FAQs

Proteomics research has many focus areas, including protein discovery, protein quantification, and protein characterization. While each research focus requires distinct sets of sample preparation techniques and detection strategies, comprehensive assessment of the proteome requires an unbiased, rapid, deep, and scalable technology.

Researchers focused on protein discovery require techniques that cast a wide net to catch as many proteins as possible (i.e., unbiased technology) across the dynamic range (i.e., deep technology) to discover protein biomarkers associated with a phenotype. This should be done across a large enough cohort of samples to empower discovery (i.e., scalable and rapid technology). Given the content breadth and depth, and need for large cohort sizes, an unbiased, rapid, deep, and scalable technology is important for researchers focused on protein discovery.

Proteomics techniques can largely be distinguished into two types: sample preparation (separation and isolation of proteins) and protein identification/characterization. Sample preparation includes various gel-based and chromatography-based strategies. Liquid chromatography (LC) is commonly paired with Mass Spectrometry (LCMS) and can separate proteins in a complex mixture. Protein identification/characterization includes Edman sequencing, which is the detection of amino acid sequences in peptides/proteins; affinity-based proteomics, which typically uses antibodies or aptamers to target specific proteins; and Mass Spectrometry (MS), which is the process by which proteins are ionized and their masses are calculated from mass-to-charge ratios.

Liquid chromatography (LC) is a technique used to separate a mixture of peptides to reduce peptide complexity and control the flow of content into the MS. One common approach uses reverse phase column chemistry, in which peptides are separated by hydrophobicity: peptides bind to the column packing material and elute over time as a function of the LC buffer composition, so that eluents are delivered to the MS gradually. This technique helps the MS operate within its optimum scan speed and dig deeper into the proteome. The resolving power of an LC system is tunable and scalable and depends on pump pressure limits, the size of the packing material in the LC column, column length, gradient length, and buffer composition. In cases where a user may wish to increase the LC separation power, ion mobility separation may be used orthogonally to LC.
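
As a rough illustration of the separation principle (a toy heuristic, not any vendor's retention-time model), the sketch below ranks hypothetical peptides by their Kyte-Doolittle hydropathy (GRAVY) score; more hydrophobic peptides tend to bind a reverse phase column more strongly and elute later in the gradient.

```python
# Toy sketch: rank hypothetical peptides by Kyte-Doolittle hydropathy (GRAVY)
# as a rough proxy for reverse-phase elution order. Real retention behavior
# also depends on column chemistry, gradient, and buffer composition.

KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(peptide: str) -> float:
    """Grand average of hydropathy: mean residue hydropathy of the peptide."""
    return sum(KYTE_DOOLITTLE[aa] for aa in peptide) / len(peptide)

# Hypothetical peptide sequences (illustrative only)
peptides = ["LVNEVTEFAK", "GDSLAYGLR", "AEFAEVSK", "VPQVSTPTLVEVSR"]
for pep in sorted(peptides, key=gravy):
    print(f"{pep:16s} GRAVY = {gravy(pep):+.2f}  (more negative ~ earlier elution)")
```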

Mass Spectrometry (MS) is an analytical tool used to detect and quantify unknown compounds, like peptides and proteins, in a sample based on their mass-to-charge ratios (m/z). MS consists of three components:

  • The ionization source: where molecules are converted to gas-phase ions.
  • The mass analyzer: where ions are sorted and separated by m/z ratio.
  • The ion detector: where the m/z ratios of separated ions are measured, producing a mass spectrum that shows the intensity of separated ions by m/z ratio.

The mass spectra produced by the MS are then analyzed to identify and quantify proteins. 
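
For instance, the m/z reported for a peptide ion follows directly from its neutral mass and charge state; a minimal sketch of that relationship, assuming protonation as the charge carrier (positive-mode electrospray), is shown below.

```python
# Minimal sketch: convert a peptide's neutral monoisotopic mass to the m/z
# observed at different charge states, assuming protonation (positive mode).
PROTON_MASS = 1.007276  # Da, mass of a proton (the assumed charge carrier)

def mass_to_mz(neutral_mass: float, charge: int) -> float:
    """m/z = (M + z * m_proton) / z for a positively charged peptide ion."""
    return (neutral_mass + charge * PROTON_MASS) / charge

neutral_mass = 1046.54  # Da, hypothetical peptide monoisotopic mass
for z in (1, 2, 3):
    print(f"z = {z}: m/z = {mass_to_mz(neutral_mass, z):.4f}")
```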

Mass Spectrometers can be operated in different modes, including data-dependent acquisition (DDA) mode (MS2 scans are triggered by the most intense peaks in the MS1 spectrum), data-independent acquisition (DIA) mode (windows of MS1 spectra are captured for MS2 analysis), and targeted mode (peptides are targeted based on a predefined list of interest).
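
As a simplified illustration of the DDA logic (not any instrument's actual acquisition software), the sketch below picks the top-N most intense MS1 peaks as precursors for MS2, skipping m/z values that were recently fragmented as a stand-in for dynamic exclusion.

```python
# Toy sketch of DDA "top-N" precursor selection from one MS1 survey scan.
# Real acquisition software also applies charge-state filters, isolation
# windows, and time-based dynamic exclusion; this is a simplification.

def select_precursors(ms1_peaks, top_n=3, excluded=(), tol=0.01):
    """Return the top_n most intense (mz, intensity) peaks not already excluded."""
    selected = []
    for mz, intensity in sorted(ms1_peaks, key=lambda p: p[1], reverse=True):
        if any(abs(mz - ex) <= tol for ex in excluded):
            continue  # recently fragmented; skip (dynamic exclusion stand-in)
        selected.append((mz, intensity))
        if len(selected) == top_n:
            break
    return selected

# Hypothetical MS1 peaks: (m/z, intensity)
ms1 = [(445.12, 9.8e6), (523.77, 2.1e7), (611.30, 5.5e6), (702.41, 1.3e7)]
print(select_precursors(ms1, top_n=2, excluded=[523.77]))
```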

The two major Mass Spectrometry approaches to study the proteome are the top-down approach, which analyzes intact proteins and enables near 100% sequence coverage but is challenging to scale, and the bottom-up approach, which analyzes proteins that are enzymatically digested into peptides before Mass Spectrometry analysis. While easier to implement, the bottom-up approach requires peptides to be computationally reconstructed into proteins for protein identification, and it has lower sequence coverage than the top-down approach, making proteoforms more challenging to resolve.
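
To make the bottom-up idea concrete, here is a minimal in-silico digest with trypsin, the enzyme most commonly used in bottom-up workflows: cleave after lysine (K) or arginine (R), except when the next residue is proline (P). This is a sketch of the concept, not Seer's workflow, and the protein sequence is hypothetical.

```python
# Minimal sketch: in-silico trypsin digestion for a bottom-up workflow.
# Trypsin cleaves C-terminal to K or R, but typically not before proline.

def tryptic_digest(protein: str) -> list[str]:
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start : i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])  # C-terminal peptide
    return peptides

# Hypothetical protein sequence (illustrative only)
print(tryptic_digest("MKWVTFISLLFLFSSAYSRGVFRRDAHK"))
```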

Yes, there can be. Upstream workflows are complex, time-consuming, and require technical expertise. There can also be challenges with protein solubility, the proteomic dynamic range (quantifying low-abundance proteins), proteome complexity, data analysis, proteoform identification, and throughput.

The top 22 most abundant proteins account for approximately 99% of the total protein mass in plasma, yet the many thousands of less abundant proteins found in the other 1% have significant impact on biology and health. The main challenge in proteomic research today is that many of the current approaches for studying proteins are limited in depth and, therefore, limited in their ability to access that 1%. Techniques, such as depletion, exist to reduce the signal from the most abundant proteins; however, these workflows can be costly, complex, and time-consuming.
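
A back-of-the-envelope calculation shows why this is so demanding; the numbers below (for example, ~10,000 detectable plasma proteins sharing the remaining 1% of the mass) are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch of the plasma dynamic-range problem.
# Assumed numbers for illustration only: ~22 proteins hold ~99% of the mass,
# and ~10,000 other proteins share the remaining ~1%.
top_proteins, top_mass_fraction = 22, 0.99
other_proteins, other_mass_fraction = 10_000, 0.01

avg_top = top_mass_fraction / top_proteins        # mass fraction per abundant protein
avg_other = other_mass_fraction / other_proteins  # mass fraction per low-abundance protein
print(f"An average abundant protein is ~{avg_top / avg_other:,.0f}x more "
      "abundant than an average low-abundance protein")
# The true span is even larger: plasma protein concentrations are often
# described as covering ten or more orders of magnitude.
```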

Depletion is a method used in proteomics research to access low-abundance biomarkers. It reduces the complexity of biological samples, such as serum, plasma, or other biofluids, by removing the most abundant proteins, enhancing the detection of lower-abundance proteins in both discovery and targeted proteomic analyses.

Analyses performed at the protein level (i.e., protein-centric) involve understanding biology using proteins; however, protein-centric analyses can conceal important distinctions between protein forms, or proteoforms, which arise from alternative splicing (protein isoforms), allelic variation (protein variants), or post-translational modifications, and which provide mechanistic insights underlying complex traits and disease. Analyses performed at the peptide level (i.e., peptide-centric) are higher resolution and enable researchers to zoom in on peptide sequences to identify proteoforms and better understand biology.
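
The toy sketch below, with made-up intensities, illustrates the point: rolling two peptides up to a single protein intensity can average away a change that is visible for only one of the peptides, such as a peptide spanning a variant or modified site.

```python
# Toy sketch: protein-centric roll-up can mask a peptide-level difference.
# Intensities are made up for illustration.
case =    {"PEPTIDE_A": 100.0, "PEPTIDE_B": 400.0}   # PEPTIDE_B elevated in cases
control = {"PEPTIDE_A": 100.0, "PEPTIDE_B": 100.0}

for pep in case:
    print(f"{pep}: case/control ratio = {case[pep] / control[pep]:.1f}")

# Protein-centric view: sum peptides into one protein intensity per group.
protein_ratio = sum(case.values()) / sum(control.values())
print(f"Protein-level ratio = {protein_ratio:.1f}  "
      "(the 4-fold peptide-level change is diluted to 2.5-fold)")
```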

Seer’s engineered nanoparticles (NPs) consist of a magnetic core and a surface with unique physicochemical properties. When NPs are introduced into a biofluid, such as blood plasma, a selective and reproducible protein corona is formed at the nano-bio interface, driven by a combination of protein-NP affinity, protein abundance, and protein-protein interactions. In a process called the Vroman effect, proteins compete for the binding surface on the NP, which results in the binding of high-affinity, low-abundance proteins. Specifically, pre-equilibrium, the protein corona composition is driven mainly by proteins in close proximity to the NP, which are commonly high-abundance proteins. At equilibrium, high-abundance, low-affinity proteins are displaced by low-abundance, high-affinity proteins, resulting in the sampling of proteins across the wide dynamic range. NPs can be tuned with different functionalizations to enhance and differentiate protein selectivity.
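
As a very rough illustration of this competition (a toy model, not a physical simulation of corona formation), the sketch below contrasts a pre-equilibrium ranking driven mostly by abundance with an equilibrium ranking driven by the product of abundance and affinity; all values are invented.

```python
# Toy sketch of Vroman-style competition on a nanoparticle surface.
# Pre-equilibrium: binding is dominated by whichever proteins arrive first,
# i.e., roughly by abundance. At equilibrium: occupancy scales with
# abundance x affinity, so low-abundance, high-affinity proteins win surface.
# Abundances and affinities below are made up for illustration.
proteins = {
    # name: (relative abundance, relative affinity for the NP surface)
    "albumin (high abundance, low affinity)": (1000.0, 0.1),
    "mid-abundance protein":                  (10.0,   5.0),
    "low-abundance, high-affinity protein":   (1.0,    500.0),
}

pre_eq = sorted(proteins, key=lambda p: proteins[p][0], reverse=True)
at_eq  = sorted(proteins, key=lambda p: proteins[p][0] * proteins[p][1], reverse=True)

print("Pre-equilibrium corona (abundance-driven):", pre_eq)
print("Equilibrium corona (abundance x affinity):", at_eq)
```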

Coefficient of Variation (CV) is a statistic used to compare the extent of variation from one set of data to another (i.e., reproducibility), even if the means are very different. CV measures the relative spread of data points around the average (the standard deviation divided by the mean). Factors that may contribute to variability include technical variability (operator, protein preparation method, instrument, ionization efficiency, or software choice) and biological variability (genetic background, disease state, age of subject, or gender). Recent studies have shown that intensity CVs of ~20-40% are expected.
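
For example, the CV of a set of replicate intensity measurements can be computed directly; here is a minimal sketch with made-up replicate values.

```python
# Minimal sketch: coefficient of variation (CV) of replicate peptide intensities.
import statistics

replicate_intensities = [1.02e6, 1.10e6, 0.95e6, 1.08e6]  # made-up values

mean = statistics.mean(replicate_intensities)
stdev = statistics.stdev(replicate_intensities)  # sample standard deviation
cv_percent = 100 * stdev / mean

print(f"CV = {cv_percent:.1f}%")
```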

Proteogenomics is an area of research that combines proteomics and genomics to better understand the flow of genetic information between DNA, RNA, and proteins.

Next generation sequencing (NGS) uses massively parallel sequencing to enable high-throughput, scalable, and fast generation of genomic information from DNA and RNA (i.e., the four bases: A, C, T, and G) to measure the human genome. While NGS has revealed important findings, proteomics must be considered to gain a comprehensive understanding of human biology.

For example, when we compare transcriptomic data with proteomic data in the same biosample, we see a low or modest correlation between the two. This relatively weak correlation between the transcriptome and the proteome supports the importance and added value of studying biology using proteins. Much like NGS enables almost complete coverage of the genome, a next generation proteomics (NGP) technology that enables almost complete coverage of the proteome through unbiased and deep coverage will be an important advancement and milestone in understanding human biology.
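
A minimal sketch of such a comparison is shown below, using made-up paired transcript and protein abundances and a Spearman rank correlation (a common choice, since abundance distributions are rarely normal).

```python
# Minimal sketch: correlate transcript and protein abundances for the same genes.
# Values are made up; real mRNA-protein correlations are often reported as
# modest, which is the point the answer above is making.
from scipy.stats import spearmanr

mrna_abundance    = [12.0, 85.0, 3.0, 40.0, 150.0, 7.0]   # e.g., TPM
protein_abundance = [9.0,  30.0, 5.0, 55.0, 60.0,  2.0]   # e.g., LFQ intensity

rho, p_value = spearmanr(mrna_abundance, protein_abundance)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```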

Several factors should be considered when choosing proteomics software. Tools for post-acquisition QC are useful, including inspection of chromatograms and spectra to examine the raw signal, troubleshoot issues, and tune settings, as well as tools to compare injections across time to assess LCMS performance. Additionally, tools to perform post-acquisition analysis, such as peptide/protein identification and quantification, and tools to gain biological insights are also important. For both post-acquisition QC and analysis, proteomics data analysis software should be scalable, easy to use, and high-performing, and should enable automation and support data storage/management.
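
One simple QC check of the kind described above is to track a summary metric, such as total ion current (TIC) per injection, across a batch and flag injections that drift from the median; the sketch below uses made-up values and an arbitrary 20% tolerance.

```python
# Minimal sketch: flag injections whose total ion current (TIC) deviates from
# the batch median by more than a chosen tolerance. Values are made up.
import statistics

tic_by_injection = {
    "run_01": 4.8e9, "run_02": 5.1e9, "run_03": 5.0e9,
    "run_04": 3.1e9,  # e.g., a spray or sample-loading problem
    "run_05": 4.9e9,
}

median_tic = statistics.median(tic_by_injection.values())
tolerance = 0.20  # flag injections more than 20% from the median (arbitrary choice)

for run, tic in tic_by_injection.items():
    deviation = abs(tic - median_tic) / median_tic
    status = "FLAG" if deviation > tolerance else "ok"
    print(f"{run}: TIC = {tic:.2e}  deviation = {deviation:5.1%}  {status}")
```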

Establishing and maintaining the computational infrastructure that is needed to effectively analyze proteomics data can be challenging for some labs. Cloud computing offers a flexible, scalable, and low-cost solution for proteomics data analysis, helping proteomics researchers conduct high-throughput proteomics studies at scale.

Keep Exploring

Proteograph Product Suite

Seer’s technology platform can broadly interrogate the dynamic range of proteins in a biological sample with easier-to-use workflows and powerful precision to study hundreds to thousands of samples.

Learn More

Proteograph Analysis Suite

See how you can quickly and accurately analyze your proteomics data in a matter of minutes and just a few clicks.

Start Interactive Demo

Customer Stories

Alex Rosa Campos, Ph.D., of Sanford Burnham Prebys shares his experience using the Proteograph Product Suite, his team’s workflow, and the results of their study.

Watch Now