Article Category: Reproducibility

Technical and Biological Replicates are Critical for Quantitative Western Blot Success

Replicates improve the reproducibility and accuracy of experimental findings by confirming the validity of observed changes in protein levels. Without replication, it is impossible to know whether an effect is real or simply an artifact of experimental noise or variation, and any conclusions drawn from the experiment may be unreliable.

There are two types of replicates, biological and technical, and each type addresses different questions1,2,3. Peer-reviewed journals, such as the Journal of Biological Chemistry, have specific guidelines regarding replicates.

“Authors must state the number of independent samples (biological replications) and the number of replicate samples (technical replicates) and report how many times each experiment was repeated.”
Instructions for Authors, The Journal of Biological Chemistry

Technical vs. Biological Replicates: Which Do You Need to Include?

Technical Replicates

Technical replicates are repeated measurements used to establish the variability of a protocol or assay and to determine if an experimental effect is large enough to be reliably distinguished from the assay noise1. Examples include loading multiple lanes with each sample on the same blot, running multiple blots in parallel, or repeating the blot with the same samples on different days.

Figure 1. Technical replicates help identify variation in technique. For example, lysate derived from a single mouse and treated under a set of experimental conditions (A, B, C), then run and measured independently three times, reveals variation in technique.

Technical replicates evaluate the precision and reproducibility of an assay to determine whether the observed effect can be reliably measured. When technical replicates are highly variable, it is more difficult to separate the observed effect from the assay variation, and you may need to identify and reduce sources of error in your protocol to increase the precision of your assay.
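For a concrete sense of how this precision check might look, here is a minimal Python sketch using made-up band intensities; the values and the use of the coefficient of variation (CV) as the precision metric are illustrative assumptions, not part of any prescribed protocol.

    import statistics

    # Hypothetical band intensities from three technical replicates of
    # the same sample (e.g., the same lysate loaded in three lanes).
    replicates = [12500.0, 11800.0, 12900.0]

    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)   # sample standard deviation
    cv_percent = 100.0 * sd / mean      # coefficient of variation (%)

    print(f"mean = {mean:.0f}, SD = {sd:.0f}, CV = {cv_percent:.1f}%")

If the CV is large relative to the fold change you hope to detect, the observed effect cannot be separated from assay noise, which is exactly the situation described above.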

Technical replicates do not address the biological relevance of the results.

Biological Replicates

Biological replicates are parallel measurements of biologically distinct and independently generated samples, used to control for biological variation and determine if the experimental effect is biologically relevant. The effect should be reproducibly observed in independent biological samples. Demonstration of a similar effect in another biological context or system can provide further confirmation. Examples include analysis of samples from multiple mice rather than a single mouse, or from multiple batches of independently cultured and treated cells.

Figure 2. Biological replicates derived from independent samples help capture random biological variation. For example, lysates derived from three mice treated under the same set of experimental conditions (A, B, C) help identify variation arising from the biology.

To demonstrate the same effect in a different experimental context, the experiment might be repeated in multiple cell lines, in related cell types or tissues, or with other biological systems.
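As a rough sketch of how replicate data might be analyzed, the Python example below (with hypothetical normalized intensities) first averages the technical replicates within each biological replicate, then compares treatment groups across biological replicates; the point is that the mouse, not the lane, is the unit of analysis.

    from statistics import mean
    from scipy import stats

    # Hypothetical normalized intensities: three mice per group, each
    # measured in three technical replicates (lanes).
    control = [[1.02, 0.98, 1.05], [0.91, 0.95, 0.97], [1.10, 1.04, 1.08]]
    treated = [[1.55, 1.48, 1.60], [1.35, 1.42, 1.38], [1.66, 1.58, 1.70]]

    # Collapse technical replicates first: one mean value per mouse.
    control_means = [mean(m) for m in control]
    treated_means = [mean(m) for m in treated]

    # Compare groups using the biological replicates (n = 3 per group).
    t_stat, p_value = stats.ttest_ind(treated_means, control_means)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Collapsing the technical replicates before the comparison prevents lanes from being counted as independent observations, which would inflate n and understate the p-value.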

An appropriate replication strategy should be developed for each experimental context. Several recent papers discuss considerations for choosing technical and biological replicates1,2,3.

This protocol, Quantitative Western Blot Analysis with Replicates, will guide you in choosing and incorporating technical and biological replicates in your experimental design for reproducible data. It includes calculations for replicate analysis, as well as guidance on interpreting the data you obtain.

Additional Resources to Help You Get the Best Data

LI-COR has additional resources that you can use as you plan your quantitative Western blot strategy.

References:

  1. Naegle K, Gough NR, Yaffe MB. Criteria for biological reproducibility: what does “n” mean? Sci Signal 8(371): fs7 (2015).
  2. Blainey P, Krzywinski M, Altman N. Replication: quality is often more important than quantity. Nat Methods 11(9): 879-880 (2014).
  3. Vaux DL, Fidler F, Cumming G. Replicates and repeats – what is the difference and is it significant? EMBO Reports 13(4): 291-296 (2012).

Why Can Western Blot Data Be Difficult to Reproduce?

Western blot analysis is susceptible to error and variation from many sources: the technique itself, as well as the reagents, samples, and materials used in the assay. While it is impossible to eliminate all variation and error, by accounting for their sources and following good Western blotting practices, you can minimize both and generate accurate, replicable results.

Let’s consider some common sources of variation, and best practices to minimize their impact on the accuracy of results.

Cell line and cell culture practices

Cell lines, the very source of samples for Western blotting, can introduce error and variability into the assay and analysis. There is a risk of cross-contamination between different cell lines, and of infection with bacteria, viruses, or other agents, when working in shared cell culture hoods and incubators1. Repeated propagation of cell lines can also result in cell line drift, causing changes in the genetic makeup of cells and possibly also in protein expression1. Cell culture media, serum, reagents, and glassware affect the growth of cells and the overall experimental conditions, potentially affecting assay results1,3.

What can you do?
For human cell lines, use cells authenticated using Short Tandem Repeat (STR) profiling, and check animal-derived cell lines for mycoplasma and viral contamination1. Develop a growth profile of your cell line before initiating experiments, so you know when to harvest cells, perform assays, or start a fresh culture3. Microscopic assessment and analysis of expression data also help identify any changes in cell behavior3.

Primary antibodies

Western blotting results depend heavily on the quality of the primary antibodies used. Batch-to-batch variability, as well as cross-reactivity of the antibody with different isoforms of the target or with entirely different proteins, can lead to non-specific binding and background signal, yielding results that are difficult to replicate4.

What can you do?
Validate antibodies to confirm their binding and specificity to the protein of interest before using them in assays. Obtain antibody data, such as batch and lot numbers, cross-reactivity, and characterization assays, from antibody vendors4. By including positive and negative controls in your experiment, you can check for non-specific binding of the antibody to other proteins present in your samples4. Use antibodies only in applications recommended by the vendor, as functionality may vary between types of experiments (e.g., Western blotting and immunofluorescence)3.

Loading and normalization

Measuring changes in the expression levels of proteins is a relative assessment. If you inadvertently load unequal amounts of sample across wells, comparisons of protein levels will be skewed. Similarly, normalizing data to a single housekeeping protein whose expression may have been affected by the experimental treatments, or whose stable expression has not been validated, will also introduce error into your analysis.

What can you do?

Check for equal sample loading across lanes and uniform housekeeping protein expression using a loading indicator.

For an even more airtight analysis, consider normalizing data to total protein loading (use this protocol for normalization with REVERT™ Total Protein Stain). In contrast to normalization with a single housekeeping protein, total protein normalization reduces the impact of biological variability by accounting for all of the protein loaded in the lane, and it also allows you to evaluate transfer efficiency prior to immunodetection.
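To illustrate the arithmetic behind total protein normalization, here is a minimal sketch with hypothetical signal values; the lane signals and the choice of reference lane are assumptions for illustration, not output from any particular analysis software.

    # Hypothetical target and total protein signals for four lanes.
    target_signal = [8200.0, 9100.0, 15400.0, 14800.0]
    total_protein = [51000.0, 56000.0, 52000.0, 49000.0]

    # Lane normalization factor: each lane's total protein signal
    # relative to a reference lane (here, lane 1).
    reference = total_protein[0]
    factors = [tp / reference for tp in total_protein]

    # Normalized target signal = target signal / lane factor.
    normalized = [s / f for s, f in zip(target_signal, factors)]

    for lane, value in enumerate(normalized, start=1):
        print(f"lane {lane}: normalized target signal = {value:.0f}")

Dividing by the lane factor corrects each target measurement for how much total protein that lane actually received.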

Range of detection

Are you comparing data captured at different exposures or on separate blots? Think about some of the sources of variability in this comparison: experimental conditions, reagents used, and exposure times. Can you have confidence in your comparative analysis?

What can you do?

To accurately detect and compare signals from both protein targets and internal loading controls (whether a housekeeping protein, modified forms of a protein, or total protein), you need to measure data from the same blot, within the combined linear detection range of the assay. The linear range is where the signal intensities detected by the imaging system are proportional to protein abundance. So how do you determine the combined linear range? Create a dilution series to determine the linear response range of both the target protein and the internal loading control. For quantitative analysis, load sample amounts that produce a linear response within that range. This protocol will guide you through the steps.
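One way to sanity-check linearity is to fit a straight line to signal versus amount loaded and inspect the fit, as in the sketch below; the loading amounts and signal values are hypothetical.

    from scipy.stats import linregress

    # Hypothetical dilution series: amount loaded (ug) vs. measured signal.
    amount_ug = [2.5, 5.0, 10.0, 20.0, 40.0]
    signal = [1300.0, 2550.0, 5100.0, 9900.0, 15200.0]  # flattens at the top

    fit = linregress(amount_ug, signal)
    print(f"all points:      r^2 = {fit.rvalue ** 2:.3f}")

    # If the fit is poor, drop the highest (saturating) point and refit
    # to find the range where signal is proportional to protein loaded.
    trimmed = linregress(amount_ug[:-1], signal[:-1])
    print(f"highest dropped: r^2 = {trimmed.rvalue ** 2:.3f}")

The loading amounts that survive this trimming define the range within which quantitative comparisons are meaningful.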

Detection

You are eager to see the results of your experiment, but depending upon how you choose to visualize them, you could be introducing a whole new set of variables into your data. X-ray film has a very narrow detection range in which its response to light is linear5. Signals above and below this range cannot be accurately documented on film, so band intensities are not proportional to the light emitted during the chemiluminescence reaction. In addition, the signal from the enzymatic reaction varies with substrate incubation time, type, amount, and temperature, among other factors. Acquiring accurate signals within the combined linear range of the film and the enzymatic chemiluminescence reaction is all the more challenging.

What can you do?
Consider detection using near-infrared fluorescence imaging. Digital imaging provides a much wider dynamic range compared to film. Direct detection using dye-conjugated antibodies eliminates variation of the enzymatic reaction. As a result, you can acquire signals within the combined linear range, proportional to the amount of protein present on your blot.

Data analysis

Beware of conclusions based on data from a single experimental run. Variation in the biology of the experimental system, as well as in the assay, technique, and equipment, needs to be accounted for using replicate samples.

Similarly, analyzing data images with unsupported software programs leaves your data vulnerable to error. Image enhancement features like gamma correction, as well as conversion to other file formats, can cause non-linear adjustments to the image and/or loss of the data depth needed for accurate analysis.

What can you do?
Include both technical and biological replicate samples in your experimental design. Technical replicates are repeated measurements of a sample, representing independent measurements of the noise in equipment and technique6. For instance, loading multiple wells with the same amount of sample, or repeating blots with the same samples on different days, are ways to account for variation in technique. Biological replicates, on the other hand, capture random biological variation by measuring responses in biologically distinct samples6. For example, you could repeat your assay with independently generated samples from different cell or tissue types to confirm that your observations are not an irreproducible fluke. See more tips on replicate samples.

Technical replicates help identify variation in technique.

Biological replicates derived from independent samples capture random biological variation.

As for data analysis, always use software programs that are compatible with your imaging system and designed for your specific assay. Minimize image processing, as not all software packages indicate whether the original data is modified. Avoid converting and transferring files between software programs.

Now that you know some of the experimental factors that could be influencing your Western blot results, how will you implement these best practices in your protocols, detection, and data analysis? Get a refresher on the basics of Western blotting at Lambda U™.

References:

  1. Freedman LP, Venugopalan G, Wisman R. Reproducibility2020: Progress and priorities. F1000Research 6:604 (2017). doi:10.12688/f1000research.11334.1
  2. Cell Line Authentication. The Global Biological Standards Institute™. Web. Accessed December 20, 2017.
  3. Baker M. Reproducibility: Respect your cells! Nature 537: 433-435 (2016). doi:10.1038/537433a
  4. Baker M. Reproducibility crisis: Blame it on the antibodies. Nature 521: 274-276 (2015). doi:10.1038/521274a
  5. Laskey RA. Efficient detection of biomolecules by autoradiography, fluorography or chemiluminescence. Amersham Life Sci. Review 23: Part II (1993).
  6. Blainey P, Krzywinski M, Altman N. Points of Significance: Replication. Nature Methods 11(9): 879-880 (2014). doi:10.1038/nmeth.3091

Tracing the Footsteps of the Data Reproducibility Crisis

Have you found it challenging to replicate the results of your own or somebody else’s experiments? You are not alone. A member survey conducted by the American Society for Cell Biology (ASCB) revealed that, out of 869 respondents, 72% had trouble reproducing the findings of at least one publication1. In a more comprehensive study by the Nature Publishing Group, over 60% and 70% of researchers surveyed in medicine and biology, respectively, reported failing to replicate other researchers’ results2. And of the 1,576 scientists surveyed across various fields, 90% agreed that there is a reproducibility crisis in the scientific literature2.

In case you are wondering, both surveys were conducted between 2014 and 2015, and there is a growing consensus about data reproducibility challenges. But how did we get here?

Beginnings of a Crisis

In 2011, scientists at Bayer HealthCare in Germany published an article in Nature Reviews Drug Discovery reporting inconsistencies between published data and in-house target validation studies3. Of the 67 target identification and validation projects they analyzed for data reproducibility, 43 showed inconsistencies that resulted in project termination3. Through this review, the Bayer researchers sought to raise awareness of the challenges of reproducing published data and called for confirmatory studies prior to investing in downstream drug development projects3.

Close on the heels of Bayer’s report, researchers at Amgen described their attempts to replicate the results of published oncology studies in a 2012 Nature commentary4. Reporting success in confirming the findings of only 6 of the 53 landmark publications reviewed, the Amgen scientists outlined recommendations to improve the replicability of preclinical studies4.

These publications spurred data reproducibility conversations within the biomedical research community, giving way to a wave of initiatives to analyze and address the problem.

Data Reproducibility Gaining Momentum

Reproducibility of research data depends, in part, on the specific materials used in the experiment. But how often are research reagents described in sufficient detail? One study found that 54% of resources reported in publications, including model organisms, antibodies, reagents, constructs, and cell lines, were not uniquely identifiable5. To promote proper reporting of research materials, the NIH has recommended that journals expand or eliminate limits on the length of methods sections17.

When you had challenges reproducing data in your lab, were you able to identify what caused them? In another publication, study design, biological reagents and reference materials, laboratory protocols, and data analysis and reporting were identified as the four primary causes of experimental irreproducibility6. In effect, an estimated 50% of the U.S. preclinical research budget, or $28 billion a year, was reportedly being spent on data that is not reproducible6.

Based on feedback from researchers in academia and biopharmaceutical companies, journal editors, and funding agency personnel, the Global Biological Standards Institute (GBSI) developed a report highlighting the need for a standards framework in the life sciences7.

Changes Instituted by Granting Agencies and Policy Makers

In the face of data reproducibility challenges, government agencies that fund research, including the National Institutes of Health (NIH) and the National Science Foundation (NSF), developed action plans to improve the reproducibility of research8,9. The NIH also revised its criteria for grant applications8. That means researchers will need to report details of experimental design and biological variables, and authenticate research materials, when applying for grants8.

The Academy of Medical Sciences (UK), the German Research Foundation (DFG), and the InterAcademy Partnership for Health (IAP for Health) identified specific activities to improve reproducibility of published data10,11,12.

Recommendations on Use of Standards, Best Practices, and Reagent Validation

Among the organizations championing the development of standards and best practices to improve the reproducibility of biomedical research are:

  • The Federation of American Societies for Experimental Biology (FASEB), with recommendations on the use of mouse models and antibodies13
  • The American Statistical Association (ASA), with a report on statistical analysis best practices for publishing data14
  • The Global Biological Standards Institute (GBSI), with recommendations for additional standards in life science research, as well as antibody validation and cell line authentication groups formed in partnership with life science vendors, academia, industry, and journal publishers15
  • Science Exchange, with its efforts to validate experimental results16

Changes to Publication Guidelines

Journal groups have been revising author instructions and publication policies to encourage scientists to publish data that is robust and replicable. That means important changes in the reporting of study design, replicates, statistical analyses, and reagent identification and validation are coming your way.

  • The NIH and journal publishing groups, including Nature, Science, Cell, the Journal of Biological Chemistry, the Journal of Cell Biology, and the Public Library of Science (PLOS), among others, have developed and endorsed principles and guidelines for reporting preclinical research. These guidelines cover statistical analysis, transparency in reporting, data and material sharing, refutations, screening of image-based data (e.g., Western blots), and unique identification of research resources (antibodies, cell lines, animals)17
  • The Center for Open Science (COS) developed the Transparency and Openness Promotion (TOP) guidelines framework for journal publishers. Signatories include publishing groups such as AAAS, ASCB, BioMed Central, F1000, Frontiers, Nature, PLOS, Springer, and Wiley, among others18

Emphasis on Training

To train scientists in proper study design and data analysis, the NIH has developed training courses and modules19. A number of universities also offer courses in study design and statistics20.

In the face of revised grant application and publication guidelines, new standards, reagent validation requirements, and the need for consistent training in methods and technique, is your lab prepared? Let us help you get there. See what has changed for publishing Western blot data and get your entire lab trained to generate consistent, reproducible Western blot data at Lambda U™.

References:

  1. How Can Scientists Enhance Rigor in Conducting Basic Research and Reporting Research Results? American Society for Cell Biology. Web. Accessed October 6, 2017.
  2. Baker M. 1,500 scientists lift the lid on reproducibility. Nature 533: 452-454 (2016).
  3. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 10: 712-713 (2011). doi:10.1038/nrd3439-c1
  4. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature 483: 531-533 (2012). doi:10.1038/483531a
  5. Vasilevsky NA, Brush MH, Paddock H, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1: e148 (2013). doi:10.7717/peerj.148
  6. Freedman LP, Cockburn IM, Simcoe TS. The Economics of Reproducibility in Preclinical Research. PLOS Biology 13(6): e1002165 (2015).
  7. The Case for Standards in Life Science Research: Seizing Opportunities at a Time of Critical Need. The Global Biological Standards Institute™. Web. Accessed November 16, 2017.
  8. Enhancing Reproducibility through Rigor and Transparency. National Institutes of Health. Web. Accessed October 6, 2017.
  9. A Framework for Ongoing and Future National Science Foundation Activities to Improve Reproducibility, Replicability, and Robustness in Funded Research. December 2014. National Science Foundation. Web. Accessed October 6, 2017.
  10. Reproducibility and Reliability of Biomedical Research. The Academy of Medical Sciences (UK). Web. Accessed October 6, 2017.
  11. DFG Statement on the Replicability of Research Results. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). Web. Accessed October 6, 2017.
  12. A Call for Action to Improve the Reproducibility of Biomedical Research. The InterAcademy Partnership for Health. Web. Accessed October 6, 2017.
  13. Enhancing Research Reproducibility: Recommendations from the Federation of American Societies for Experimental Biology. Federation of American Societies for Experimental Biology. Web. Accessed October 6, 2017.
  14. Recommendations to Funding Agencies for Supporting Reproducible Research. American Statistical Association. Web. Accessed October 6, 2017.
  15. Reproducibility2020. The Global Biological Standards Institute™. Web. Accessed October 6, 2017.
  16. Validation by the Science Exchange network. Science Exchange. Web. Accessed November 16, 2017.
  17. Principles and Guidelines for Reporting Preclinical Research. Rigor and Reproducibility. National Institutes of Health. Web. Accessed November 16, 2017.
  18. Transparency and Openness Promotion (TOP). Center for Open Science. Web. Accessed October 6, 2017.
  19. Training. Rigor and Reproducibility. National Institutes of Health. Web. Accessed November 16, 2017.
  20. Freedman LP, Venugopalan G, Wisman R. Reproducibility2020: Progress and priorities [version 1; referees: 2 approved]. F1000Research 6:604 (2017). doi:10.12688/f1000research.11334.1