Adaptive Background Subtraction
Introduction to Background Subtraction
In quantitative Western blotting, image background affects the accuracy and variability of analysis. Nonspecific bands, smears, spots, and blotches in Western blot images can make it difficult to precisely determine band intensity. When you draw a shape around a band, background pixels inside the shape also contribute to the signal intensity. During image analysis, background must be subtracted to accurately calculate the signal intensity for regions of interest. An algorithm estimates the intensity of background pixels and subtracts that value from the total signal intensity of the shape to “correct” for background.
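The arithmetic described above can be sketched in a few lines. This is a minimal, generic illustration (not Empiria Studio's implementation): the per-pixel background is estimated from pixels sampled near the band, multiplied by the region's pixel count, and subtracted from the region's total intensity. The function name and data are hypothetical.

```python
from statistics import mean

def corrected_signal(roi_pixels, background_pixels):
    """Generic sketch of background correction for a region of interest.

    The estimated per-pixel background (here, simply the mean of pixels
    sampled outside the band) is multiplied by the region's pixel count
    and subtracted from the region's total intensity.
    """
    background_per_pixel = mean(background_pixels)
    total_intensity = sum(roi_pixels)
    return total_intensity - background_per_pixel * len(roi_pixels)

# A band of 4 pixels sitting on a roughly uniform background of ~10 counts:
band = [110, 112, 108, 110]          # signal + background
surround = [10, 9, 11, 10, 10, 10]   # background sample near the band
print(corrected_signal(band, surround))  # → 400
```

The choice of estimator (mean, median, or something more elaborate) is exactly where real analysis packages differ, which is the subject of the rest of this article.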
Background subtraction is an important aspect of quantitative Western blot analysis, but its accuracy is strongly influenced by both the image context and the type of subtraction algorithm applied (1-3). Currently available Western blot analysis software offers multiple options for subtracting background, and it can be difficult to know which method will produce the most accurate and reproducible quantification. With little guidance about when and why each method is appropriate, researchers must either pick a method arbitrarily or use the default settings and hope for the best. These subjective choices reflect each user's personal preferences and introduce variation that makes the analysis less reproducible.
Background subtraction works best when image background is uniform and consistent across the blot, with well-separated bands and lanes (1, 4, 5). But every blot is different: electrophoresis artifacts, uneven background, smears, spots, and variations in lane or band spacing make each one unique. Most background subtraction algorithms cannot adapt to these variations and will not produce accurate results every time (1, 2, 5). An ideal algorithm for Western blot analysis would respond and adapt to these variations, producing accurate, reproducible results from every blot.
Common background subtraction algorithms rely on either shape context or lane context. Shape-based methods use the area surrounding a shape or feature to estimate image background, while lane-based methods use information from the context of each lane. A new method that considers both the shape context and the lane context, called Adaptive Background Subtraction (ABS), eliminates user bias for more reproducible and less subjective estimation of image background.
Unlike conventional background subtraction methods, ABS evaluates the local area around each shape and its context within the lane to determine the background value. This patented, intelligent algorithm combines the advantages of shape-based and lane-based methods in a single calculation. User bias and the impact of subjective choices are minimized or eliminated, allowing different users to generate consistently reproducible results.
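The ABS calculation itself is patented and its details are not reproduced here. Purely to make the idea of "combining two contexts" concrete, the hypothetical sketch below blends a shape-context estimate (pixels bordering the band) with a lane-context estimate (non-band pixels from the same lane); the medians, the weighting scheme, and all names are illustrative assumptions, not the actual ABS algorithm.

```python
from statistics import median

def combined_background(border_pixels, lane_pixels, w_shape=0.5):
    """Hypothetical blend of shape-context and lane-context estimates.

    border_pixels: pixels immediately surrounding the band (shape context)
    lane_pixels:   non-band pixels from the same lane (lane context)
    w_shape:       illustrative weight given to the shape-context estimate
    """
    shape_estimate = median(border_pixels)  # shape-based component
    lane_estimate = median(lane_pixels)     # lane-based component
    return w_shape * shape_estimate + (1 - w_shape) * lane_estimate

# A bright spot (30) in the lane context barely moves the estimate:
print(combined_background([10, 11, 9, 10], [10, 30, 9, 11, 10, 10, 10]))  # → 10.0
```

Because both components are computed from robust statistics over different neighborhoods, a defect in one context (a smear in the lane, or a spot next to the band) does not dominate the final value.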
Reproducibility and Error
By evaluating a broader context for each shape, Empiria Studio uses a larger volume of data to estimate image background. A typical region of interest may contain only a few hundred pixels, and a conventional background subtraction method uses a similarly small number of pixels to estimate image background. In contrast, Empiria Studio analyzes several thousand pixels to estimate the background for that shape.
If examined at the pixel level, image background is actually quite uniform. The ABS algorithm looks for this redundancy and differentiates it from “non-context” areas like spots, smears, or other artifacts adjacent to the band or in the sample lane. With a large data set for comparison, the algorithm can easily identify and disregard these anomalies. It adapts to the individual image context, calculating image background with unmatched precision and reproducibility.
Background for Lane Quantification vs Band Quantification
In Empiria Studio, background is calculated differently when a lane is quantified compared to when a band is quantified.
Band: The background value for a band is calculated using the lane area around the band.
Lane: One background value is calculated for the membrane and subtracted from all the lanes that are being quantified on the image.
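The two modes above differ in the scope of the background estimate: per-band (local lane area) versus per-membrane (one value for all lanes). The sketch below is a hypothetical illustration of that distinction only; the median estimator, function names, and data are assumptions, not Empiria Studio's actual calculation.

```python
from statistics import median

def band_background_corrected(band_total, lane_pixels_near_band, band_area):
    """Band mode (sketch): background is estimated from the lane area
    around the band and subtracted from that band's total intensity."""
    per_pixel = median(lane_pixels_near_band)
    return band_total - per_pixel * band_area

def lane_backgrounds_corrected(membrane_pixels, lane_totals, lane_areas):
    """Lane mode (sketch): one per-pixel background is estimated for the
    whole membrane and subtracted from every quantified lane."""
    per_pixel = median(membrane_pixels)
    return [total - per_pixel * area
            for total, area in zip(lane_totals, lane_areas)]

print(band_background_corrected(500, [10, 10, 10], 10))          # → 400
print(lane_backgrounds_corrected([5, 5, 5, 5], [100, 200], [10, 20]))  # → [50.0, 100.0]
```

The practical consequence of the lane mode is that all lanes on an image share a single background value, so lane-to-lane comparisons are not affected by differences in local background estimation.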