Defective pixels

Why buy an image sensor with pixels that need “correcting”?

Defect pixels are inherent to all CMOS and CCD sensors, due to silicon impurities and manufacturing effects. One can pay extra for fewer defects, but there is no escaping the phenomenon.

Defect pixel correction in machine vision
Impurities in silicon wafers and sensor production processes make it very difficult to obtain defect-free CCD or CMOS sensors. Sensor manufacturers offer different grades of sensors based on the number of defective pixels; those with few to no defects are classified as higher-grade and are much more expensive. Some specific applications, such as flat panel inspection, might require these higher-grade sensors. Most machine vision applications, however, do not require a “perfect” sensor, and standard-grade sensors are a much more cost-effective solution.

  • Question: Hmm. My smartphone takes great pictures, with no apparent defect pixels, and it costs less than most industrial machine vision cameras. Why don’t I see any defect pixels in the pictures from my smartphone?
  • Answer: In fact, the sensor in your smartphone has many defect pixels. Through masking configured at build time and algorithms in the camera’s firmware, the defects are corrected, or more accurately smoothed over, by “near neighbor” substitution/interpolation to generate an image that appears defect-free (a simple version of this kind of correction is sketched in the code example further below).
  • Question: OK, so why don’t industrial machine vision camera sensors get the same handling as in smartphones, sparing us this whole conversation?
  • Answer: For machine vision applications, the goal is generally not to create an image that’s pleasing for a human to look at. Rather, it’s to create an image that software can interpret in order to take some action, e.g. “good part” vs. “bad part”, or “steer 2 degrees right”. Depending on the sensor, lens, resolution, lighting, and application, a pixel value that is discontinuous with its neighbors might be either of:

a) A defect pixel arising from the sensor, i.e. one that is brighter or darker than it should be relative to the number of photons that actually struck that sensor position, OR

b) A genuine variance on the target surface

If it’s an instance of (b), and one is inspecting LCD TVs or monitors for defects, for example, one wants the discontinuity to pass from the camera to the software so that the candidate flaw can be detected and acted upon. In the stylized illustration below, suppose the LCD was emitting a nominally uniform yellow: for the two anomalies, it would be important to know whether they come from the LCD itself or from the camera sensor. In practice one tries to design applications so that each real-world feature is “seen by” several pixels, which permits defect pixel correction, adds information, and raises efficacy, but the underlying point is hopefully clear.
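To make the distinction between (a) and (b) concrete, here is a minimal sketch in Python/NumPy (not Allied Vision code; the function name, the 3x3 background estimate, and the threshold value are illustrative assumptions). Given a defect pixel map known for the camera, outliers that coincide with the map are attributed to the sensor, while the remaining outliers are treated as candidate flaws on the inspected surface:

import numpy as np

def classify_outliers(image, defect_map, threshold=40):
    """Separate known sensor defect pixels from candidate real surface flaws.

    image      : 2D grayscale frame, raw and uncorrected
    defect_map : boolean array, True where the sensor is known to be defective
    threshold  : illustrative intensity deviation that flags a pixel as an outlier
    """
    # Estimate the local background with a simple 3x3 box mean
    # (a median or a center-excluding mean would be more robust; kept simple here).
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    local_mean = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0

    outliers = np.abs(image - local_mean) > threshold
    sensor_defects = outliers & defect_map      # case (a): known bad pixels
    candidate_flaws = outliers & ~defect_map    # case (b): genuine surface variance
    return sensor_defects, candidate_flaws

Everything flagged as a candidate flaw would then be passed on to the inspection logic, while the known sensor defects can be corrected or ignored as the application sees fit.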

So machine vision application designers usually prefer to understand exactly what the naked sensor is generating, and to have the option of engaging pixel correction features under programmer control. Perhaps an analogy to the auto industry is appropriate: self-parking cars are now available, but as a driver I want to decide when to use that feature, whether to keep my skills sharp by parking manually sometimes, or because the situation is inappropriate for automated parking. Give me options, but don’t deny me full control if and when I want it.
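In the same hedged spirit, here is a sketch of what “pixel correction under programmer control” can look like on the host side, again in plain Python/NumPy rather than any specific camera SDK. The application decides whether to pass the raw frame through untouched or to replace the pixels from a known defect list with a value interpolated from their neighbors; the median-of-neighbors rule is just one reasonable choice of “near neighbor” substitution:

import numpy as np

def correct_defect_pixels(image, defect_list, enable_correction=True):
    """Optionally replace listed defect pixels with the median of their 3x3 neighbors.

    image             : 2D grayscale frame (raw sensor values)
    defect_list       : iterable of (row, col) coordinates of known defect pixels
    enable_correction : leave False to work with the naked sensor output
    """
    if not enable_correction:
        return image                      # programmer keeps full control

    defects = set(defect_list)
    corrected = image.copy()
    h, w = image.shape
    for r, c in defects:
        # Gather the in-bounds 3x3 neighbors, skipping the pixel itself and other
        # listed defects so that a bad value never contributes to the substitute.
        neighbors = [
            image[rr, cc]
            for rr in range(max(r - 1, 0), min(r + 2, h))
            for cc in range(max(c - 1, 0), min(c + 2, w))
            if (rr, cc) != (r, c) and (rr, cc) not in defects
        ]
        if neighbors:
            corrected[r, c] = int(np.median(neighbors))
    return corrected

In practice the defect list would come from the camera itself rather than being hard-coded, and the enable_correction switch stands in for whatever control the camera or SDK actually exposes.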


Learn more
The key takeaway is that defect pixels are a fact of life, and there are effective ways to deal with them. To learn more about the technical details of pixel correction technologies and methods with Allied Vision cameras, please download our application note on the “Defective Pixel List Management Tool” today.

For further help on this topic, please contact us about your application goals, and we’ll be happy to recommend solutions aligned with your needs.
