Defective pixels: Why buy an image sensor with pixels that need “correcting”?
By Scott Smith
Today we’re discussing defective pixels and the “correction” feature. Defective pixels are inherent to all CMOS and CCD sensors, owing to silicon impurities and manufacturing effects. One can pay extra for fewer defects, but there is no escaping the phenomenon.
Impurities in silicon wafers and sensor production processes make it very difficult to obtain defect-free CCD or CMOS sensors. Sensor manufacturers have different grades of sensors based on the number of defective pixels. Those with few to none are classified as higher-grade and are much more expensive. Some specific applications, such as flat panel inspection, might require these higher-grade sensors. Most machine vision applications, however, do not require a “perfect” sensor and the standard grade sensors are a much more cost-effective solution.
Question: Hmm. My smartphone takes great pictures, with no apparent defect pixels, and it costs less than most industrial machine vision cameras. Why don’t I see any defect pixels in the pictures from my smartphone?
Answer: In fact, the sensor in your smartphone has many defective pixels. Through defect masking configured at build time and algorithms in the camera’s firmware, those defects are corrected, or more accurately smoothed over, by “nearest neighbor” substitution or interpolation, producing an image that appears defect-free.
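To make the idea concrete, here is a minimal sketch of nearest-neighbor substitution, assuming the camera already knows its defect coordinates (as smartphone firmware does from factory calibration). The function name and the choice of a median over the 8-connected neighborhood are illustrative assumptions, not any particular vendor’s algorithm.

```python
import numpy as np

def correct_defects(image, defect_coords):
    """Replace each known defective pixel with the median of its
    valid (in-bounds, non-defective) 8-connected neighbors."""
    corrected = image.astype(float).copy()
    defects = set(defect_coords)
    h, w = image.shape
    for r, c in defect_coords:
        neighbors = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < h and 0 <= nc < w \
                        and (nr, nc) not in defects:
                    neighbors.append(image[nr, nc])
        if neighbors:
            corrected[r, c] = np.median(neighbors)
    return corrected.astype(image.dtype)
```

Applied to a frame with a single hot pixel in a uniform region, the hot pixel simply takes on the value of its surroundings, which is exactly why the defect becomes invisible to the eye.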
Question: OK, so why don’t the sensors in industrial machine vision cameras get the same handling as in smartphones, sparing us this whole conversation?
Answer: For machine vision applications, the goal is generally not to create an image that is pleasing for a human to look at. Rather, it is to create an image that is interpretable by software so it can take some action, e.g. “good part” vs. “bad part”, or “steer 2 degrees right”. Depending on the sensor, lens, resolution, lighting, and application, a pixel value that is discontinuous with its neighbors may be either:
i. A defective pixel arising from the sensor, reading brighter or darker than it should relative to the number of photons that actually struck that sensor position, OR
ii. A genuine variation on the target surface
If it’s an instance of (ii), and one is inspecting LCD TVs or monitors for defects, for example, one wants to let the discontinuity pass from the camera to the software, in order to detect the candidate flaw and take appropriate action. In the stylized illustration below, suppose the LCD was emitting a nominally uniform yellow: for the two anomalies shown, it would be important to know whether they come from the LCD itself or from the camera sensor. In practice one tries to design applications so that each real-world feature is “seen by” several pixels, which permits defect pixel correction, adds information, and improves efficacy, but the underlying point should be clear.
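As a rough sketch of what the inspection software receives when the discontinuity is allowed to pass through, the snippet below flags pixels that stand out from their local neighborhood. The function name and threshold are illustrative assumptions; note that this test alone cannot tell case (i) from case (ii) — that distinction comes from comparing the flagged coordinates against the sensor’s known defect map, or across multiple frames or part positions.

```python
import numpy as np

def find_outlier_pixels(image, threshold=50):
    """Flag pixels whose value differs from the median of their
    3x3 neighborhood by more than `threshold`.
    Returns an array of (row, col) coordinates."""
    img = image.astype(float)
    h, w = img.shape
    padded = np.pad(img, 1, mode='edge')
    # stack the 8 neighbors of every pixel into one array
    neighbors = np.stack([
        padded[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    ])
    local_median = np.median(neighbors, axis=0)
    return np.argwhere(np.abs(img - local_median) > threshold)
```

On a nominally uniform frame, the returned coordinates are the candidate anomalies — whether dead subpixels on the panel under test or defects in the camera’s own sensor.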
So machine vision application designers usually prefer to understand exactly what the naked sensor is generating, and to have the option of engaging pixel correction features under programmer control. Perhaps an analogy to the auto industry is appropriate: self-parking cars are now available, but as a driver I want to decide when to use that feature, whether to keep my skills sharp by parking manually sometimes, or because the situation is inappropriate for automated parking. Give me options, but don’t deny me the possibility of full control if and when I want it.
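In code, “under programmer control” simply means correction is an explicit, opt-in stage rather than something baked invisibly into the sensor readout. The sketch below is hypothetical: the `capture` callable and `defect_map` attribute are assumptions for illustration, not any real vendor API.

```python
import numpy as np

class CameraPipeline:
    """Illustrative sketch of opt-in defect correction.
    `capture` is any callable returning a 2-D frame; `defect_map`
    is a list of known (row, col) defect coordinates."""

    def __init__(self, capture, defect_map, correction_enabled=False):
        self.capture = capture
        self.defect_map = defect_map
        self.correction_enabled = correction_enabled

    def grab(self):
        frame = self.capture()          # untouched sensor data
        if not self.correction_enabled:
            return frame                # inspect the naked sensor output
        out = frame.copy()
        for r, c in self.defect_map:
            # substitute the median of the in-bounds 3x3 neighborhood
            block = frame[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            out[r, c] = np.median(block)
        return out
```

An inspection application can leave `correction_enabled` off while characterizing the sensor or hunting for real surface flaws, then switch it on for stages where smoothed imagery is acceptable — the self-parking decision, made per situation by the programmer.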
For further help on this topic, please contact us about your application goals, and we’ll be happy to recommend solutions aligned with your needs.