The representation of a digital color image requires knowledge of at least three color components for each pixel. Usually red, green, and blue samples (that is, the RGB color space) are used.
Therefore, the acquisition of a color image in a digital camera requires three sensors per pixel, each of them sensitive to a particular range of wavelengths. Moreover, the positioning of the sensors is not straightforward. The sensors can be placed on separate planes, with the light entering the camera split at each pixel and projected onto each spectral sensor, but this solution is expensive and introduces phase delays between the components. Another approach is to stack the color sensors on top of one another, as done in the Foveon cameras, but the exposure times are high because the light has to penetrate three levels of silicon. Therefore, most digital cameras use a single-sensor technique, with a grid of different color filters called a Color Filter Array (CFA). The most common arrangement for the CFA is the Bayer pattern, named after its inventor Bryce Bayer, but other schemes have also been considered.
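To make the single-sensor acquisition concrete, the following sketch simulates capture through a Bayer CFA: starting from a full RGB image, each pixel retains only the component selected by its filter. An RGGB layout (R and G on even rows, G and B on odd rows) is assumed here for illustration; other phase arrangements of the Bayer pattern are equally common.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate single-sensor capture through a Bayer CFA.

    Assumes an RGGB layout (an illustrative choice):
        row 0: R G R G ...
        row 1: G B G B ...
    rgb: (H, W, 3) array; returns an (H, W) mosaic where each pixel
    keeps only the color component its filter passes.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

Note that the Bayer pattern samples green at twice the rate of red and blue, reflecting the higher sensitivity of the human visual system to the green band.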
From the data captured by the sensor, a full-resolution color image has to be reconstructed using an interpolation procedure called demosaicking. This reconstruction requires ad hoc techniques in order to produce high-quality images, since classical image interpolation approaches are not able to jointly exploit the information carried by the three color components. Therefore, many demosaicking techniques have been proposed, offering different trade-offs between computational complexity and reconstruction quality.
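As a baseline illustration of demosaicking, the sketch below implements plain bilinear interpolation: each color plane is populated with its known CFA samples and the missing values are filled by averaging the nearest neighbors, which a small convolution kernel expresses compactly. The RGGB layout and the helper names are assumptions for this example; bilinear interpolation is precisely the kind of channel-independent method the text describes as suboptimal, since it ignores inter-channel correlation and produces color artifacts near edges.

```python
import numpy as np

def conv2(img, k):
    """2-D convolution with zero padding (small helper for this sketch)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bilinear_demosaic(mosaic):
    """Bilinear demosaicking of an RGGB Bayer mosaic (assumed layout).

    mosaic: (H, W) CFA samples; returns an (H, W, 3) RGB estimate.
    """
    h, w = mosaic.shape
    r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # known R samples
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # known G samples
    g[1::2, 0::2] = mosaic[1::2, 0::2]
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # known B samples
    # Bilinear kernels: average the 2 or 4 nearest known samples,
    # while leaving the known samples themselves unchanged.
    kg = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    krb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.stack([conv2(r, krb), conv2(g, kg), conv2(b, krb)], axis=-1)
```

On a uniform region the interpolation is exact away from the image border; it is near edges and fine textures that such per-channel interpolation fails, which motivates the joint multi-channel techniques the literature proposes.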