Image interpolation occurs at some stage in virtually every digital photo, whether during Bayer demosaicing or during photo enlargement.
It happens when you resize or remap your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels.
Remapping, on the other hand, can arise in a wide variety of scenarios: correcting lens distortion, changing perspective, or rotating an image.
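Remapping can be pictured as follows: for each pixel in the output grid, an inverse transform finds the corresponding (usually fractional) coordinate in the source image, and an interpolation method samples a value there. Below is a minimal pure-Python sketch of rotation using this inverse-mapping approach with nearest-neighbour sampling; the function name and grid-of-lists representation are illustrative, not from any particular library.

```python
import math

def rotate_image(src, angle_deg):
    """Rotate a 2-D grid of pixel values about its centre.

    For each destination pixel, apply the inverse rotation to find the
    source coordinate, then sample with nearest-neighbour interpolation.
    """
    h, w = len(src), len(src[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse rotation: where in the source does (x, y) come from?
            sx = cos_a * (x - cx) + sin_a * (y - cy) + cx
            sy = -sin_a * (x - cx) + cos_a * (y - cy) + cy
            ix, iy = round(sx), round(sy)  # nearest-neighbour sample
            if 0 <= ix < w and 0 <= iy < h:
                dst[y][x] = src[iy][ix]
    return dst
```

Swapping the last sampling step for a weighted blend of neighbouring pixels is all it takes to upgrade this from nearest-neighbour to bilinear or bicubic remapping.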
Interpolation works by using known data to estimate values at unknown points. Because these estimates are only approximations, an image loses some quality each time interpolation is performed. Even when the same resize or remap is applied, the results can vary significantly depending on the interpolation algorithm.
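The idea of predicting an unknown value from known neighbours can be shown in one dimension. Below is a minimal sketch of linear interpolation between two known samples; the function name `lerp` is illustrative.

```python
def lerp(x0, y0, x1, y1, x):
    """Estimate y at position x from two known samples (x0, y0), (x1, y1)."""
    t = (x - x0) / (x1 - x0)  # fractional distance between the samples
    return y0 + t * (y1 - y0)

# Known pixel values 10 and 30 at positions 0 and 1;
# estimate the value a quarter of the way between them.
print(lerp(0, 10, 1, 30, 0.25))  # 15.0
```

Image interpolation extends this same weighted-average idea to two dimensions, which is why the estimated values can never add detail that was not in the original samples.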
Types of Interpolation Algorithms
Common interpolation algorithms can be divided into two categories: adaptive and non-adaptive.
- Adaptive methods vary depending on what they are interpolating (sharp edges vs. smooth texture), whereas non-adaptive methods treat all pixels equally. Adaptive algorithms include many proprietary techniques in licensed software such as Qimage, PhotoZoom Pro, Genuine Fractals, and others. Most of these apply a different version of their algorithm, on a pixel-by-pixel basis, when they detect the presence of an edge. Because they are designed primarily to maximize artifact-free detail in enlarged photos, some cannot be used to distort or rotate an image.
- Non-adaptive algorithms include nearest neighbor, bilinear, bicubic, spline, sinc, Lanczos, and others. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but at the cost of much longer processing time. These algorithms can be used to both distort and resize an image.
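The difference between the simplest non-adaptive methods comes down to how many neighbours they consult: nearest neighbor copies the single closest pixel, while bilinear blends the four surrounding pixels by their fractional distances. Below is a minimal pure-Python sketch of bilinear resizing under that description; the function name and grid-of-lists representation are assumptions for illustration.

```python
def resize_bilinear(src, new_w, new_h):
    """Resize a 2-D grid using bilinear interpolation (4 adjacent pixels),
    versus nearest neighbour, which would use only 1."""
    h, w = len(src), len(src[0])
    dst = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        for x in range(new_w):
            # Map the destination pixel back to source coordinates.
            sx = x * (w - 1) / (new_w - 1) if new_w > 1 else 0
            sy = y * (h - 1) / (new_h - 1) if new_h > 1 else 0
            x0, y0 = int(sx), int(sy)
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = sx - x0, sy - y0
            # Blend the 4 neighbours by horizontal, then vertical, weights.
            top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx
            bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx
            dst[y][x] = top * (1 - fy) + bot * fy
    return dst
```

Bicubic, spline, and Lanczos methods follow the same pattern but draw on a larger neighbourhood (16 or more pixels) with more elaborate weighting functions, which is where the extra accuracy and the extra processing time both come from.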