Friday, February 09, 2007

On Image stacking

This is the first article under the new tag "image processing", and I hope you will like the series. The first topic is stacking. Today a newbie asked in an Internet forum what image stacking is, and the following came to my mind:-

Image stacking is like what we did in school: when we tried to measure something, we took several independent measurements, summed them up, and used the average as the final result. You might also want to kick out the highest and the lowest values before taking the average.
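To put the analogy in code, here is a minimal sketch in Python with NumPy; the measurement values are made up for illustration:

    import numpy as np

    # Five independent "measurements" of the same quantity (hypothetical values)
    measurements = np.array([9.8, 10.1, 10.0, 9.7, 12.5])

    plain_average = measurements.mean()

    # "Kick out the highest and the lowest" before averaging, i.e. a trimmed mean
    trimmed = np.sort(measurements)[1:-1]
    trimmed_average = trimmed.mean()

    print(plain_average)    # 10.42 - dragged upward by the outlier 12.5
    print(trimmed_average)  # 9.97  - the outlier is rejected

That trimmed average is essentially what stacking software does when it applies min/max rejection instead of a plain average.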

Intuitive and sound enough, I suppose. I hope that clears up the misconception that a stacked image is unscientific, or somehow worse than a single snapshot. You wouldn't think a single measurement is more accurate, or closer to reality, right?

Why stacking?

* The primary reason is to reduce noise; the noise level is inversely proportional to the square root of the number of frames stacked. For example, with a 4-frame stack, the amount of noise is cut by half (see the sketch after this list).

* Stacking averages out unwanted signal, for example, dust on the CCD. The same speck of dust stays stationary on the CCD, but your target may drift, so the position of the dust relative to your target changes over time. Once the individual frames are aligned on the target and averaged, the dust is effectively removed.

* Seeing distorts your target in a random manner. By stacking a number of randomly distorted frames, you end up with an image that is statistically undistorted. The effect is most notable when the number of frames is large, and the result approaches reality as more frames are added.
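To make the square-root claim concrete, here is a minimal sketch in Python with NumPy: it simulates stacks of noisy frames of a flat scene and prints the residual noise, which should fall roughly as one over the square root of the frame count. The scene level and noise figure are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)

    # A flat "true" scene plus per-frame Gaussian noise (hypothetical values)
    true_scene = np.full((100, 100), 100.0)
    noise_sigma = 10.0

    for n_frames in (1, 4, 16):
        frames = true_scene + rng.normal(0.0, noise_sigma, (n_frames, 100, 100))
        stacked = frames.mean(axis=0)   # the stack: a per-pixel average
        # Residual noise should be close to noise_sigma / sqrt(n_frames)
        print(n_frames, round(stacked.std(), 2))

With 4 frames the printed noise is about 5.0, half of the single-frame 10.0, matching the first point above.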

Why not stacking?

* Targets of a highly dynamic nature are not suitable for stacking. "Highly dynamic" has to be defined relative to the image scale in use: if the image scale is so small that the change amounts to less than one pixel, the change will not show up in the final image at all, since it is below the detectable limit. So the question is whether your exposure window keeps the change over time within that limit; a rough check follows below.
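As a back-of-the-envelope version of that check, here is a sketch in Python; the pixel scale and drift rate are assumed numbers, not measurements:

    # Is the capture window short enough that the target's change
    # stays under one pixel? All figures here are hypothetical.
    pixel_scale = 0.5   # arcsec per pixel of the imaging setup
    drift_rate = 0.01   # arcsec per second of change on the target

    max_window = pixel_scale / drift_rate   # seconds until the change spans 1 px
    print(max_window)   # 50.0 - keep the whole capture run under ~50 seconds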

Other points to be noted:

* Stacking can reduce the amount of noise, but it will not increase the amount of signal. Long exposures are still required to gather enough signal; if the signal is insufficient, suppressing noise by stacking will not deliver more of it (see the sketch after this list).

* Stacking cannot replace cooling; rather, the two can be done at the same time.

* A single frame is by no means superior to a stack in its capability to reproduce reality, as long as the change in the target stays within one pixel. Therefore, it is critical to determine the capture time window and keep it within that limit.
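A small sketch in Python with NumPy to illustrate the first point above: stacking a faint, noisy target leaves the signal level untouched and only cleans it up. The signal and noise figures are made up:

    import numpy as np

    rng = np.random.default_rng(1)

    signal = 5.0         # per-frame signal level (hypothetical faint target)
    noise_sigma = 10.0   # per-frame noise, larger than the signal itself

    frames = signal + rng.normal(0.0, noise_sigma, (64, 100, 100))
    stacked = frames.mean(axis=0)

    print(round(stacked.mean(), 2))  # ~5.0: stacking does not add signal
    print(round(stacked.std(), 2))   # ~1.25: noise down by sqrt(64) = 8

If you need more signal, you expose longer per frame; stacking then cleans up what you have gathered.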

More on image processing later.
