I have no idea if this will yield a response, but here goes;
I need your help to learn how to apply flats and darks to post-processing.
Wait... What is he smoking?
At first glance, it sounds like I'm a misinformed newbie; applying flats and darks in post-processing!?
But that's exactly what you will be able to do, if this research and prototype bear out.
If you're a user, you will already be familiar with Tracking.
Tracking starts as soon as you import your linear dataset, fresh from the stacker. It ends when you've processed your image, at which point it performs the most targeted noise reduction possible.
In between, Tracking data is used to greatly enhance the results of many other modules; sharpening, deconvolution and more all rely on the data and statistics Tracking mines to obtain more accurate results while suppressing artefacts and grain development.
The Tracking engine has been steadily refined over the past 7 years since its introduction in 1.3. It is now - hopefully - mature enough to make the leap into analyzing stacker behavior as well.
Calibration and stacker behavior compensation
The current problem in post-processing is that an algorithm, say deconvolution, makes image-wide assumptions. Traditional software bases these assumptions on the immediate input image.
The problem is that calibration frames can greatly impact the Signal-to-Noise Ratio locally in a dataset. The signal "boost" applied in response to, for example, vignetting around the edges, is not "free"; the boost that evens out the lighting also boosts the noise. That means an increased noise level in previously darker areas, relative to (already) fully lit areas of the image.
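To illustrate (a hypothetical sketch in Python, not the actual software's code): dividing a vignetted light by its flat evens out the illumination, but it amplifies the shot noise in the boosted areas by the same factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a flat with vignetting (bright centre, dim
# edges) and a light frame carrying a uniform true signal plus shot noise.
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
flat = 1.0 - 0.5 * (r / r.max()) ** 2   # corners receive ~50% of the light

true_signal = 1000.0
light = rng.poisson(true_signal * flat).astype(float)

# Flat-field division "evens out" the lighting...
calibrated = light / flat

# ...but the boosted edge pixels have their noise amplified by the same
# factor, so the noise level now varies across the calibrated frame.
centre = calibrated[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20]
corner = calibrated[:40, :40]
print(f"centre noise (std): {centre.std():.1f}")
print(f"corner noise (std): {corner.std():.1f}")
```

The calibrated frame has a flat mean everywhere, but the standard deviation in the corners is noticeably higher than in the centre - exactly the spatially varying noise described above.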
Similarly, some flat normalization schemes (as found in some stackers) actually reduce brightness.
The main point is that these operations cause signal quality - and noise - to vary across different parts of your dataset, and the factor by which they vary can be quite significant.
Without knowledge of these varying noise levels, all an algorithm or filter can do is treat every pixel the same across the whole image. The best it can do is choose a good "middle ground" (or, worse, put the onus on you if it's not smart enough to determine this objectively). In practice, this means over-correction, under-correction and - usually - a combination of both in different parts of your image.
In the case of a noise reduction scheme, this means that some areas are noise-reduced too much, while others may not be noise-reduced enough.
In the case of a deconvolution algorithm, this means that some areas develop artefacts, while in others it may leave recoverable detail on the table.
And so on, and so forth.
Calibration Tracking aims to fix this.
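In spirit, the fix might look something like this toy sketch (Python, purely illustrative; the function names are mine, not from any actual product): a denoiser that consults a per-pixel noise map can vary its strength locally, instead of applying one image-wide setting.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple box blur with edge padding (stand-in for any smoother)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def adaptive_denoise(img, noise_map, max_noise):
    """Blend each pixel towards a smoothed estimate, with per-pixel
    strength driven by a local noise-level map rather than one global
    setting."""
    img = np.asarray(img, dtype=float)
    strength = np.clip(noise_map / max_noise, 0.0, 1.0)
    return (1.0 - strength) * img + strength * box_blur(img)
```

Where the noise map reads zero, the image passes through untouched; where it reaches the maximum, pixels are fully replaced by the smoothed estimate - and everything in between is handled proportionally, per pixel.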
What does it look like?
The current prototype produces improvements, as seen in this 400% crop:
No luminance masks, no selective masking, no manual intervention (the same default settings were used for all versions), no crutches - just improved signal evolution information is driving this. The mined data and statistics - from the flats and lights, to the stacker's treatment, to the chosen stretch - do all the "talking" and drive deconvolution and denoising.
In this example it successfully compensated for a subduing of the signal by the stacker.
Shown are:
- "naive" processing (no deconvolution, no denoise)
- Standard Tracked deconvolution
- Standard Tracked deconvolution + Standard Tracked denoise
- Enhanced Tracked deconvolution + Enhanced Tracked denoise
As you can - hopefully - see, the early results are encouraging, but a lot more testing needs to be done.
Specifically, I need many more datasets to be able to test many different stackers, circumstances and gear combinations.
If you would like to participate and help test this technique, please share with me two stacks:
- One stacked as normal, calibrated to the best of your abilities.
- One with just the lights stacked, no other calibration done.
The software will compare the two stacks and construct a SNR discrepancy model that goes on to drive all other post-processing algorithms.
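Presumably along these lines (my own sketch with hypothetical names; the real modelling is surely more sophisticated): dividing the calibrated stack by the lights-only stack recovers a per-pixel gain map, and since any signal boost applied during calibration boosts the noise by roughly the same factor, that map doubles as a relative noise model.

```python
import numpy as np

def snr_discrepancy_model(calibrated, lights_only, eps=1e-9):
    """Per-pixel relative-noise estimate from two stacks of the same data.

    Hypothetical sketch: the ratio of the two stacks approximates the gain
    that calibration applied at each pixel; noise scales with that gain,
    so normalising by the median yields a relative noise map where 1.0
    means "typical" and higher values mean noisier-than-typical.
    """
    calibrated = np.asarray(calibrated, dtype=float)
    lights_only = np.asarray(lights_only, dtype=float)
    gain = calibrated / (lights_only + eps)
    return gain / np.median(gain)
```

A downstream algorithm could then weight its aggressiveness by this map, treating heavily boosted (noisier) regions more conservatively than untouched ones.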
Thank you for reading this and thank you in advance for any help you can provide!
Clear skies!