R&D'ing new image processing technique; crowdsourcing datasets!

R&D'ing new image processing technique; crowdsourcing datasets!

#1

Post by startoolsastro »


Hi all,

I have no idea if this will yield a response, but here goes;

I need your help to learn how to apply flats and darks to post-processing.

Wait... What is he smoking?

At first glance, it sounds like I'm a misinformed newbie; applying flats and darks to post-processing!?
But that's exactly what you will be able to do, if this research and prototype bears out.

If you're a StarTools user you'll be familiar with signal evolution Tracking. If not, then, in a nutshell, it is ST's unique ability to monitor signal (and its noise component!) evolution through time. The benefits of mining this data are objectively cleaner end results with more detail. If you need a refresher or would like to learn more, you can do so here.

Tracking starts as soon as you import your linear dataset, fresh from the stacker. It ends when you've finished processing your image, at which point it performs the most targeted noise reduction possible.
In between, Tracking data is used to greatly enhance the results of many other modules; sharpening, deconvolution and more all rely on the data and stats Tracking mines, to obtain more accurate results, while suppressing artefacts and grain development.

The Tracking engine has been steadily perfected over the past 7 years since its introduction in 1.3. It is now mature enough to - hopefully - make the leap into analyzing stacker behavior (whether DSS, PI, APP, Nebulosity, Siril, RegIm or others) in response to calibration frames. The goal is to make Tracking able to trace signal evolution history all the way back to a photon exciting an electron in the sensor.

Calibration and stacker behavior compensation

The current problem in post-processing is that an algorithm, say deconvolution, makes image-wide assumptions. Traditional software bases these assumptions on the immediate input image alone. StarTools already greatly improves on this by basing these assumptions on signal evolution since output from the stacker, rather than just an immediate input image.

The problem that remains is that calibration frames can greatly impact the Signal-to-Noise Ratio locally in a dataset. The signal "boost" in response to, for example, vignetting around the edges is not "free"; the boost that evens out the lighting also boosts the noise. That means an increased noise level in the previously darker areas, relative to the (already) fully lit areas of the image.

Similarly, some flat normalization schemes (such as DSS's) actually reduce brightness to match overall brightness levels. That means a decreased noise level, relative to other areas of the image. Really, these are just two sides of the same coin.
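To make the vignetting example concrete, here is a toy numpy sketch (not ST's actual engine; the 50% vignetting profile and 1000 e- signal level are made up for illustration) showing that flat-field division evens out brightness, but not the noise floor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat: 50% vignetting on the left, full illumination on the right.
flat = np.tile(np.linspace(0.5, 1.0, 100), (100, 1))

# A uniform 1000 e- scene; shot noise applies to what the sensor actually sees.
light = rng.poisson(1000.0 * flat).astype(float)

# Flat-field correction evens out the brightness...
calibrated = light / flat
print(calibrated[:, :5].mean(), calibrated[:, -5:].mean())  # both ~1000

# ...but the noise floor now varies across the frame.
print(calibrated[:, :5].std())   # vignetted side: ~sqrt(500)/0.5 ≈ 45
print(calibrated[:, -5:].std())  # fully lit side: ~sqrt(1000)   ≈ 32
```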

The main thing is that they cause signal quality - and noise - to vary in different parts of your dataset. The factor by which they vary can be quite significant.

Without knowledge about these varying noise levels, all an algorithm or filter can do is treat all pixels the same across the whole image. The best it can do is choose a good "middle ground" (or, worse, put the onus on you if it's not smart enough to objectively do this). In practice, this means overcorrection, under-correction and - usually - a combination of both in parts of your image.

In the case of a noise reduction scheme, this means that some areas are noise-reduced too much, while others may not be noise-reduced enough.
In the case of a deconvolution algorithm, this means that some areas develop artefacts, while in others it may leave recoverable detail on the table.
And so on, and so forth.
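To illustrate the difference, here is a hypothetical numpy sketch of a simple soft-threshold denoiser, run once with a single "middle ground" threshold and once with a threshold that tracks a per-pixel noise map (the noise map and the 1.5x factor are invented for illustration; this is not ST's denoise algorithm):

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink values towards zero by t - the core of many simple denoisers."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
h, w = 64, 64
half = w // 2

# Per-pixel noise map: the left half is twice as noisy as the right
# (e.g. a vignetted region after flat-field correction).
sigma_map = np.where(np.arange(w) < half, 2.0, 1.0)[None, :] * np.ones((h, w))
frame = rng.normal(0.0, sigma_map)

# "Middle ground" global threshold vs. a threshold that tracks the noise map.
uniform = soft_threshold(frame, 1.5 * sigma_map.mean())
adaptive = soft_threshold(frame, 1.5 * sigma_map)

# The global threshold under-corrects the noisy half (it leaves more residual
# noise there than the adaptive version) and over-corrects the clean half
# (where, in real data, it would destroy faint recoverable detail).
print(np.abs(uniform[:, :half]).mean(), np.abs(adaptive[:, :half]).mean())
```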

Calibration Tracking aims to fix this.

What does it look like?

The current prototype produces improvements, as seen in this 400% crop;

Image

No luminance masks, no selective masking, no manual intervention (the same default settings were used for all versions), no crutches - just pure improved signal evolution information is driving this. The mined data and stats - from the flats and lights, to the stacker's treatment, to the chosen stretch - do all the "talking" and drive/direct deconvolution and denoise.

In this example it successfully compensated for a subduing of the signal in the core by DSS, a result of its normalisation during flat frame calibration. This allowed decon to be more aggressive and denoise to back off somewhat, while keeping noise and artefacts perfectly stable elsewhere (outside the crop, not shown).

Shown are;
  • "naive" processing (no deconvolution, no denoise)
  • Standard Tracked deconvolution
  • Standard Tracked deconvolution + Standard Tracked denoise
  • Enhanced Tracked deconvolution + Enhanced Tracked denoise

How can you help?

As you can - hopefully - see, the early results are encouraging, but a lot more testing needs to be done.
Specifically, I need many more datasets to be able to test many different stackers, circumstances and gear combinations.

If you would like to participate and help test this technique, please share with me two stacks;
  • One stacked as normal, calibrated to the best of your abilities.
  • One with just the lights stacked, no other calibration done.
All settings should remain identical between the two stacks.

The software will compare the two stacks and construct a SNR discrepancy model that goes on to drive all other post-processing algorithms.
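For the curious, such a comparison could look something along these lines (a deliberately crude numpy sketch; the block size, the blockwise std estimator and the synthetic stacks are all invented for illustration, and not the actual implementation):

```python
import numpy as np

def local_noise(img, block=16):
    """Crude per-block noise estimate: std of the image within each block.
    (A real implementation would high-pass first to exclude object detail.)"""
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i*block:(i+1)*block, j*block:(j+1)*block].std()
    return out

rng = np.random.default_rng(2)

# Two hypothetical 128x128 stacks of the same flat scene: the lights-only
# stack has uniform noise; the calibrated stack has extra noise on the left
# (the cost of flat-field correction of vignetting).
uncal = rng.normal(100.0, 1.0, (128, 128))
boost = np.where(np.arange(128) < 64, 2.0, 1.0)[None, :]
cal = rng.normal(100.0, 1.0 * boost, (128, 128))

# The discrepancy model: per-block noise ratio between the two stacks.
model = local_noise(cal) / local_noise(uncal)
print(model[:, :4].mean())   # ~2 where calibration boosted the noise
print(model[:, -4:].mean())  # ~1 where it did not
```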

Thank you for reading this and thank you in advance for any help you can provide!

Clear skies!
Ivo Jager - creator of StarTools
Re: R&D'ing new image processing technique; crowdsourcing datasets!

#2

Post by bladekeeper »


Hello Ivo!

I can throw a couple of stacks your way. What is the best way for you? Dropbox links?
Bryan
Scopes: Apertura AD12 f/5; Celestron C6-R f/8; ES AR127 f/6.4; Stellarvue SV102T f/7; iOptron MC90 f/13.3; Orion ST80A f/5; ES ED80 f/6; Celestron Premium 80 f/11.4; Celestron C80 f/11.4; Unitron Model 142 f/16; Meade NG60 f/10
Mounts: Celestron AVX; Bresser EXOS-2; ES Twilight I; ES Twilight II; iOptron Cube-G; AZ3/wood tripod; Vixen Polaris
Binoculars: Pentax PCF WP II 10×50, Bresser Corvette 10×50, Bresser Hunter 16×50 and 8×40, Garrett Gemini 12×60 LW, Gordon 10×50, Apogee 20×100


Re: R&D'ing new image processing technique; crowdsourcing datasets!

#3

Post by startoolsastro »


bladekeeper wrote: Tue Jul 30, 2019 3:31 am Hello Ivo!

I can throw a couple of stacks your way. What is the best way for you? Dropbox links?
That's amazing! Dropbox would be fantastic!

EDIT: And it'd be very useful to know what stacking solution you used as well. Thank you!
Ivo Jager - creator of StarTools

Re: R&D'ing new image processing technique; crowdsourcing datasets!

#4

Post by JayTee »


Hi Ivo,

If I read your post correctly, are you saying that we will no longer need to use Deep Sky Stacker to stack images (lights, darks, and bias) to produce an image that we can further post-process?

Cheers,
JT
∞ Primary Scopes: #1: Celestron CPC1100 #2: 8" f/7.5 Dob #3: CR150HD f/8 6" frac
∞ AP Scopes: #1: TPO 6" f/9 RC #2: ES 102 f/7 APO #3: ES 80mm f/6 APO
∞ G&G Scopes: #1: Meade 102mm f/7.8 #2: Bresser 102mm f/4.5
∞ Guide Scopes: 70 & 80mm fracs -- The El Cheapo Bros.
∞ Mounts: iOptron CEM70AG, SW EQ6, Celestron AVX, SLT & GT (Alt-Az), Meade DS2000
∞ Cameras: #1: ZWO ASI294MC Pro #2: 662MC #3: 120MC, Canon T3i, Orion SSAG, WYZE Cam3
∞ Binos: 10X50,11X70,15X70, 25X100
∞ EPs: ES 2": 21mm 100° & 30mm 82° Pentax XW: 7, 10, 14, & 20mm 70°

Searching the skies since 1966. "I never met a scope I didn't want to keep."


Re: R&D'ing new image processing technique; crowdsourcing datasets!

#5

Post by startoolsastro »


JayTee wrote: Tue Jul 30, 2019 8:29 am Hi Ivo,

If I read your post correctly, are you saying that we will no longer need to use Deep Sky Stacker to stack images (lights, darks, and bias) to produce an image that we can further post-process?

Cheers,
JT
Hi JT,

No, not quite! :) We will very much still require stackers. The difference is that there will now be a way for those bias/dark/flat frames to also be used during post-processing, to the benefit of your image.
Stackers use these calibration frames to correct imperfections (hot pixels, uneven lighting, etc.). However, some of these corrections exacerbate noise. I'm now working on a way to estimate exactly how this noise was exacerbated, so it can be taken into account during post-processing by algorithms that work better if they have a handle on that noise (such as deconvolution and noise reduction).
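As a concrete example of such an exacerbation, here is a toy numpy sketch of master dark subtraction (all the numbers - 3 e- read noise, 16 dark subs, a 20 e- dark current offset - are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# A light frame and a master dark, each pixel with ~3 e- read noise;
# the master dark averages 16 subs, so its noise is 3/sqrt(16) e-.
read = 3.0
light = 500.0 + rng.normal(0.0, read, 100_000)
master_dark = 20.0 + rng.normal(0.0, read / np.sqrt(16), 100_000)

# Subtraction removes the 20 e- dark current offset...
calibrated = light - master_dark
print(calibrated.mean())  # ~480

# ...but the two noise components add in quadrature:
print(calibrated.std())   # ~sqrt(3.0**2 + 0.75**2) ≈ 3.09, not 3.0
```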

Does that make sense?
Ivo Jager - creator of StarTools

Re: R&D'ing new image processing technique; crowdsourcing datasets!

#6

Post by bobharmony »


Wow, Ivo, I continue to be in awe of the creative thought process you go through to come up with these great ideas! I will be happy to provide a couple of stacks for you to experiment with.

Because I image with a DSLR, the temperature matching between the darks and lights will not be as exact as those of my brethren who are using cooled cameras. Is that an issue for the experiment - or maybe just another data point to add to the mix?

If it would be useful (I use DSS for stacking), I could include the .txt file that describes the files used in stacking and the parameter settings for the stack.

It'll be a day or two before I can pull some of this together. Would you prefer stacks from the current version of DSS? I will need to run stacks with just the lights (and maybe a new stack of the full set of files with the same version of DSS).

Bob
Hardware: Celestron C6-N w/ Advanced GTmount, Baader MK iii CC, Orion ST-80, Canon 60D (unmodded), Nikon D5300 (modded), Orion SSAG
Software: BYE, APT, PHD2, DSS, PhotoShop CC 2020, StarTools, Cartes du Ciel, AstroTortilla


Re: R&D'ing new image processing technique; crowdsourcing datasets!

#7

Post by startoolsastro »


bobharmony wrote: Tue Jul 30, 2019 12:05 pm Wow, Ivo, I continue to be in awe of the creative thought process you go through to come up with these great ideas! I will be happy to provide a couple of stacks for you to experiment with.

Because I image with a DSLR, the temperature matching between the darks and lights will not be as exact as those of my brethren who are using cooled cameras. Is that an issue for the experiment - or maybe just another data point to add to the mix?

If it would be useful (I use DSS for stacking), I could include the .txt file that describes the files used in stacking and the parameter settings for the stack.

It'll be a day or two before I can pull some of this together. Would you prefer stacks from the current version of DSS? I will need to run stacks with just the lights (and maybe a new stack of the full set of files with the same version of DSS).

Bob
Thanks so much Bob - that would be super helpful. The whole idea behind this exercise is to get real data, warts and all, as real people capture it; imperfection is key to validating the code for production use. So, yes please! :)
The parameter settings for the stack would be very helpful too; if something is not quite as expected, they might provide some clues.
Any version of DSS will do - just use what you would normally use, for both stacks.

The key thing is to keep all other variables constant, and have the two stacks differ only in their calibration.
Ivo Jager - creator of StarTools

Re: R&D'ing new image processing technique; crowdsourcing datasets!

#8

Post by bladekeeper »


Hi Ivo,

I've sent a PM to you with links to the data. Hope that helps some and thanks for the opportunity to help out! :)
Bryan
Scopes: Apertura AD12 f/5; Celestron C6-R f/8; ES AR127 f/6.4; Stellarvue SV102T f/7; iOptron MC90 f/13.3; Orion ST80A f/5; ES ED80 f/6; Celestron Premium 80 f/11.4; Celestron C80 f/11.4; Unitron Model 142 f/16; Meade NG60 f/10
Mounts: Celestron AVX; Bresser EXOS-2; ES Twilight I; ES Twilight II; iOptron Cube-G; AZ3/wood tripod; Vixen Polaris
Binoculars: Pentax PCF WP II 10×50, Bresser Corvette 10×50, Bresser Hunter 16×50 and 8×40, Garrett Gemini 12×60 LW, Gordon 10×50, Apogee 20×100


Re: R&D'ing new image processing technique; crowdsourcing datasets!

#9

Post by startoolsastro »


Learning some interesting things already! :)
Looks like I have some more work to do on the SNR gain estimation to better cope with the signal modification (+noise) from the darks.
I may well be too ambitious here, in which case the "uncalibrated" stack would have to include bias/darks as well to take them out of the equation (e.g. the only remaining difference between the two stacks being flats).
These are precisely the sort of "real world" scenarios I'm looking for! Thank you!
Ivo Jager - creator of StarTools