If you're using StarTools, the Life module gives you a few ways of pushing back busy star fields and refocusing the viewer's attention on the larger-scale structures. Try the Isolate preset (without a mask), or try a tweaked version with the [Detail Preservation] parameter set to [Min Distance to 1/2 Unity]. You can even run two iterations (a combination of the two) if you want.
Note that the above techniques do not use star masks at all.
You can, of course, also use techniques to manipulate the star shapes themselves (these techniques do use a star mask), for example morphological filters (as found in StarTools' Shrink module).
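To illustrate the morphological approach (not StarTools' actual implementation, just a minimal sketch using scipy; the function name and parameters are my own):

```python
import numpy as np
from scipy import ndimage

def shrink_stars(image, star_mask, iterations=1, size=3):
    """Shrink star profiles with greyscale erosion.

    A rough analogue of a morphological 'shrink' filter:
    erosion pulls each bright profile inward, and the star
    mask confines the effect to the stars themselves.
    """
    result = image.copy()
    for _ in range(iterations):
        eroded = ndimage.grey_erosion(result, size=(size, size))
        # Only apply the erosion where the star mask is set;
        # background pixels are left untouched.
        result = np.where(star_mask, eroded, result)
    return result
```

Running more iterations shrinks the profiles further; the mask is what keeps nebulosity and other large-scale detail safe from the erosion.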
Being more careful with how your processing decisions affect the prominence of stars in the first place is usually the key:
- avoid stretches that bloat stars (for example, don't use a manual Develop in StarTools)
- use star masks if you don't want stars to be processed; StarTools can auto-generate these for you whenever you need them (for example, mask out stars during wavelet sharpening if you don't want that procedure to affect them)
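The masking idea in that last point can be sketched like this (a simple unsharp mask stands in for the detail-enhancement step; the function name and parameters are illustrative, not any tool's API):

```python
import numpy as np
from scipy import ndimage

def masked_sharpen(image, star_mask, amount=1.0, sigma=2.0):
    """Unsharp-mask the image, but keep masked (star) pixels untouched.

    The star mask prevents the sharpening step from ringing
    around and bloating stellar profiles.
    """
    blurred = ndimage.gaussian_filter(image, sigma=sigma)
    sharpened = image + amount * (image - blurred)
    # Wherever the star mask is set, fall back to the original pixels.
    return np.where(star_mask, image, sharpened)
```

The same pattern applies to any local operator: compute the enhanced result for the whole frame, then composite it with the original through the star mask.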
More generally speaking, unless your tools are somehow incapable of dealing with stars, I cannot think of a valid reason to process stars and background separately. That's because this procedure, by definition, will yield artefacts: data needs to be artificially manufactured for the pixels that used to take up the full stellar profile. It tampers with the point spread function in your "starless" image, while the synthesized detail will affect the outcome of any algorithm that relies on spatially co-located pixel analysis (e.g. algorithms that "scan" the neighborhood). Most detail-restoring/enhancing algorithms fall in that category. The effect may be more or less pronounced, but it will be there. Depending on whether you are a purist, relying on "made up" data may be objectionable. As a developer, I would leave that decision to the user (there is a guide for using StarNet++ with StarTools here, if you want it); however, when publishing an image, I, for one, would certainly want to know about any treatment that completely made up data. Sadly (fortunately?), I can usually spot such a treatment quite easily, but others may not.
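The neighborhood argument can be demonstrated in a few lines (hypothetical pixel values, with a Laplacian as a stand-in for any kernel-based detail operator):

```python
import numpy as np
from scipy import ndimage

# A small patch containing a "star" on a smooth background.
patch = np.full((7, 7), 0.2)
patch[3, 3] = 1.0  # stellar peak

# "Starless" version: the peak is replaced with synthesized
# background (here a naive fill with the surrounding level).
starless = patch.copy()
starless[3, 3] = 0.2

# A neighborhood-based operator now responds differently not just
# at the replaced pixel, but at every pixel whose kernel overlaps it.
diff = ndimage.laplace(patch) - ndimage.laplace(starless)
affected = np.count_nonzero(diff)
print(affected)  # more pixels than the single one we replaced
```

Larger kernels (wavelets, deconvolution PSFs) only widen that footprint, which is why the synthesized pixels leak into the surrounding "real" detail.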
If you don't have tools to generate accurate masks easily (e.g. you use the GIMP or Photoshop), something like StarNet++ can definitely be useful for generating masks - if you really don't have anything else. Its output is unfortunately only as good as the data it was trained on; your mileage may vary heavily depending on how you stretched your image and what point spread function your optical train produces.
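One way to use a starless render purely for mask generation - without ever keeping its synthesized pixels - is to threshold the difference against the original (a sketch; the function name, threshold, and dilation amount are my own assumptions):

```python
import numpy as np
from scipy import ndimage

def star_mask_from_starless(original, starless, threshold=0.05, grow=1):
    """Derive a binary star mask from a starless render.

    Whatever the star-removal tool took out is, by construction,
    the stars: thresholding the difference recovers a mask without
    using the synthesized background pixels themselves.
    """
    diff = np.clip(original - starless, 0.0, None)
    mask = diff > threshold
    if grow:
        # Dilate slightly so the mask also covers faint stellar wings.
        mask = ndimage.binary_dilation(mask, iterations=grow)
    return mask
```

You then feed that mask into whatever masked operation you like, while the image you actually process remains the original, untampered data.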
Hope any of this helps!