Sunday, 28 November 2021
Just my humble opinion. 

Users of PixInsight have available to them an extremely useful and time-saving tool to simplify and speed up calibration of raw data: the Weighted Batch Preprocessing (WBPP) script. Users of Pier 14 may find it particularly useful, since each captured frame comes with a huge overhead in terms of file size and therefore the time needed to perform each step of calibration.

You could justify doing each step of calibration manually with data from a smaller sensor. Each step will often complete quickly, and you can just sit and wait the few seconds before starting the next one. When dealing with a large sensor such as the one on Pier 14, the completion time for each filter is typically measured in minutes. Do you just sit there and wait for it to complete, or come back later at a time when you think it has finished?

Without WBPP you need to process the data for each filter one at a time, manually reselecting the process each time and then selecting the relevant master calibration files and the light frames. This alone, I believe, makes WBPP a must for any of the piers.

My next post will address a recently voiced criticism of its use and explain why I disagree.

Thanks for reading. 

Ray 

Ray
Roboscopes Guinea Pig


Just my opinions. 

How much compromise in the quality of the data results from using WBPP? Everyone's mileage may vary, not least, I believe, with their level of post-processing skill. Personally, a small loss will hardly be noticeable in my final image; however, I'll still try to extract as much as possible from what I receive. I'd say very little will be lost simply as a result of using the automatic script in question, though that has only been true recently. The following is just how I currently approach calibration.

I use other tools in PixInsight with caution, to avoid culling data too soon in the process. First, a fairly quick 'Blink' to identify individual subs, or groups of subs, where cloud or, very occasionally, misalignment occurs; I can then eliminate those early on. After calibration but before alignment I usually run the SubframeSelector, but only to try to identify a good sub or two for aligning all the data. In addition, any subs that are excessively bad, as identified from the peaks on the graphs, I'd look at before excluding. Using a suitable algorithm to make this selection on a scientific basis is almost certainly better, but I prefer the more personal approach before obliterating anything. :) For many months now I have been using NormalizeScaleGradient (NSG) after alignment to identify the poorer subs more scientifically and exclude them from the integration. Any suggestions for better ways of doing any of the aforementioned would be very welcome.
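For anyone who prefers to see the idea in code, ranking calibrated subs by a simple statistic and flagging a candidate alignment reference can be sketched roughly as below. This is only an illustration of the concept, not what SubframeSelector or NSG actually compute; the file pattern and the noise metric are placeholder assumptions.

```python
# Rough illustration only: rank calibrated subs by a simple, outlier-resistant
# noise estimate and suggest the lowest-scoring one as an alignment reference.
# Assumes numpy and astropy are installed; the file pattern is a placeholder.
import glob

import numpy as np
from astropy.io import fits

def noise_estimate(path):
    data = fits.getdata(path).astype(np.float64)
    med = np.median(data)
    # Median absolute deviation: a crude stand-in for a proper noise estimate.
    return np.median(np.abs(data - med))

subs = sorted(glob.glob("calibrated/*_Ha_*.fit"))
ranked = sorted(((noise_estimate(s), s) for s in subs))
for score, path in ranked:
    print(f"{score:12.6f}  {path}")
# The top entry is only a candidate reference; I'd still Blink it before committing.
```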

So what are the penalties for using WBPP to do the calibration, and does convenience result in compromise? I'm currently aware of one situation where narrowband data ought to be looked at prior to the calibration process to see if any data might be lost, but that has only come to light recently.

Prior to then, WBPP did not have a way to deal with such a loss, which may explain a recent criticism that it loses much of your data. Since this summer it has been changed to allow us to mitigate the particular situation that could give rise to some loss. Admittedly, before then, provided you were aware of the issue and also knew how to deal with it, you could reasonably have been advised not to use the script.

As just mentioned, I'm now aware of just one issue, which may arise solely (or mainly) with narrowband data. As far as I understand it, this is where the signal is weak and some pixels may end up being set to zero after dark frame subtraction. With regard to broadband and OSC data, I'm not currently aware of anything WBPP may be doing to lose data, so I'd be most grateful if someone would reply and explain how this may arise so that I can look for solutions. Until then I will stick with WBPP.
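To make the mechanism concrete, here is a toy sketch of the effect in Python. The numbers are invented and this is not PixInsight's implementation; it just shows why faint pixels can clip to zero after dark subtraction and how an output pedestal avoids it.

```python
# Toy illustration of why faint narrowband pixels can end up at zero after
# dark subtraction, and how an output pedestal avoids it. The values are
# invented; this is not PixInsight's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Faint signal barely above the dark level, plus noise (arbitrary ADU values).
light = 100 + rng.normal(0, 5, size=100_000)   # raw light frame pixels
dark  = np.full(100_000, 100.0)                # master dark level

# Straight subtraction, clipped at zero (unsigned data cannot go negative):
calibrated = np.clip(light - dark, 0, None)
print("fraction of zero pixels without pedestal:", np.mean(calibrated == 0))

# Adding a small pedestal before clipping preserves the noise distribution:
pedestal = 50
calibrated_ped = np.clip(light - dark + pedestal, 0, None)
print("fraction of zero pixels with pedestal:   ", np.mean(calibrated_ped == 0))
```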

Hope this is understandable. 

I'd welcome some feedback if there are still scenarios that would make the use of the script undesirable.

Cheers, 

Ray 

Ray
Roboscopes Guinea Pig


Hi,
I'm also using WBPP to save a lot of time on calibration.

For integration, if the data is difficult, I can sometimes make a light integration for each filter to try different integration settings.

But for 90% of the data, I'm using WBPP.

I'm pretty sure that any gain I'd get from manual calibration would be spoilt by my skill in post-processing. I try each time not to overprocess the data, but it is very hard not to push the sliders. :)
Florent, 

So true about those sliders... "I'll try just a bit more... maybe a teeny bit extra shouldn't hurt." :) Then, after you've finally finished tweaking and take stock... "What on earth happened!" :(

Hope you are okay and thanks for those very welcome targets you are submitting for us. 


Cheers, 

Ray 

Ray
Roboscopes Guinea Pig


Hi,
I use WBPP 100% of the time. I do a cull in Blink first.

I calibrate, apply cosmetic correction with defect lists specific to the system the data came off, and apply subframe weighting before registering. I then come out of WBPP and move into NormalizeScaleGradient before finally integrating.

I don't see how WBPP can do any damage to the data, or do a poorer job of it, while calibrating and registering. It's the same thing one would do manually, only without sitting in front of the computer watching the Process Console.

Once past registration is where the special sauces come into play in any case, even during a manual process.

I would love to hear, though, how doing the calibration stage manually can help improve my images...

vikas
I don't do any processing myself, but I personally think it is all down to personal choice.

There is definitely no right or wrong way to go about processing, and it's purely up to "us", the processors, to achieve what we desire.

You can push the image to its limits, over-edit, under-edit, or even just accept an automated curve.

The final result is entirely dependent on your post-processing skills and any 'personal' techniques you add to the mix.

Unfortunately there is no "magic button" that will churn out that perfect Hubble image every time.

In my humble opinion.

Phil McCauley
Roboscopes Website Admin


Thanks gentlemen for the feedback.

I originally posted this simply to respond to something I read and was not able to reply to directly, not being a Syndicate member of the pier. The post was, I think, somewhat tongue in cheek and suggested that you could lose half of your data by using the automated script, the recommendation being that to achieve the best result you need to do the calibration manually.


For starters, anyone who uses the WBPP script will know how much quicker and easier it is, and I cannot imagine going back to doing it manually. Vikas has kindly shared how he goes about calibration to make good use of PixInsight and how that saves a lot of time sitting staring at a screen. :)

Going back to the post in question, there was unfortunately no mention of how data could or would be lost. I'm currently only aware of one situation, which occasionally arises after dark subtraction and was mentioned in an earlier reply. Manual calibration allows the entry of a pedestal value to rectify the problem, and up until this summer you couldn't do this in WBPP. The issue only occurs with the low signal typical of narrowband imaging.

If you don't already watch PixInsight tutorials from Adam Block, I'd recommend visiting his YouTube channel and subscribing. To get an understanding of the dark subtraction issue, select the VIDEOS tab along the top of his channel and scroll down to a video he made at the start of June this year, titled "Part 16.... Output pedestal for Narrowband Imaging". I'd provide a link if only I knew how to do so. :( He also shows how you can assess whether or not this issue exists in any of your data after calibration. If it does, then it's a bit of trial and error to find a suitable value to enter. I think only perfectionists will want to do this every time, as it can be quite time-consuming.
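For anyone who wants a quick check without working through the whole video, counting the zero-valued pixels in a calibrated sub gives much the same information. Adam does this with PixelMath inside PixInsight; the rough stand-in below is just a sketch in Python, with the file name as a placeholder assumption.

```python
# Rough stand-in for the zero-pixel check (Adam Block does this with PixelMath
# in PixInsight). Counts how many pixels in a calibrated sub were clipped to zero.
# Assumes numpy and astropy are installed; the file name is a placeholder.
import numpy as np
from astropy.io import fits

data = fits.getdata("calibrated/OIII_sub_001.fit").astype(np.float64)
zeros = int(np.count_nonzero(data == 0))
print(f"{zeros} of {data.size} pixels ({100 * zeros / data.size:.3f}%) are zero")
# A noticeable percentage suggests experimenting with an output pedestal;
# if it is essentially zero, a pedestal probably won't change the final stack.
```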

Honestly, it doesn't appear to make much of an improvement to my resulting image, but as I said, for others it might be worth investigating.

Does anyone else know of another reason to use manual calibration? 

Cheers, 

Ray 

Ray
Roboscopes Guinea Pig


Hi Ray,

the video from Adam Block you refer to is here - Adam Block Pedestal.

When I first saw it, the heavens came crashing down for me because I thought all the images I had processed before were trash and had this problem of pixels with a 0 value.

I incorporated his method into my workflow and made the PixelMath script, which I have attached.

However, after using it for a while I discarded it, as I could not find a single sub in countless datasets that had this problem.

Adam himself never mentioned it again in any of his later videos, so that was the end of that.

Now that you have brought it up again, I might give it a go for a few upcoming datasets...

vikas
Hi Vikas, 

Thank you for providing the link to the video. 

I agree that, for me anyway, the benefits are hard to see. However, I've found situations where the problem Adam describes exists in the narrowband data we receive, showing up as a great many red pixels when the script is applied, typically in the OIII data. I may attach a screenshot as an example in a later post.

I am now going to revisit this myself and see if it makes any noticeable difference compared to when I didn't use a pedestal value. This is likely to take some time, as it involves Pier 14 data and many subs.

Cheers,

Ray 

Ray
Roboscopes Guinea Pig


Well, I believe you are right, Vikas: after waiting several hours for the results, I probably won't bother to try it again. Where it might prove more useful is when there are very few subs and stacking will not average out the data. The dataset I used had nearly 200 subs, which may be why there was so little difference after they had all been combined. I suspect the NormalizeScaleGradient script brings a greater benefit to the end result anyway.

To reiterate, this post was only intended to counter an earlier negative review of WBPP on the forum. From the replies received, it appears that WBPP is, by choice, the popular way to go. :)

The conclusion I take from this is that WBPP is the better option and that any data you may lose will likely be inconsequential. 

Thanks for all the helpful feedback. 

Cheers, 

Ray 

Ray
Roboscopes Guinea Pig

