
advanced astrophotography

Introduction:

  • while some astronomical objects are bright (e.g. the Moon, Sun, Jupiter) and one can usually take quite satisfactory photos of these without much effort, the majority of subjects require much hard work to achieve great pictures
  • Jupiter, the Moon and Mars at opposition (and perhaps Saturn) are best done with a webcam taking ~1500 frames and stacking the best
    • Philips ToUcam with lens removed, attached to an 8-14" SCT or a 4-6" APO refractor

Steps when taking the photos:

  • avoid windy nights that will blur the photos and contribute to poor seeing at high magnification
  • avoid poor seeing nights if high magnification is to be used
  • go to a dark sky site if possible, or shoot in the early morning when there is less light pollution and better seeing
  • time it so the target is within ~30 degrees of the zenith to minimise atmospheric problems
    • NB. German equatorial mounts will have a problem if the target is at the zenith, as the camera will not be able to swing through the mount.
  • ensure the subject is adequately tracked to minimise movement trails
    • comets themselves should be tracked, not adjacent stars, if exposures are longer than ~60 sec
    • accurate polar alignment is important: although poor alignment doesn't affect tracking, it will cause field rotation
    • ensure telescope is well balanced and mount is level
    • ideally use a CCD auto-guider on an accurate mount such as a Meade LXD75 or better still a Losmandy mount
    • stars “move” at 15 arc secs per second
    • guiding tolerances depend on the focal length of the camera and the film image size (see the sketch after this list):
      • 100mm focal length tolerance = 25 arc secs
      • 500mm focal length tolerance = 5-6 arc secs
    • a well-aligned, level, balanced mount should enable unguided images of stars with a 100mm lens almost indefinitely, but with a 400mm lens the periodic error of the drive becomes visible at exposures greater than 5 min
    • select a mount with small periodic error, otherwise you may be limited to ~20 sec exposures
    • see guiding
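
A minimal sketch of the arithmetic above. The tolerance rule (2500 / focal length) is only a rough fit inferred from the two quoted values, not an authoritative formula, and the function names are my own:

<code python>
# Guiding numbers implied by the figures above. The tolerance rule is a
# rough fit to the two quoted values (100mm -> 25", 500mm -> ~5"), not gospel.
SIDEREAL_RATE_ARCSEC_PER_S = 15.0  # apparent star motion, as quoted above

def guiding_tolerance_arcsec(focal_length_mm):
    return 2500.0 / focal_length_mm

def max_untracked_drift_exposure_s(focal_length_mm):
    """Longest exposure before sidereal drift alone exceeds the tolerance
    (i.e. with no drive running at all)."""
    return guiding_tolerance_arcsec(focal_length_mm) / SIDEREAL_RATE_ARCSEC_PER_S

for fl in (100, 200, 500):
    print(f"{fl}mm: tolerance ~{guiding_tolerance_arcsec(fl):.0f} arcsec, "
          f"drift limit ~{max_untracked_drift_exposure_s(fl):.1f}s untracked")
</code>
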
  • ensure vibrations are minimised:
    • sturdy mount
      • don't touch the telescope or camera for 10 sec prior to, or during, the exposure
        • use the self-timer on the camera or a remote shutter release
      • use a camera which does not have mirror vibration, i.e. a non-SLR, or an SLR with mirror lock-up
  • ensure optics are as good as possible:
      • photos through cheaper refractors, Newtonians, or even SCTs will show annoying aberrations
      • using a Fastar to achieve a faster f/ratio with an SCT causes significant coma; Starizona's HyperStar is said to be a better alternative and gives a faster f/ratio of f/1.9
      • if possible use an expensive APO refractor (even a 4" will be adequate) or a good quality camera telephoto lens such as a Canon 200mm f/2.8L II (avoid zoom lenses and optically image-stabilised lenses)
    • ensure telescope optics are collimated
    • avoid vignetting when using afocal method
  • ensure adequate magnification is used so subject is not too small on image:
    • comet with tail - usually 200-500mm is adequate
    • comet head - may need 1000-3000mm for good effect
    • moon, sun - anything more than 500mm focal length is adequate
      • Jupiter, Mars at opposition, Saturn - ~200 to 300x magnification is needed
    • nebulae, galaxies, globular clusters - usually 1000-3000mm for good effect - an 80mm refractor may be adequate
  • ensure focus is perfect
    • if possible use a dedicated focusing screen and a viewfinder magnifier
      • on digital cameras, set manual focus and select infinity - you may be able to use the viewfinder's manual-focus magnification, or a connected laptop, to focus the telescope
    • may need to use additional methods to assist accurate focusing of the telescope such as Hartmann masks
  • ensure signal on film or digital camera will be adequate:
    • this is often trial and error
      • the subject must not be under-exposed, as you will never be able to create a sufficient signal:noise ratio for an optimum picture
      • increase length of exposure - but do not go past the tracking limits of your system
        • use a faster lens, e.g. f/2 to f/5.6
      • use a higher ASA or ISO rating - if using film, consider gas hypersensitisation
      • the subject must not be over-exposed, or all stars will become fully saturated and lose any color; if there is light pollution, ensure it does not saturate the background, which would make a satisfactory signal:noise ratio impossible
      • better to use multiple short exposures and combine them (see below)
        • consider a filter to minimise light pollution (e.g. a red H-alpha filter, an OIII filter, or another narrowband filter), bearing in mind that if the filter is not perfectly flat, it may introduce optical aberrations which may need fixing before the image can be stacked with images taken without the filter
    • determine individual sub-exposures duration:
      • this will depend upon:
        • subject brightness
        • sensor ISO (and whether it is modified for H-alpha spectrum if target is a H-alpha emitter)
        • effective f ratio
        • use of filters such as light pollution filters which will require longer exposures
        • degree of Light Pollution which will limit exposure duration
        • accuracy of tracking for the given effective focal length which will limit exposure duration
      • as long as the sub-exposure is long enough to register the sky fog at some 30+ times the read noise of your camera, then 30 x 1 min exposures = 1 x 30 min exposure. This is the so-called skyfog-statistics-limited regime.
      • with digital cameras, many people aim to get the sky-glow histogram mid-point at 10% of maximum if the camera has very low noise (e.g. Canon 1D Mk II, 20D, 350D/XT), whereas those with noisier cameras at ISO 800-1600 (Canon 300D, 10D and Nikon D70) aim for this mid-point being at 25-50% of maximum.
      • suggested sub-exposure durations at ISO 1600 for dark skies (see the sketch after this list):
        • 1min at f/2.8; 
          • due to most people's mount limitations, this is what most aim for; hence the popularity of the EF 200mm f/2.8L lens
        • 2min at f/4;
        • 4min at f/5.6;
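
The durations above follow the usual square law (required exposure scales with the square of the focal ratio, so each full stop doubles the time). A small sketch of that arithmetic, assuming the ISO 1600 dark-sky baseline quoted above:

<code python>
def sub_exposure_min(f_ratio, base_minutes=1.0, base_f=2.8):
    """Required sub-exposure scales with the square of the focal ratio:
    1min at f/2.8 -> 2min at f/4 -> 4min at f/5.6, matching the list above."""
    return base_minutes * (f_ratio / base_f) ** 2

for f in (2.8, 4.0, 5.6, 8.0):
    print(f"f/{f}: ~{sub_exposure_min(f):.1f} min per sub at ISO 1600, dark sky")
</code>
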
    • determine how many sub-exposures you need to take and stack:
      • the aim is to maximise the signal:noise ratio, and for images that have not become saturated and for which motion blur can be controlled:
        • signal:noise ratio improves with the square root of total exposure time
        • readout noise grows with the square root of the number of exposures taken, thus 4 x 5 min exposures should give a better S:N ratio than 20 x 1 min exposures (see the sketch after this list)
      • image stacking improves:
        • signal:noise ratio
          • stacking reduces the impact of random noise (not constant thermal noise - this is reduced by subtracting dark frames)
        • dynamic range
          • stacking increases the number of possible digitized values linearly with the number of images stacked.
          • stacking thus allows you to increase the brightness of dim pixels to above the range of noise
      • number of sub-exposures may be limited by:
        • changing appearance of subject (eg. a rotating planet)
        • increasing thermal noise in sensor from prolonged use
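
A toy signal-to-noise model illustrating the 4 x 5 min vs 20 x 1 min claim above. The electron rates and read noise are made-up illustrative numbers, not measurements from any real camera:

<code python>
import math

def stack_snr(n_subs, sub_minutes, signal_rate=10.0, sky_rate=50.0, read_noise=8.0):
    """Toy SNR model: signal grows linearly with total time, shot noise with
    its square root, and read noise with the square root of the sub count.
    Rates are electrons/minute; read noise in electrons (illustrative only)."""
    t = n_subs * sub_minutes
    signal = signal_rate * t
    noise = math.sqrt(signal + sky_rate * t + n_subs * read_noise ** 2)
    return signal / noise

print("4 x 5min :", round(stack_snr(4, 5), 2))    # fewer, longer subs win...
print("20 x 1min:", round(stack_snr(20, 1), 2))   # ...at equal total exposure
</code>
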
    • determine if you need to take a mosaic to ensure the dynamic range of the subject is covered:
      • mosaicing uses different exposure durations for the sub-exposures so that, when combined, an even greater dynamic range can be represented in the final image, in a similar way to landscape photographers who create high dynamic range images by bracketing exposures and combining them
      • this need depends upon:
        • subject dynamic range (eg. bright stars or bright areas within otherwise dim nebulae)
        • sensor dynamic range:
          • webcams usually have 8bits
          • point and shoot digitals have 8-10bits
          • most dSLRs have 10-12bits
          • high end dSLRs such as Canon 1D Mark III have 14bits
          • dedicated CCD astrocameras and some medium format dSLRs may have 16bits
  • take photos in RAW mode if using a digital camera
    • in-camera creation of jpegs, or even some tiff files, results in only 8 bits per color, meaning data is lost from the raw image, which is often 12 or 14 bits per color
    • in addition, lossy formats such as jpeg create their own artifacts, and dark frame subtraction on them is unsatisfactory
    • set white balance to manual mode such as daylight - autoWB may result in different effects with each exposure
    • disable the in-camera dark frame mode if you want better results by taking your own dark frames and using their median
  • take the additional photos needed for later image manipulation:
    • multiple short exposures, the more the better eg. 15sec each
    • 5 to 10 dark frame exposures of same exposure duration if camera does not have dark frame built in
    • 10-30 flat field exposures of same exposure duration with system aimed at a uniformly lit target
      • these will be used to remove uneven image exposures resulting from vignetting, etc within the system
      • aim for an average image brightness of about half the maximum brightness your camera can record
      • use exactly the same system set up as for your raw images, including focus point AND orientation of the camera
    • 10 dark frame flat field exposures
    • optionally take bias exposures, esp. if you need to scale the dark frames (see the sketch after this list)
      • these zero-duration exposures are subtracted from the dark frames to set their true electrical zero signal, turning them into thermal frames; a thermal frame can then be scaled even if the exposure times of the thermal frame and the raw image differ
    • if using a monochrome camera:
      • exposures in the different colors for LRGB imaging eg. H-alpha, Red, Green & Blue (blue is often double exposure of others)
        • these can then be combined later
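
A minimal numpy sketch of the bias/dark-scaling arithmetic described above. The frames here are synthetic stand-ins with hypothetical values; real frames come from your camera:

<code python>
import numpy as np

def scale_dark(master_dark, master_bias, t_dark_s, t_light_s):
    """Subtracting the bias leaves a pure thermal frame, which scales
    linearly with exposure time, as described above."""
    thermal = master_dark - master_bias
    return master_bias + thermal * (t_light_s / t_dark_s)

# illustrative synthetic frames only
rng = np.random.default_rng(0)
bias = 100.0 + rng.normal(0, 2, (4, 4))                 # electrical zero level
dark_300s = bias + 30.0 + rng.normal(0, 3, (4, 4))      # 300s of dark current
dark_120s_est = scale_dark(dark_300s, bias, 300, 120)   # estimate a 120s dark
</code>
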

Digital darkroom steps:

  • 1. calibrate the RAW, BMP or TIFF images (not jpegs):
    • convert RAW images to 16bit linear BMP or TIFF (ie. not jpegs as non-linear), unless your image software can manipulate the RAW image files
    • use image arithmetic to remove thermal noise and to "flatten" the image - i.e. remove uneven brightness
      • median combine (or average) all dark frame images to make a master dark frame image
      • subtract the master dark frame image from each raw image
        • this removes thermal dark current, which is an artifact rather than noise; the subtraction itself actually adds noise, which is why a number of dark frames are taken and their median used, to minimise the noise introduced
      • do the same with the dark frame flat field exposures to make a master which is then subtracted from each flat field exposure
      • median combine all flat field images to create a master flat field image
        • use this in your image software's flat field routine to remove uneven brightness from the raw images that have had the dark frame subtracted
      • the final calibrated image = (raw image - master dark) / master flat field (see the sketch after this step)
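
A minimal numpy sketch of step 1. Each argument is a list of 2-D frames; real tools such as MaximDL also handle bias frames, dark scaling and hot-pixel rejection:

<code python>
import numpy as np

def calibrate(lights, darks, flats, flat_darks):
    """Median-combine the calibration frames into masters, then apply
    calibrated = (raw - master dark) / master flat, as above."""
    master_dark = np.median(np.stack(darks), axis=0)
    master_flat = (np.median(np.stack(flats), axis=0)
                   - np.median(np.stack(flat_darks), axis=0))
    master_flat /= master_flat.mean()   # normalise so division preserves levels
    return [(light - master_dark) / master_flat for light in lights]
</code>
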
  • 2. aligning and stacking the calibrated raw images:
    • use astro-imaging software such as Registax, Registar, Mira, MaximDL or CCDSoft to automatically align the images then stack them
    • you will need to set a reference image and choose a method of alignment such as Auto Star Matching
    • stacking images improves the signal:noise ratio: by combining 10 images you get 10x the signal but only ~3x the noise (noise is random, so it grows with the square root of the number of images, whereas the subject, which is almost constant, grows linearly) - a bare-bones sketch follows this list
    • a signal:noise ratio of greater than 3 is desirable
    • Registar, although difficult to learn, is superior to both Maxim and CCDSoft at aligning images (Mira is second best):
      • a couple of things it can do that the others can't:
        • 1. Allow you to register images of different scales.
        • 2. It will do image warping to match the stars.
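
A bare-bones illustration of step 2 using integer-pixel phase correlation. Dedicated tools like Registax or Registar do sub-pixel shifts, rotation, scaling and warping; this sketch assumes pure translation:

<code python>
import numpy as np

def shift_offset(ref, img):
    """Integer pixel offset that rolls img onto ref, via phase correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    return np.unravel_index(np.argmax(np.abs(corr)), corr.shape)

def align_and_stack(frames):
    """Shift every frame onto the first, then average-combine the stack."""
    ref = frames[0]
    aligned = [np.roll(f, shift_offset(ref, f), axis=(0, 1)) for f in frames[1:]]
    return np.mean(np.stack([ref] + aligned), axis=0)
</code>
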
  • 3. maximising your image data:
    • use the histogram curve ONLY after all the prior procedures have been done, otherwise you will lose important data by clipping it:
      • avoid using brightness & contrast controls - these are just linear stretches of the histogram such that increasing brightness is equivalent to both pointers on the histogram being moved equally to the left, while contrast adjustments move the pointers in opposite directions.
      • use the gamma control to create a more film-like response by flattening the ends and increasing the mid-range gradient
      • ? use contrast stretch in ImagesPlus using digital development, and in Photoshop using the LAB-mode luminance channel only
      • using DDP often works wonders with globular clusters and is a quick and dirty way to get a good non-linear stretch. The resulting histogram will let you present the image with nice core detail and more of the fainter stars surrounding the core. This can also be done in PS with good use of curves… it just takes more time and practice. (A toy stretch sketch follows this list.)
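
A toy non-linear stretch in the spirit of the gamma/DDP discussion above. Real DDP implementations are more sophisticated; this just lifts the mid-range:

<code python>
import numpy as np

def gamma_stretch(img, gamma=0.5):
    """Normalise to 0..1 and raise to a power below 1: dim mid-tones are
    lifted while the black and white points stay pinned, much like the
    gamma control described above."""
    x = (img - img.min()) / (img.max() - img.min())
    return x ** gamma
</code>
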
  • 4. remove imperfections and artifacts as needed:
    • clone tool to remove cosmic ray strikes, etc
    • bloom removal
    • fixed pattern noise removal using Fast Fourier Transform filters
    • deconvolution to remove motion blur or coma aberration (see the sketch after this list):
      • Lucy-Richardson deconvolution
      • Maximum Entropy deconvolution
    • consider adding Gaussian blur to minimise appearance of residual noise
    • correct optical distortions
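
Lucy-Richardson deconvolution is available off the shelf in scikit-image. A minimal sketch with a made-up Gaussian PSF; in practice you would estimate the PSF from an unsaturated star:

<code python>
import numpy as np
from skimage import restoration

# made-up Gaussian PSF standing in for one measured from a star
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

blurred = np.random.default_rng(1).random((64, 64))  # stand-in for a real frame
restored = restoration.richardson_lucy(blurred, psf, 30)  # 30 iterations
</code>
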
  • 5. advanced colour manipulation as needed:
    • colour balance, hue & saturation
  • 6. masking and other tricks
    • light pollution often adds a gradient effect to the brightness of the image
      • this can be minimised by using gradient masks in Paint Shop Pro or Adobe Photoshop
      • in Iris, use Background fit, which fits a polynomial to create a sky background and removes it (see the sketch after this list)
      • ? use lighting effects to generate a light fall-off profile & reducing fall-off using a color burn layer in Photoshop.
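
A minimal numpy sketch of polynomial background fitting in the spirit of Iris's Background fit and Mira's fit background (least squares on a low-order 2-D polynomial; the optional mask plays the role of the excluded regions described in the Mira notes further down). The function name and parameters are my own:

<code python>
import numpy as np

def remove_gradient(img, order=1, mask=None):
    """Fit a low-order 2-D polynomial to the sky and subtract it.
    mask marks pixels to exclude from the fit (nebula, bright stars)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    terms = [xx ** i * yy ** j
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    b = img.ravel().astype(float)
    keep = np.ones(b.size, bool) if mask is None else ~mask.ravel()
    coeffs, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
    return img - (A @ coeffs).reshape(img.shape)
</code>
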
    • comet tricks:
      • Iris RGradient processing to display jets more clearly
      • Iris Log view to display extensions more clearly
      • Iris angular filtering
      • Iris wavelet processing
    • detecting supernovae, asteroids & variable stars:
      • use Iris Blink processing
  • 7. finally, apply the unsharp mask filter to sharpen the image
    • ? unsharp on the luminance channel only (a minimal sketch follows)
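
A minimal unsharp-mask sketch; per the note above, apply it to the luminance channel only to avoid colour artifacts:

<code python>
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(luminance, radius=2.0, amount=0.8):
    """Classic unsharp mask: add back a scaled difference between the
    image and a blurred copy of itself."""
    blurred = gaussian_filter(luminance.astype(float), radius)
    return luminance + amount * (luminance - blurred)
</code>
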

If you are really serious:

  • get some high end equipment:
    • Losmandy mount with dual axis drives
    • 4-6" f/5.6 APO refractor
    • SBIG CCD camera cooled to -45°C with in-built auto-guider and filter wheel
    • high quality H-alpha or OIII filter to minimise light pollution & maximise emission nebulae
    • this should only cost $A20,000-30,000 :)

Pertti's method:

1. Take RAW images (light, dark and flat frames).
2. Develop all the RAWs with ImagesPlus (linear mode, daylight white balance) into 16-bit TIFF files.
3. Average-combine darks into a master dark and flats into a master flat (I don't use flats very often but sometimes they are crucial).
4. Calibrate light frames using the master dark and master flat.
5. Align calibrated light frames.
6. Combine aligned frames (adaptive addition or one of the variations of average combination).
7. Enhance the images (Digital Development, Levels, iterative restoration, sharpening, etc.).

What happens here is that the master dark and flat remove all constant problems in the image. Calibrating with the master dark does actually increase the S/N ratio, but too little to be noticed, because it removes constant hot pixels only. The master flat does not change the S/N ratio; rather it balances the brightness of the center with the corners and removes dust effects. Stacking improves the S/N ratio the most and makes it possible to perform more drastic processing later on without making the noise visible. Besides, if you (manually) enhance images before stacking, you will need to perform the enhancement multiple times. It is much easier to do after stacking, because there is only one image!

ImagesPlus does a nice job with its Adaptive Addition, where overflow is automatically avoided. When the signal level is high enough to begin with, I use averaging or one of its variations, but when the signal level is low, as for nebulae, I use adaptive addition. In fact, I use it a lot, because even for globulars and open clusters it helps to boost the dimmest parts of the image. Adaptive Addition in ImagesPlus does not just add; it raises the level of the dim parts of the image proportionally more.
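
ImagesPlus does not publish its Adaptive Addition algorithm, so the following is only a toy illustration of the behaviour Pertti describes: summing, then compressing with an asinh-style curve so dim regions are lifted proportionally more while overflow is avoided:

<code python>
import numpy as np

def adaptive_add_toy(frames, k=5.0):
    """Toy stand-in only; NOT ImagesPlus's actual algorithm. Sum the frames,
    then compress with asinh so dim detail is boosted and overflow avoided."""
    total = np.sum(np.stack([f.astype(float) for f in frames]), axis=0)
    return np.arcsinh(k * total / total.max()) / np.arcsinh(k)  # 0..1 output
</code>
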

Image Processing Nomenclature Abbreviations

  • R:     Raw image
  • DS:    Dark Subtracted
  • S(T/#): multiple Stacked images (Type/# of images)
    • The "T" in parentheses should be replaced by a word to indicate the type of stacking done.
    • Additive stacking is the most common type, but other types include Averaging and Median stacking.
    • The "#" should be replaced with the number of images in the stack.
  • LS:    Light subtraction       
    • Also referred to as stray light subtraction or scattered light subtraction. 
    • A form of background compensation or flattening technique.
  • FFC:   Flat Field Correction
  • SSF:   Smoothness/Sharpening Functions       
    • Filtering techniques such as sharpening, softening, unsharp masking, high-pass or low-pass filtering, Gaussian blurs, etc. 
  • HM:    Histogram Manipulation
    • Histogram equalization, stretching, clipping, histogram curve modification, etc.  Histogram manipulations may be done manually or automatically.  For instance, filters for brightness enhancement, contrast enhancement, color/tone enhancement, and the like work by histogram modification.
  • MT:    Manual Touchup: localized manual fixes, such as hot pixel removal
  • DSP:   Digital Signal Processing techniques: these include convolution, deconvolution, and Fast Fourier Transforms
  • DDP:   Digital Development Process       
    • A technique that reduces the dynamic luminance range of a digital image to be more like those obtained from conventional film photography.  The middle range of luminance is kept linear but the high and low ends of the luminance are made nonlinear to avoid saturation at the high end and to enhance details at the low end.
  • C:     Composite       
    • An image created by combining pieces of multiple images into a single image.
  • M:     Mosaic       
    • Mosaic is a specific type of composite image.
    • In general, a composite image does not necessarily represent a real scene. 
    • A mosaic is a composite image that is intended to represent a true scene by stitching together an assemblage of images (generally overlapping ones).
    • A mosaic may be created to provide a larger or more detailed continuous image than a single shot could have provided, or to provide detail for a portion of an image that would not have been possible in a single image. 
  • O:     Other       
    • Anything not covered by the above. 
    • Helpful to let people know that additional processing was done beyond what is specified by the abbreviations that accompany the image. Also helpful as feedback regarding this list; if "Other" appears very often, it would indicate that some commonly-used techniques do not appear on this list and should be added.

Using Mira to remove background gradients:

As the software engineer explained it to me on the phone, Mira Pro is able to detect a gradient (which he referred to as a slope of linear data) and remove it with some complicated math that he tried to explain. I lost him halfway into it… You are able to specifically define regions that should not be used in the computation of the background data.

Now, how do you tell if you removed the gradient, or where the gradient is in the first place? Mira has a powerful feature that allows you to assign a color palette to a grayscale image so that you can look at your data in different color spectrums. By fine-tuning the histogram stretch and the color palette contrast and gamma, you can get views of your data that reveal gradients in stunning color. For instance, let's say you are looking at your full screen nebula region. You would apply an aggressive stretch to the grayscale image, then assign a color palette to the image, do a little tweaking, and lo and behold, one corner of the image is bright red fading to bright blue on the opposite corner of the image.

Now you use the gradient removal tool (which they call fit background) by selecting regions to exclude from the calculation, set your math options, tell it to maintain the value intensity in the central part of the image, and click OK. You can then look at the false color data and determine if the field is perfectly flat. If it is not, you can undo, and do a little more tweaking of the fit background dialog until you get it right.

At one point I was looking at M20, which filled the entire field. M20 was bright green, the background was bright blue, with a bright red gradient running through the background of the image. I selected the central brightest portion of M20 and clicked on the fit background tool, and it perfectly removed the gradient, creating a perfect blue background with no red, although all the dim nebulosity stayed bright green (and yellow) at the brightest points. I did this on all four channels. It was magic. That's the only way to explain it. I also tried an image from the FSQ-106 and STL11000 - a huge nebulous star field. The gradient popped out in living color. In grayscale I could not see it even with the most aggressive histogram stretch.

In addition to that, you can load all four channels into an image set, where you can flip through the four images at 1-30 frames per second as an animation, make real-time histogram stretches and color palette adjustments that apply to all four channels at once, and easily see the gradient differences in each channel. I could go on forever about what a great tool this is for *the perfectionist*… but I only know about 10% of the program so far. ;)

$50,000 for imaging equipment… crummy pictures from gradients. $50,000 + $1300 for software, and your images come out nicer and more accurate, with easy tools to repair data affected by light pollution. Now it's not all that easy, but we spend less than $1300 on three emission line filters. ;) There is a lot more to image processing than gradient removal, but I will tell you, without gradients, image processing becomes much easier. The more time I spend in astrophotography, the more I realize that imaging raw data is the easy part. It's what you do with it once you have it. This is where software comes in.
People have a hard time justifying expensive software purchases because it does not weigh 47 lbs and break your back putting it in your trunk… but if you think about it, software for image processing has a far greater impact on the final product of all your labors than the equipment you use. Give me an 8" LX200 and an ST7XME and put me up against a 14" RC with an ST10XME. If the owner of the 14" RC can't image process, then I produce the finer image. My point: think of software as being as valuable as your imaging equipment, and then you start getting the proper perspective on value. I would rather own a 130mm refractor and lots of great software (and RAM) than a 180mm refractor and Paint Shop Pro.

rb from Mt Ewell Observatory July 2004

I agree one hundred percent with your philosophy. It is a lot like buying a telescope. Many people new to this hobby and just starting out buy such things as an LX200GPS-12 (yours truly included). They think that the big pretty OTA will allow them to see Hubble-type vistas right in the eyepiece. The mount is only that part of the system that holds the telescope up so you can look through it! I now tell anyone who is thinking about getting into this hobby to buy a GOOD mount, then look for an OTA. If I were doing it again, I would have bought something like a Losmandy GM-11 and put my old C-8 OTA on it until I could afford a larger OTA. (It would have been less expensive too.) Buying software is much the same. I used Paint Shop Pro for a long time (and I still like some of its tools), but PS-CS allows me to do things that PSP simply does not have the power to do. This is what the extra few hundred dollars in purchase price buys you. I suppose the same thing is true about Mira. You can use lower end programs and still come out with very good images, but the higher end tools produce the images with a lot less aggravation. Don Waid

I have seen so many people plan their hardware budget carefully to get a mount, scope, pier, reducers, focusers, etc., and have no clue that they will eventually spend another $2000 or more in software to process their images. They also have little insight into the process work flow and how many disparate programs they will need. For example:

1. Acquire your data with CCDSoft and do image links with TheSky to find the guide star. Get FocusMax software to focus your electronic focuser for sharp stars.
2. Use reduction groups in CCDSoft (now Maxim, too) to reduce and possibly align your images. If you don't like their alignment quality, get other programs like Registar or MIRA.
3. Combine the reduced images in such programs as Sigma or Russ Croman's RC Control Panel to take advantage of more sophisticated rejection methods.
4. Deconvolve the luminance in programs like CCDSharp for even sharper stars.
5. Bring the R, G, B into Maxim, normalize the background and apply the color weights after aligning. Oh, you may have used alignment in CCDSoft or gone out and used programs like Registar. Create a 16-bit RGB TIF file in Maxim for import into yet another program, Photoshop. If you don't own it yet, prepare to give up a 2" Nagler eyepiece and buy Photoshop, and if you don't have Photoshop CS to work in 16-bit mode, better plan to upgrade $$.
6. You may still go back into Registar to align the RGB TIF with the FITS luminance.
7. Import into Photoshop. Oh, wait, the FITS file is not importing. Get Eddie Trimarchi's free FITS plug-in for Photoshop. If you are working with 32-bit real (IEEE) files in such programs as MIRA, then you need to buy Eddie's commercial program.
8. Spend 3 years learning Photoshop (Total Training's DVD set is wonderful in this regard $$), and possibly buy Grain Surgery to smooth backgrounds.
9. Make sure you budget for a laptop or desktop computer with the largest hard drive you can find and a fast (e.g. 3GHz) processor, and don't forget that large-screen calibrated monitor.
10. And………

I know this is a generalized example (rant) just to make a point, and is nowhere near exact. There are many other programs out there, such as ImagesPlus, MIRA, AIP4WIN, Picture Window Pro, AstroArt, etc., just to confuse your selection. Have you ever tried flowcharting this? Anyway, I was one who did a good job budgeting for the hardware with no clue as to the software. As Richard points out, it is your processing skills that will make your raw data come alive, but you have to figure out your process flow first. Don Goldman www.astrodon.com

 

Another method to remove gradients:

The information for gradient removal is in the picture, when it is a many-stars, much-sky type of picture. My backgrounds were effectively removed by doing a wide-range median filtering on my original picture and subtracting it from the original. Of course, in many cases there are structured foregrounds which make this procedure difficult, and artistic talent is necessary to make a structureless background. In the case of hi-res pictures, it is not necessary to do a median on the full-scale picture (which took ages, especially on a 33 MHz 486). Resampling the picture to 10% of its size, median filtering it, and resampling back to the original size was just as effective. Siebren Klein s.s.klein@tue.nl http://www.geocities.com/siebren2001/index.html
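
A sketch of Siebren's shrink-median-subtract trick using scipy. The function name and the filter size are my own; the 10% resampling factor is his:

<code python>
import numpy as np
from scipy.ndimage import median_filter, zoom

def median_background_subtract(img, scale=0.1, size=15):
    """Shrink the image, wide-range median filter it, blow the result back
    up to full size, and subtract it as the sky background."""
    small = zoom(img.astype(float), scale)
    factors = (img.shape[0] / small.shape[0], img.shape[1] / small.shape[1])
    bg = zoom(median_filter(small, size=size), factors)
    return img - bg[:img.shape[0], :img.shape[1]]
</code>
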

 

Manual normalisation: To balance the color in an RGB image you need to have all the images "normalized". Roughly speaking, and this may not be technically correct, you need to set the background level of the images to the same value. As you can see from my post, the background ADU counts for my R, G & B were all very high and they were different. This is primarily due to the high level of light pollution I was imaging under. This light pollution is not even across the spectrum, hence the different levels of background intensity. It of course also affects the main part of the image, but there you do not have a standard to go by. The standard to balance to is a dark sky location. In effect, the normalization process is to remove as much of the light pollution (sky glow) effect as possible. This is not to be confused with correcting a gradient problem - that is a different process.

To normalize the R, G & B frames I use pixel math in MaxIm. Note that the sub-exposures have been reduced and combined into three R, G, B master frames. If any gradient removal plug-ins are used, I do that before normalization. Now for the normalization steps. First open all three R, G, B master frames in MaxIm. Open the image information window (View/Information). Select a master frame to work on. Choose three locations on the master frame where you know the sky should be dark. These will be our standardization reference points. (Avoid any nebulae, stars, etc.) I set the aperture radius of my cursor to 10 pixels. (Right click on the image and choose "Set Aperture Radius".) Move the cursor over the three areas you chose and read the average ADU count displayed in the Information Window for each location. Get out your calculator and average these readings. This now becomes the background count for the frame. My R frame had a count of about 9,200. (Very high; I hope yours are not that bad.)

You now go to pixel math (Process/Pixel Math). I try to bring the background count down to about 125 to 150. To do this, set the parameters in the Pixel Math Window to: Scale factor % = 100, Operation = None, and Add Constant to the amount you want to reduce by. In my case it is -9,050. This should give me a 150 background when the operation is complete. Click on OK and the operation completes. You can now check by moving the cursor over the three reference areas and seeing if they are in the ballpark of 125 to 175. If not, simply go to Edit/Undo Pixel Math and redo the operation with a revised "Add Constant" amount. Do this for all three of the R, G, B frames. Just be sure you use the same three locations for all frames to get your background counts.

After you normalize your R, G, B frames you can combine them into a master RGB image. I combine this RGB image with my luminance image in PS layers. I know this is long and I am no expert in image processing. Some on this group may wish to educate me as to a better method of doing this. This is just what I use, and I learned a lot by trial and error. (It seems like more error than anything else.) Use this, and if you like it, all the better. If it doesn't work for you, disregard or modify it. If you find something else works better, please let me know so I can use it. Don Waid

Here ya go:

1. Open all three R, G and B master FITS frames in Maxim.
2. Start with the Red frame.
3. Open up the information window: VIEW > INFORMATION or Ctrl-I.
4. Set mode to APERTURE.
5. Right click on the image and set APERTURE RADIUS to 10 pixels.
6. Now grab your notebook or a piece of paper.
7. Find three spots on the image that are obvious background areas, largely unaffected by your object and stars. You may need to adjust your APERTURE RADIUS to accommodate a very busy image.
8. Note in the information box the average ADU count for those three areas. You don't have to use three; you could just use two or even one. What I normally do is just scan the image for all the background areas and get a 'feel' for the background ADU count, then decide on a number that I think represents an accurate ADU background number. I think our brains can do a better job than the computer at figuring out what is background and what is not, based on what we see with our own eyes. Remember to stay away from obvious gradients, hot pixels and dark areas while doing this. Once you get the hang of it, you will be a background ADU expert. ;)
9. Pick a number and write it down in your notebook. Round up to the nearest 50. So let's say you look around three areas of the image and they all hover around 5012-5055: write down 5050 as the background ADU for that image.
10. Next open up pixel math: PROCESS > PIXEL MATH.
11. Image A should be the Red frame we are working on.
12. Image B should also be the Red frame.
13. Operation is add.
14. Add constant should be 100 less than the ADU number we came up with. So if we came up with 5050, the add constant should be -4950.
15. Select OK.
16. Now go to the corrected (normalized) image and confirm that the areas you were studying all hover around 100 ADU. +/- 20% is expected.
17. Now repeat these steps for the Green and Blue images, using the same areas in which you measured ADU on the red frame. You don't want to use new areas in these images, as that would defeat the purpose of getting all three images normalized.
18. Now you have three manually normalized RGB images.
19. Combine those using Mr. Goldman's fine RGB ratios (don't forget to uncheck normalized images when combining), and you will find that color balance comes out very well.
20. When you save the image, make sure to save as a TIFF file and that the file is stretched for 16 bit. (Under SAVE AS, select STRETCH, select LINEAR ONLY, INPUT RANGE as MAX PIXEL, OUTPUT RANGE, 16 BIT.)
21. Import the TIFF into Photoshop and use levels and curves to bring out the image details.

rb Richard A. Bennion Managing Director Ewell Observatory http://www.ewellobservatory.com

One way to manually normalize (per Ron's first book) is to use pixel math. Say that your red channel has a background ADU count of 1000, your green is 1500, and your blue is 800. Then assume you want to bring the values down so _all_ 3 channels are at 100. In Maxim, choose pixel math and select the "add constant" feature. Then plug in a negative value for the red of -900, the green of -1400, and the blue of -700. When you have done this, the background ADU of all three channels will be around 100, and the background should look neutral when you combine the 3 channels (light pollution gradients aside). Actually, what I do is use software on each channel to take care of light pollution gradients first, then do the pixel math.

Another way - the way I think Ron now uses and is in his new book - is just to do the RGB combine and select (click on) normalize background. Then Maxim will do a reasonable job of creating equal background counts for each channel. You would then bring this into PS and do final tweaks on the black point of the histogram of each channel so that the space between the left point and the starting point of each channel's histogram is equal. I prefer this method because it's a hard-core assurance that the background will be neutral. (I have problems distinguishing between dark colors, so I rely on the histogram routine to make sure things are "right".)

Once the background is "neutral", color balance on the "target" becomes easier, since you are now dealing with just the color balance tool (minor tweaks if your RGB combine ratios were correct for your system), or you can adjust the "target" color through histogram changes in the midpoint and white point "pointers" for one or more channels. (After doing either of the above, you may need to go back and tweak the background black points again to maintain a neutral background after the tweaks.) Hope this makes sense… if not, please feel free to ask more questions… maybe I or someone else can explain it better.

Incidentally, the reason that both Richard and I are shooting for a 100 ADU background result is that we don't want to go below the "pedestal" set by SBIG and others, which is usually 100 ADU. If you normalize to a number below the pedestal of 100, the resulting histogram will look clipped (no space between the left point and the starting point of the histogram). Randy Nulman
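
The MaxIm pixel-math procedure above reduces to simple array arithmetic. A numpy sketch: the 100 ADU target and the reuse of identical sample boxes across channels follow the posts above, while the helper name and example coordinates are my own, hypothetical ones:

<code python>
import numpy as np

def normalise_background(channel, boxes, target_adu=100.0):
    """Average the mean ADU of a few hand-picked background boxes, then
    subtract a constant so the background lands at target_adu (kept at or
    above the ~100 ADU pedestal mentioned above). Reuse the SAME boxes
    for R, G and B."""
    background = np.mean([channel[y0:y1, x0:x1].mean()
                          for (y0, y1, x0, x1) in boxes])
    return channel - (background - target_adu)

# e.g. three 20x20 patches of empty sky (hypothetical coordinates):
# boxes = [(10, 30, 10, 30), (400, 420, 50, 70), (200, 220, 600, 620)]
# r, g, b = (normalise_background(c, boxes) for c in (r, g, b))
</code>
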
