
Choosing a Camera for
Astrophotography
Richard S. Wright Jr.
[email protected]
WSP 2015
About Me
11+ Years with Software Bisque
Graphics (OpenGL), Mac/OS X, Mobile (iOS/Android), Camera Add-On
Plug-Ins.
Imaging Nut Case/Evangelist
Paramount MX+ (Pier and dome)
Paramount ME 2 (Portable)
Paramount MyT (on the go)
DSLR & CCD (Mono & OSC)
Astrographs & Camera Lenses
Image Gallery: www.eveningshow.com
Blog: www.eveningshow.com/AccidentalAstro
Twitter: @AccidentalAstro
My Gallery
www.eveningshow.com
See, I do take pictures too.
My Florida “Observatory”
Just another day at the
office…
I’ve written “most” of the Camera Add-On plug-ins, so I have access to a lot of different kinds of cameras, and of course I love to use them, and I “have” to
use them for testing, etc. So this is a topic I feel pretty competent to be talking about. I’m not going to tell you which camera to buy, but rather I’m going
to try to help educate you so that you can be more informed about camera technology as you’re shopping and evaluating your choices.
Choosing a Camera
Mono or Color?
Sensor Size/Dimensions
Cooled… how much?
On Semi*/Sony/Canon
Sensitivity/QE
CMOS vs. CCD
Noise
Full Well Depth
Brand/Reputation/Reliability
Blooming
Back Illuminated
Pixel Size
*Kodak was bought by TrueSense, which was bought by On Semiconductor
Bewildering…
First Principles
Noise & Signal
What is an ADU
Quantum Efficiency
Full Well Capacity
Pixel Size
Blooming
Science always
trumps opinion…
CCD or CMOS
Hammering with a wrench, example of focusing on stars on the near side of a globular cluster.
It’s all about the NOISE…
THIS is the single biggest enemy to good images.
Maximize Signal to Noise
S/N Ratio
The ONLY true
objective metric of
image quality*
Lower the noise
Raise the Signal
EVERYTHING we do is
aimed at increasing S/N
ratio
*Arguably contrast is of nearly equal importance…
Camera, Optics, Mount… they all contribute to the S/N Ratio. When you’re selecting a camera, you’re trying to optimize this among other things.
Contrast is easy to “fudge” in post processing, and is usually artificially tweaked anyway. It is also more dependent on your choice of optics than on
the camera. Reducing noise without destroying data is a much harder task.
Do Audio Demo…
Analog-to-Digital Units
ADU - a count of photons
detected at the pixel level… sort
of
Gain - electrons per ADU
e.g. Gain of 0.7 e⁻/ADU:
1000 photons detected = 1428 ADUs
Gain of 1 e⁻/ADU is Unity Gain
Typically gain is selected to
spread photon counts across a
full 16-bit range (0-65535)
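The arithmetic above as a tiny sketch (the 0.7 e⁻/ADU gain and the 16-bit range come from the slide; the function name is mine):

def electrons_to_adu(electrons, gain_e_per_adu, bit_depth=16):
    # Convert detected electrons to ADU counts for a given gain (e- per ADU),
    # clipped to the camera's digital range (65535 for a 16-bit camera).
    max_adu = 2**bit_depth - 1
    return min(int(electrons / gain_e_per_adu), max_adu)

print(electrons_to_adu(1000, 0.7))   # 1428 ADUs, the slide's example
print(electrons_to_adu(1000, 1.0))   # 1000 ADUs at unity gain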
Quantum Efficiency (QE)
How efficient at turning
photons into electrons
Not every photon is counted!
Color Cameras often < 50%
Mono cameras 60-80%
(even with color filters)
Back illuminated - 95%+
Pixel size also a factor in light
gathering capability!
Used with permission from qsimaging.com
Full Well Capacity
How big is your bucket!
Small Pixels have
smaller capacity
Large Pixels have more
capacity
This is why Gain is
important
How does this affect your
image?
Some tiny pixels, such as those on the new Sony sensors, hold only around 11,000 electrons. Larger pixels, such as those on a 16803, have a FWC of about 100,000 electrons.
Linear Response?
Used with permission from qsimaging.com
Blooming
100% Linear Response
Slightly higher QE
Back Illuminated non
anti-blooming - Best
you can do today
Non anti-blooming… talk about a double negative!
CMOS vs. CCD
CMOS has faster
readout/lower noise
Ideal for video imaging
Economies of scale re DSLRs.
CCDs generally have
better QE at UV and NIR
wavelengths
Both currently have good
read noise
characteristics for long
exposure visible light use
Practical Applications of
what we’ve learned…
Planets?
CMOS Video. We are
done, you can go now,
thanks for coming.
Color or Mono?
Cooled or Not (DSLR)?
Chip size/Pixel Size?
What’s the deal with
these new Sony chips?
Color/Mono Tradeoffs
Color is convenient
Ideal for some situations
Less sensitive - better
suited for fast optical
systems
Mono is more work
More versatile
More sensitive - well suited
for slow optical systems
How do color chips work?
They are really
monochrome sensors
with colored filters over
the pixels
Each 2x2 cell of 4
pixels is part of the
“Bayer Array”.
Typically contains 1
Red, 2 Green, and 1
Blue pixel.
How do color chips work?
Debayer, demosaic, colorize…
all are interpolations
Super Pixel - reduces the
image by 1/2, one pixel per
layer (green is averaged)
VNG - Variable Number of
Gradients - Best for
astrophotography
Bilinear - Works pretty well/
faster
In-camera based algorithms,
best for daylight (proprietary/
patented anyway)
Raw data has this
checkerboard type pattern.
OSC CCDs use this workflow. Some DSLR workflows hide this from you… come see me later for more details on that ;-)
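A minimal numpy sketch of the Super Pixel method described above, assuming an RGGB Bayer layout (the layout and the function name are my assumptions; the actual pattern varies by camera):

import numpy as np

def superpixel_debayer(raw):
    # Super Pixel debayer: each 2x2 Bayer cell (RGGB assumed) becomes one
    # RGB pixel, halving the image dimensions. The two greens are averaged.
    r  = raw[0::2, 0::2].astype(np.float64)
    g1 = raw[0::2, 1::2].astype(np.float64)
    g2 = raw[1::2, 0::2].astype(np.float64)
    b  = raw[1::2, 1::2].astype(np.float64)
    return np.dstack([r, (g1 + g2) / 2.0, b])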
Multiband Imaging
Red
Green
RGB
Blue
Multiband Imaging
L (Luminance)RGB
L contains all three
(RGB)
Much stronger signal
Much better S/N
Low resolution or quality RGB can be combined with a high quality luminance
Binning
Sums neighboring pixels at readout
2x2, 3x3, etc.
Brightens image
Speeds up download
Reduces read noise by 4x
Great for short exposures of dim objects
Lose spatial detail and resolution
If you’re not read-noise limited, why bother?
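A software sketch of 2x2 binning (real CCD binning happens in the readout register, which is why the summed charge pays only one dose of read noise; this numpy version only illustrates the summation, and the function name is mine):

import numpy as np

def bin2x2(img):
    # Sum each 2x2 block of pixels into one; trims odd rows/columns first.
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w].astype(np.float64)
    return img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]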
Binning
Which one of these was binned 2x2?
One test is worth a thousand expert opinions.
What is Narrow Band?
Filters allowing only a “narrow” band of wavelengths through
Ha - Hydrogen Alpha
OIII - Ionized Oxygen
SII - Ionized Sulfur
Nitrogen, Hydrogen
Beta, others…
Light Pollution
Resistant
Moonlight resistant (OIII
not so much…)
Weather limited
imaging!
Mostly Emission Nebulae
Star forming regions in
galaxies (Ha)
What is Narrow Band?
DSLRs Are Perfect for…
Can also work well for…
Modified DSLR
The stock IR filter also blocks Ha (red)
DSLR
Costs MUCH LE$$
Dual Purpose
Higher shutter speeds over full frame CCD*
Competitive for short full frame images (sun, moon, Milky Way)
Simpler processing
Great/Ideal for wide field
Bigger Chips!
vs.
CCD
Cooling/Noise!
Best for long exposures
Monochrome/Narrow Band/RGB
Far more sensitive for dim detail (need shorter exposures)
Better dynamic range
Non-antiblooming are better suited for science
For a color CCD, the only real advantage is the cooling hardware. Some DSLR modders make cooling boxes, and this does help.
Advanced processors separate out the color channels and process them individually anyway.
Two advantages to blooming cameras: linear response and greater sensitivity.
DSLR imaging is a really good gateway to CCD. Starting with CCD is expensive, and it’s more work. Don’t start with the hard stuff.
*Interline CCDs have an “electronic shutter”, and can be very fast indeed.
DSLR vs. CCD
*Single raw 10 minute exposure @ f/3. No calibration,
stretched only.
Looking back, I think if I had calibrated the DSLR image, it probably would have come out a little better.
Cooling
Photos courtesy of Gary Honis: http://dslrmodifications.com
Cooling
It’s all about da’ noise…
All chips have “thermal
signal”… dark current.
Dark current has noise
too, and this adds to your
signal, making it noisy
Typically, every 5 to 6 degrees C of additional cooling cuts dark current in half… along with its associated noise.
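That rule of thumb as a sketch (the 5 to 6 degree halving interval is from the slide; the reference numbers below are made up for illustration):

def dark_current(d_ref, t_ref_c, t_c, halving_c=6.0):
    # Dark current at temperature t_c, given a reference measurement d_ref
    # at t_ref_c and a halving of dark current every ~6 degrees C of cooling.
    return d_ref * 2.0 ** ((t_c - t_ref_c) / halving_c)

# Hypothetical sensor: 0.5 e-/pixel/s at +25 C, cooled to -10 C:
print(dark_current(0.5, 25.0, -10.0))   # ~0.009 e-/pixel/s, roughly 57x lower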
Cooled OSC
Every major vendor offers
a cooled OSC model of
some type
Some of the advantages
of a DSLR
REGULATED Cooling!
Lower dark current/
noise
Dark Libraries
Don’t stretch your darks…
Chip Size
Full Frame (35mm)
APS-C (cropped)
Various CCD flavors
http://www.flicamera.com/pdf/ccdposter.pdf
Chip Size
Pixel Scale
How big are your pixels?
e.g. 4.9 microns
Compare to optics “spot size”
Compare to seeing conditions (almost always the real limiting
factor)
Undersampling results in square stars
Oversampling can be a waste
Bin to reduce read noise or hide tracking problems
Small pixels are not always better
What you want is “Critical Sampling”
Pixel Scale
Compute Pixel Size
Use TheSkyX ;-)
Plate solve
Compute it!
e.g. 4.54 micron pixels,
600mm scope…
(4.54 / 600) x 206.3 =
1.56 arcseconds/pixel
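The same computation as a helper (function name mine; 206.265 is the usual arcseconds-per-radian constant scaled for microns and millimeters):

def pixel_scale(pixel_microns, focal_length_mm):
    # Image scale in arcseconds per pixel.
    return pixel_microns / focal_length_mm * 206.265

print(pixel_scale(4.54, 600))   # ~1.56 arcsec/pixel, the example above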
Pixel Scale
Spot size of your optic
Published specs
Rough formula (for diffraction limited optics):
Airy disk: A ≈ 276 / D
A = Angular diameter in arc seconds
D = Aperture diameter in mm
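And the rough Airy disk formula above as code, under the same assumptions (diffraction limited optics, visible light around 550 nm):

def airy_disk_arcsec(aperture_mm):
    # Approximate Airy disk angular diameter in arcseconds: A ~ 276 / D.
    return 276.0 / aperture_mm

print(airy_disk_arcsec(100))   # ~2.8 arcsec for a 100 mm aperture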
Pixel Scale
Sampling
How many pixels for the
smallest detail/star?
Stars are round, not
square
Nyquist -> digitally
sampling an analog signal.
“2x or greater” - Critically
sampled.
*Star profiles are
sinusoidal in nature… 3
is better.
*Signals with sinusoidal
components must explicitly
be more than 2x sampled.
Wait wait… what is your seeing?
Remember, what’s your trade-off for smaller pixels? Less full well capacity, less sensitivity, worse S/N.
Match camera to optic/
seeing
Fast optics have smaller spot
sizes
Work great with small pixels
Good S/N
Slow optics/long focal length
Don’t over sample the
seeing (blurry mess)
Larger pixels or binning
necessary
e.g. For diffraction limited
optics with sufficient spot
size, say you have 4
arcsecond seeing…
1.3 arc second per pixel is
sufficient.
You would need to have
consistent, steady 1
arcsecond seeing & superb
optics to justify 0.3 arc
second per pixel
sampling.
Small pixels collect less light, so the S/N is going to be worse than for large pixels. This is why they are better suited to faster optical systems: fast f-ratios are inherently better for S/N per pixel.
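Pulling the sampling advice together, a hypothetical helper (the 3-samples target follows the Nyquist slide; the 1.5x tolerance bands are my own illustration, not the author's):

def sampling_report(pixel_microns, focal_length_mm, seeing_arcsec, samples=3.0):
    # Compare the pixel scale to the seeing, targeting ~3 samples
    # across the seeing disk for critical sampling.
    scale = pixel_microns / focal_length_mm * 206.265   # arcsec per pixel
    target = seeing_arcsec / samples                    # ideal arcsec per pixel
    if scale > target * 1.5:
        return scale, target, "undersampled (square stars)"
    if scale < target / 1.5:
        return scale, target, "oversampled; consider binning or larger pixels"
    return scale, target, "close to critically sampled"

# The example above: 4 arcsecond seeing wants ~1.3 arcsec/pixel.
print(sampling_report(4.54, 600, 4.0))   # ~1.56 arcsec/px vs a 1.33 target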
Match your camera to your
optic
Small pixels work best
with fast optics
Large pixels work best
with slow or long focal
lengths
Mono with filters beats OSC (one shot color) hands down, but OSC can compensate with additional exposure time
What’s a microlens?
Catches photons that
hit between pixels.
Increases QE
You want this
Unless you’re doing
UV…
Image courtesy of QSI
Computing Read Noise
Take a BUNCH of bias
frames (exposure
length of zero)
Stack them, use a
rejection algorithm
You’ve just stacked
away all the read noise
and are left with
nothing but the fixed
pattern noise (signal)
from the chip.
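A minimal numpy sketch of that procedure (a simple sigma-clipped mean stands in for “a rejection algorithm”; file loading is omitted and the names are mine):

import numpy as np

def master_bias(bias_frames, sigma=3.0):
    # Stack bias frames, rejecting pixels more than `sigma` standard
    # deviations from the per-pixel median. Read noise averages away;
    # the chip's fixed pattern (signal) remains.
    stack = np.stack([f.astype(np.float64) for f in bias_frames])
    med, sd = np.median(stack, axis=0), np.std(stack, axis=0)
    keep = np.abs(stack - med) <= sigma * sd
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)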
Computing Read Noise
Compute the “Standard
Deviation” of the pixel
values (via PixInsight
shown here)
SD (Sigma) is the
variation from the
average…
It’s a measure of
uncertainty from pixel
to pixel - the noise!
All the SD of the clean master bias tells us is how smooth the chip itself is, relatively speaking… and that fixed pattern is why we want to subtract bias frames anyway…
Computing Read Noise
To get the actual READ noise…
Subtract the clean master from any single bias frame
Measure ITS sigma… ta-da, Read Noise for a single frame
*You will need to offset
the bias by some constant
to avoid negative pixel
values.
Computing Read Noise
Sigma is in ADUs or may be “normalized”
Normalized means values are 0.0 to 1.0
Just multiply by 65535 to get ADUs.
Multiply by gain to get read noise in electrons
e.g. 0.0002562 x 65535 = 16.79 ADUs of read noise.
16.79 x 0.3 gain (from camera specs) =
5 electrons read noise*…
*Starlight Xpress Trius 694; matches the camera vendor’s claimed read noise.
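Continuing the sketch, the measurement itself (the gain and the normalized-sigma example are from the slides; the pedestal constant is the offset mentioned in the earlier footnote):

def read_noise_electrons(single_bias, master, gain_e_per_adu, pedestal=100.0):
    # Sigma of (single bias - master bias), converted to electrons.
    # The pedestal keeps the difference image from going negative.
    diff = single_bias.astype(np.float64) - master + pedestal
    return np.std(diff) * gain_e_per_adu

# Or straight from a normalized (0.0-1.0) sigma, as in the example above:
print(0.0002562 * 65535 * 0.3)   # ~5 electrons read noise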
Computing Dark Current Noise
Ditto the Dark Noise
Stack a bunch of
darks.
Subtract the clean bias
Take any Dark and
subtract the bias from it
as well
Now subtract the single
dark from the dark
master.
I did this on a Sony 694,
and found a 20 minute
exposure had <1 electron
worth of dark noise.
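The same recipe for dark noise as a sketch (names mine; this follows the slide's procedure literally):

def dark_noise_electrons(single_dark, master_dark, master_bias,
                         gain_e_per_adu, pedestal=100.0):
    # Bias-subtract both the single dark and the master dark, difference
    # them, and measure the sigma of the result in electrons.
    single = single_dark.astype(np.float64) - master_bias
    master = master_dark.astype(np.float64) - master_bias
    return np.std(single - master + pedestal) * gain_e_per_adu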
Don’t Need No Darks?
Dark current may not be
uniform per-pixel
Hot pixels just turn into
cold pixels
Cosmetic Correction
(pixel interpolation)
Dither - stack with a
good rejection
technique
You do need a bias frame! Always calibrate with a bias if you’re not shooting darks.
RBI?
16803 is a very popular chip, and a camera using one should have RBI mitigation.
Residual Bulk Image
From “Residual Bulk Image Quantification and Management for a Full Frame CCD Image Sensor”,
by Richard Crisp
For more in-depth info on this topic…
Photon Transfer
James R. Janesick