Some thoughts on selecting projection relay optics when using DSLRs
(and other “lens-less” sensors) for photomicrography.

Charles Krebs
July 17, 2006



This article was written to provide some guidance in selecting optics when digital single-lens-reflex cameras (DSLRs) are used for photomicrography. It may also be of some use when incorporating other types of digital cameras that have no attached lens. While much of what is discussed here is pertinent to fixed-lens point & shoot digital cameras, it is directed at using projection-type relay optics with cameras that allow projection directly onto the image sensor with no attached camera lens.

It is not intended to be a comprehensive dissertation on digital imaging in photomicrography. This document was written to be used together with a spreadsheet titled “relay_micro.xls”, but most of the information is useful without it.
   
__________________________


When using a digital camera for photomicrography, it is important that the sensor size, pixel dimensions, and relay magnification be selected appropriately.

The compound microscope objective forms a circular image of the subject in “space”, located at the aperture of the eyepieces (generally 10mm below the edge of the eyepiece tube). This is usually referred to as the intermediate image (although it is also called the “primary image” in some references). The diameter of the useful part of this image circle is around 20mm. In newer microscope optical systems it can have a diameter of 25mm or more, while with some older objectives it may be 18mm or less. Our goal in digital photomicrography is to take this intermediate image and, with the use of additional optics or perhaps directly, “place” it on the surface of a camera sensor. We must be sure that the sensor characteristics allow it to record the detail resolved by the objective, and that an adequate portion of the intermediate image is recorded.

Microscope viewing eyepieces cover, or “see”, a certain amount of this image and direct it to your eye. This amount is indicated in the specification known as the field number (FN), and is the diameter of the portion of the intermediate image circle that is seen. A typical set of 10X eyepieces will have a field number of about 20mm, so the microscopist using 10X eyepieces will typically see most of the usable image provided by the objective. In photomicrography the intermediate image is projected onto the surface of a camera sensor, typically by using relay optics. If the sensor has a diagonal of 20mm, it closely matches the size of the intermediate microscope image. In this case a 1X magnification for the relay optics could be appropriate. If the sensor has a diagonal of 9mm, it is significantly smaller than the intermediate image. If we want to record most of what is seen through the eyepieces, the intermediate image must be reduced in size by the relay optics in order for it to fit on the sensor; otherwise we will record only a center section of the view we see. In these cases the relay optics will have a magnification value less than 1. If the sensor has a diagonal larger than 20mm, we need to increase the size of the intermediate image with our relay optics so that it matches the size of the sensor. Here the relay optics will have a magnification greater than 1. It should be noted that in practice it is not common for the camera to see “exactly” what is seen through the eyepieces; typically it records a cropped image relative to the eyepiece view. It is up to the microscopist to decide (with the choice of camera sensor size and relay optics) what works best for their purpose.
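
To make this size-matching step concrete, here is a minimal Python sketch (not part of the original spreadsheet) that estimates the relay magnification needed to map a chosen diameter of the intermediate image onto a sensor of a given diagonal. The 20mm intermediate image diameter and the example sensor diagonals are the figures discussed above; the function name is simply for illustration.

    # Rough sketch: relay magnification needed to fit a chosen diameter of the
    # intermediate image onto a sensor of a given diagonal (all values in mm).

    def relay_magnification(sensor_diagonal_mm, intermediate_diameter_mm=20.0):
        """Magnification that maps the chosen intermediate-image diameter onto the sensor diagonal."""
        return sensor_diagonal_mm / intermediate_diameter_mm

    # Example sensor diagonals from the discussion above:
    for name, diagonal in [("20mm sensor", 20.0), ("9mm sensor", 9.0), ("~27mm APS-C DSLR", 27.0)]:
        print(f"{name}: relay magnification of about {relay_magnification(diagonal):.2f}X")
    # 20mm   -> 1.00X (close to a direct fit)
    # 9mm    -> 0.45X (intermediate image must be reduced)
    # ~27mm  -> 1.35X (intermediate image must be enlarged)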

So far we have looked at fitting the size of the image produced by the objective onto the camera sensor. Next we must look at what is required of the sensor pixels (their number, and the actual physical size of each individual pixel) if the sensor is to adequately capture the image we place upon it.

The resolution limit of an objective is determined by its NA (numerical aperture). For our purposes, this can be calculated as follows:

r = 0.61 λ / NA

where:
r is the smallest resolvable distance between two points (the resolution limit),
λ is the wavelength of light (for our purposes here we will use 550 nm), and
NA is the numerical aperture of the objective.

To attain this resolution, the condenser NA must be as large as, or larger than, the NA of the objective. If the condenser NA is smaller than the objective's, then the "system" NA will be the mean of the condenser and objective numerical apertures: (NAobjective + NAcondenser)/2.
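
As a sketch of these two relationships (assuming λ = 550 nm as above), the following Python snippet computes the resolution limit for an objective, optionally using the "system" NA when the condenser NA is smaller than the objective NA. The function name and the sample values are illustrative only.

    # Sketch: diffraction-limited resolution r = 0.61 * wavelength / NA, using the
    # "system" NA when the condenser NA is smaller than the objective NA.

    def resolution_limit_um(na_objective, na_condenser=None, wavelength_nm=550.0):
        if na_condenser is not None and na_condenser < na_objective:
            na_system = (na_objective + na_condenser) / 2.0   # mean of the two NAs
        else:
            na_system = na_objective
        return 0.61 * (wavelength_nm / 1000.0) / na_system    # result in microns

    print(resolution_limit_um(0.40))          # 10/0.40 with matched condenser: ~0.84 micron
    print(resolution_limit_um(0.40, 0.30))    # condenser stopped down to NA 0.30: ~0.96 micron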

The actual size of this "smallest resolvable distance" as it appears in the intermediate image can be determined by taking its calculated dimension and multiplying it by the objective magnification.

Further, the actual size of this resolution limit as measured in the image at the sensor's surface is determined by the amount the intermediate image is enlarged or reduced by any relay optics. (If direct projection is used, that is, if the sensor is positioned so that the intermediate image falls directly onto the sensor surface with no intervening relay optics, then this "magnification" is 1.)

Once we have determined the size of the resolution limit at the sensor, we can then determine the largest physical pixel size that will allow us to record that detail. At a minimum, you need at least two pixels (samples) within this smallest resolvable distance. The Nyquist theorem (or Nyquist-Shannon sampling theorem) shows that you must sample at twice the rate (or greater) of the highest frequency you wish to reproduce.
It can be seen that the most "demanding" commonly used objectives (as far as pixel density and pixel size are concerned) will be of relatively low power and high NA. 4X and 10X PlanApos, with NAs of about 0.16 and 0.40 respectively, are prime examples.
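
Putting the last few steps together, the sketch below follows the chain described above: resolution limit at the specimen, its size in the intermediate image (multiplied by the objective magnification), its size at the sensor (multiplied by the relay magnification), and finally the largest allowable pixel (half of that, per the two-sample Nyquist criterion). The 2.5X relay value used in the examples is only an illustration.

    # Sketch: largest allowable pixel for a given objective and relay magnification.
    # Chain: specimen resolution -> intermediate image -> sensor -> two samples (Nyquist).

    def max_pixel_um(na, objective_mag, relay_mag, wavelength_nm=550.0):
        r_specimen = 0.61 * (wavelength_nm / 1000.0) / na   # microns at the specimen
        r_intermediate = r_specimen * objective_mag         # microns in the intermediate image
        r_sensor = r_intermediate * relay_mag               # microns at the sensor surface
        return r_sensor / 2.0                               # two pixels per resolved distance

    print(max_pixel_um(0.40, 10, 1.0))   # 10/0.40, direct projection (1X): ~4.2 microns
    print(max_pixel_um(0.40, 10, 2.5))   # 10/0.40 with a 2.5X relay:       ~10.5 microns
    print(max_pixel_um(0.16, 4, 2.5))    # 4/0.16 with a 2.5X relay:        ~10.5 microns (equally demanding)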

The required maximum pixel size (the largest pixels allowable) has now been determined. Note that this maximum pixel size is determined solely by the optics that form the image, and is independent of the physical dimensions of the sensor. If the pixels on the sensor are larger than this maximum, then the sensor is not suitable for use with the optics as configured. If this is the case, it means that we need to increase the magnification of the relay optics used.

With the above information, we can then determine the minimum number of pixels required on a sensor of a given size. This is determined by the required physical pixel size and the overall physical dimensions of the camera sensor. The portion of what is seen through the eyepieces that is recorded is also now fixed, based on the sensor dimensions and the magnification of the relay optics.
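
The minimum pixel count follows directly from dividing the sensor's physical width (or height) by the required pixel size. A minimal sketch, where the 22.5mm sensor width is an assumed example value (roughly an APS-C sensor) and the 10.5 micron figure comes from the earlier sketch:

    # Sketch: minimum number of pixels across the sensor width for a required pixel size.
    import math

    def min_pixels(sensor_width_mm, max_pixel_um):
        return math.ceil(sensor_width_mm * 1000.0 / max_pixel_um)

    # Assumed example: a ~22.5mm-wide sensor and the ~10.5 micron pixel from the sketch above
    print(min_pixels(22.5, 10.5))   # about 2143 pixels across the width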

Column D in the spreadsheet is an important number to consider. It shows the diameter of the field that will be recorded for different sensor sizes, based on the information entered. This should be compared to the view seen through the eyepieces. As mentioned above, a typical set of 10X eyepieces will have a field number (FN) of about 20mm: you will see a circle of 20mm diameter from the intermediate image formed by the objective. Depending on the sensor size and relay optics, you will record a certain portion of the intermediate image, but it may be very different from the portion seen through the eyepieces. To avoid confusion with the eyepiece specification, I have labeled the number in Column D “FNOS”, for “field number of sensor”. If you compare this number to the FN (field number) of your eyepieces, you will be able to determine how much of what you actually see will be recorded with any given combination. For example, if the FNOS is 9-10mm, you are recording about half of the diameter of the view as seen through typical 10X eyepieces. If the FNOS is “4”, you will be recording a 4mm diameter circular portion of the intermediate image. This would be just 1/5 the diameter of the view seen through 10X eyepieces, which most people would consider an excessive crop. It is up to you to determine if the FNOS is adequate for your purposes.
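
The FNOS figure itself is simply the sensor diagonal divided by the relay magnification, and comparing it to the eyepiece field number shows what fraction of the viewed diameter is recorded. A small sketch of that comparison, using an assumed APS-C-sized sensor (22.5 x 15mm) and a 2.5X relay as example values:

    # Sketch: "field number of sensor" (FNOS) and the recorded fraction of the eyepiece view.
    import math

    def fnos_mm(sensor_width_mm, sensor_height_mm, relay_mag):
        sensor_diagonal = math.hypot(sensor_width_mm, sensor_height_mm)
        return sensor_diagonal / relay_mag

    eyepiece_fn = 20.0                 # typical 10X eyepieces
    fnos = fnos_mm(22.5, 15.0, 2.5)    # assumed APS-C-sized sensor with a 2.5X relay
    print(f"FNOS of about {fnos:.1f}mm, roughly {fnos / eyepiece_fn:.0%} of the eyepiece field diameter")
    # -> FNOS of about 10.8mm, roughly 54% of the eyepiece field diameter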

For a point of reference… a 2.5X projection eyepiece was something of a standard recommendation when the 35mm film format was commonly used for photomicrography. Based on a 36x24mm “sensor” size, the FNOS with this 2.5X photo-eyepiece is just over 17mm.

You can increase the FNOS by reducing the relay magnification. But care must be taken, as this will also dramatically reduce the physical size of the pixel required to record the finest detail.  When using the most demanding objectives, it is not really possible to have a large FNOS with some of the smaller sensors used in many of the microscope cameras manufactured today.  

It is very interesting to look at the specifications of one of Zeiss’ “high end” microscope cameras, the AxioCam MRc5. The sensor’s physical size is 8.7x6.6mm (2/3”). The pixel size is 3.4x3.4 micron. The number of pixels is 2584 (H) x 1936 (V).

If you plug in the values of a 10/0.40 or 4/0.16 planapo into the spreadsheet, you find that the calculated required pixel size is 3.4 micron, and the number of pixels needed in the x-dimension for the sensor size is 2560. Obviously the Zeiss camera’s specs are no coincidence!


Now let’s look at some practical considerations.

With nearly all older finite tube length microscope optics, the objective did not fully correct all optical aberrations. By design, some final corrections were accomplished in the viewing eyepiece or photo-relay (photo-eyepiece). The most common (but not the only) aberration corrected by eyepieces was CDM (chromatic difference in magnification).

The notable exception here would be the Nikon CF series, which were 160mm tube length but were “fully” chromatically corrected in the objective.

Unfortunately, there was no standardization between manufacturers as to the exact amount and type of correction to be accomplished by the eyepiece. So the safest approach is to use the eyepieces and photo-eyepieces that each manufacturer designed to be used with their objectives. That said, many have gotten satisfactory to excellent results using a mix of manufacturers' optics, but it is a gamble and must be examined on a case-by-case basis.

In practice, this means that users of finite systems need to be concerned that the optics they use to relay the image to the sensor also provide any necessary final image corrections.

In all of the more recent infinity optical systems, the image is fully corrected by the time it reaches the eyepiece. (In some, all correction is done in the objectives; in others, some final correction is accomplished with the tube lens.) As a result, users of infinity optics (and Nikon CF 160mm finite optics) have it a bit easier, in that the relay optics need to be of high quality and appropriate magnification, but no unique aberration corrections are needed. This also gives some users of “infinity” optical systems the option of directly projecting the image formed by the objective (and tube lens) onto the camera sensor with no other intervening optics. Typically these newer systems will provide a usable intermediate image with a diameter larger than the 20mm mentioned above (more in the range of 25mm). In this situation the physical sensor size is an important consideration. With current Canon and Nikon DSLRs the diagonal of the sensor is about 27mm, so the resulting images will need to be cropped. A camera using the “4/3” sensor size has a sensor diagonal of about 21.5mm, providing a very nice fit for the intermediate image. If you use high NA, low power objectives in this manner (1X relay or “direct projection” from the objective), care should be taken that the pixel specifications of the sensor are up to the task. Often you will find that the pixel size and density are not quite sufficient for these demanding objectives. The referenced spreadsheet is helpful in examining this situation.
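
As a quick numerical check of the direct-projection case, the sketch below shows why a demanding objective such as a 10/0.40 can out-resolve typical DSLR pixels at 1X; the 6.4 micron pixel pitch is an assumed value, roughly typical of reduced-frame DSLRs of this period.

    # Sketch: direct projection (relay magnification of 1) with a 10/0.40 objective,
    # comparing the largest allowable pixel with an assumed ~6.4 micron DSLR pixel pitch.

    wavelength_um = 0.55
    na, objective_mag, relay_mag = 0.40, 10, 1.0
    assumed_dslr_pixel_um = 6.4

    r_sensor = (0.61 * wavelength_um / na) * objective_mag * relay_mag   # detail size at the sensor
    max_pixel = r_sensor / 2.0                                           # Nyquist: two pixels per detail

    print(f"Largest allowable pixel: {max_pixel:.1f} microns")           # ~4.2 microns
    print("OK" if assumed_dslr_pixel_um <= max_pixel else "Pixels too large for the full objective resolution")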

When finite optics were the norm, images were typically recorded on film or Polaroid. 4x5” was a common format, and 35mm, with frame dimensions of 24x36mm, was the small format. As a result, the commonly seen photo-eyepieces had magnifications ranging from 2.5X to 10X, to magnify the intermediate image produced by the objective onto the desired film format. There was certainly no need to accommodate formats of 5x7mm and even smaller. There is an Olympus NFK 1.67X that was made to be used with their LB series objectives (and there are likely other low-power photo-eyepieces I am not aware of), but 2.5X is generally the lowest common corrective photo-eyepiece encountered. When a 2.5X photo-eyepiece is used with one of the reduced-frame DSLRs (Canon Rebel, 20D, 30D and Nikon bodies), nearly all of these bodies will meet the required pixel size specs for even the most demanding objectives, and the FNOS will be about 11mm. (Some may find this a bit small, but it should be noted that this is nearly identical to the FNOS obtained with the above-referenced Zeiss microscope camera.) When a reduced-frame DSLR is used with the Olympus NFK 1.67X, the resulting FNOS is about 16-17mm. The “specs” fall just slightly below those required by a 4/0.16 and a 10/0.40 if they are used at maximum NA.

But again, it must be pointed out that these objectives, used at maximum NA, present the most extreme requirements. The vast majority of objectives are much less “demanding”. Even if you have such objectives, you must realistically consider whether they are actually used at maximum NA. Typically the sub-stage diaphragm is closed down at least a small amount. If the “effective” NA used is 90% of maximum, these cameras meet the requirements of these objectives when used with the 1.67X photo-eyepiece.
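
The effect of closing the condenser slightly is easy to see numerically. In the sketch below (an illustration only, again assuming a ~6.4 micron DSLR pixel pitch), reducing the effective NA of a 10/0.40 to 90% of maximum relaxes the pixel requirement enough for the 1.67X photo-eyepiece case:

    # Sketch: pixel requirement for a 10/0.40 used at 90% of maximum NA with a 1.67X relay.

    effective_na = 0.40 * 0.9                     # condenser closed down slightly
    r_specimen = 0.61 * 0.55 / effective_na       # ~0.93 micron at the specimen
    max_pixel = r_specimen * 10 * 1.67 / 2.0      # 10X objective, 1.67X relay, two samples

    assumed_dslr_pixel_um = 6.4                   # assumed reduced-frame DSLR pixel pitch
    print(f"Largest allowable pixel: {max_pixel:.1f} microns")   # ~7.8 microns
    print("Meets the requirement" if assumed_dslr_pixel_um <= max_pixel else "Falls short")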

Many of the inexpensive “eyepiece” digital cameras (tubular, inserted directly into the eyepiece or trinocular tube) use 1/3” or 1/2” sensors. I am not aware of any relay solutions for finite optical systems that would provide the necessary reduction for these small sensors while also providing the required correction or compensation. How important this correction actually is to the final image quality will likely vary with the objectives used, and with the user's tolerance.

Finally, it is important to bring up the issue of Bayer pattern filters. With the exception of the Sigma DSLRs using the Foveon sensor, all current DSLRs use either CCD or CMOS sensors with a Bayer mosaic filter. This is also true for many “dedicated” microscope cameras. Under certain circumstances (such as monochromatic red, green or blue light with monochromatic subjects), resolution and color accuracy can be lower than expected. This is rarely an issue in “conventional” photography, and for most users it will not be a serious concern in photomicrography. However, photographers should always be aware of the characteristics of their equipment, so that if necessary they can make allowance for unusual circumstances. In microscopy, we can occasionally face such conditions. It is sometimes suggested that green light be utilized with achromatic objectives that display high amounts of chromatic aberration; this will often minimize the blurring effect this chromatic aberration can have on the image. Microscopists may encounter slides with specimens that have little “hard” detail and are monochromatically stained. In these situations, the Bayer pattern covered sensor might record less detail than would be expected. This is more likely to occur when working at the minimum sampling limit of two pixels per resolvable detail, which is seldom the case.

If you anticipate that you might frequently encounter these circumstances, you may wish to play it “safe”. When studying possible set-ups with the referenced spreadsheet, consider your pixel size to be 1.4X larger than it physically measures. A worst-case scenario would be using blue or red monochromatic light, where a critical user might want to consider the pixel size to be 2X its actual physical measurement.
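
If you want to build this margin into the calculation, one simple approach is to multiply the physical pixel pitch by a Bayer “safety factor” before comparing it with the largest allowable pixel size. A minimal sketch, using the 1.4X and 2X factors suggested above and an assumed 6.4 micron physical pixel:

    # Sketch: treating a Bayer-filtered pixel as effectively larger for planning purposes.

    def effective_pixel_um(physical_pixel_um, bayer_factor=1.4):
        """bayer_factor: about 1.4 for general use, about 2.0 for red or blue monochromatic light."""
        return physical_pixel_um * bayer_factor

    largest_allowable_um = 10.5                       # e.g. a 10/0.40 with a 2.5X relay (earlier sketch)
    for factor in (1.0, 1.4, 2.0):
        effective = effective_pixel_um(6.4, factor)   # assumed ~6.4 micron physical pixel
        print(f"factor {factor}: effective pixel {effective:.1f} microns ->",
              "fine" if effective <= largest_allowable_um else "too large")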

The following page contains a section with a discussion of this situation:
http://www.olympusmicro.com/primer/digitalimaging/cmosimagesensors.html

These two pages are also full of good digital imaging information that should be understood:
http://micro.magnet.fsu.edu/primer/digitalimaging/digitalimagebasics.html
http://www.microscopyu.com/tutorials/java/digitalimaging/pixelcalculator/index.html