Resalta Research Technologies
 
TEM products
Transmission Electron Microscopy (TEM), invented in the 1930s by Ernst Ruska, has proven to be an irreplaceable technique for many aspects of science, research, development, medicine, forensics, and a myriad of other applications. The technology has not stood still either, and the TEM field is bristling with new and improved techniques, such as the move to digital imaging, aberration correction, and remote microscopy. In particular, the move from film to digital image acquisition has revolutionized the field. Just 20 years ago, the standard was a tedious process: exposing film (with a limited number of film plates in the microscope), breaking the vacuum and removing the film (in complete darkness), developing the negatives in a darkroom filled with caustic chemicals, and finally making prints. Today, most TEMs are equipped with digital cameras. These cameras allow a virtually unlimited number of images to be taken; the images are immediately available in digital form and can be evaluated on the spot, and transferring them to reports or papers takes just the push of a button. Of course, many different cameras are available, and one of the major tasks when buying a TEM is to determine which camera to buy.
If you already know which camera you prefer, please feel free to explore the options ResAlta has to offer. If you are still deciding which camera is best for you, our Camera Finder tool can suggest a starting point. Its recommendations are based on our experience with a variety of customers with different requirements for field of view, resolution, frame rate, noise, sensitivity, and so on. We hope the suggestions serve you as a guideline; of course, they are just general suggestions, and your specific requirements may differ.
Technology
Multiple parameters affect the choice of camera. Below is a list of the major parameters to help you find your way through the camera maze.

Finding the right camera is about more than just picking the camera with the most pixels. Factors like camera mounting, chip technology, resolution, camera speed, and software need to be taken into account for a decision that meets your requirements. And there is no single "best" camera. Some people need the highest sensitivity for low-dose work; others need the fastest camera for research on dynamic processes. Those two requirements are almost mutually exclusive, and what is the best camera for one field may turn out to be a sub-par camera for another. Below are a few of the parameters you should take into consideration before deciding on a camera.

Camera mounting
Most TEMs, but not all, have two principal positions where a camera can be mounted without interfering with normal work: the so-called "side-mount" and "bottom-mount" positions (sometimes also called "off-axis" and "on-axis"). We prefer the former terms to avoid confusion with, for example, detectors, which can also be mounted off-axis with a meaning different from the camera positions. The position of the camera will be one of the major decisions.
Side-mount cameras are installed on the 35mm port (so named after the old practice of attaching 35mm film cameras there). The port is located above the viewing chamber and provides access to the beam as it leaves the last lens. Because the beam spreads in the viewing chamber on its way to the viewing screen, and the port sits higher up in the column, the camera sees a smaller image than the one presented on the viewing screen. This permits side-mount cameras to have a very large field of view, often larger than the negative. This is their biggest advantage, although it requires certain trade-offs. The image created on the camera phosphor screen is small, which ultimately leads to a lower resolution compared to bottom-mount cameras. The beam must also be intercepted, which means the image ceases to be visible on the viewing screen. Since either the camera scintillator or the camera itself needs to move in and out of the beam, a moving mechanism is required. And because of the limited space in the side-mount position, cooling of the cameras may be restricted. One further aspect is the accuracy of movement. Most cameras use sophisticated background subtraction algorithms to remove contrast that stems from the phosphor or the beam itself. For the highest quality, these reference images must not shift with respect to the actual image by more than one pixel. With side-mount cameras moving over relatively long distances (inches) and pixel sizes on the order of several microns, this is not a trivial task, especially if the phosphor and camera move relative to each other, which is the case in all cameras that use two side ports. The Olympus Soft Imaging cameras that Resalta distributes are the only ones to use a single-port design, where the phosphor and camera are rigidly coupled at all times, resulting in superior images.
Bottom-mount cameras are essentially complementary to their side-mount brethren. They are installed below the viewing chamber, so they do not interfere with the normal TEM work. Due to their position, they see a larger image, which allows the highest resolution, but often limits the field of view. Cooling is usually not an issue, and the bottom-mount position is therefore suitable for deep cooling. Background subtraction is less of a problem for these cameras as they typically do not move.
Resolution
The resolution of TEM cameras is a difficult issue, as it is ultimately limited either by the phosphor of the camera or screen or by the resolution limit of the microscope, depending on the magnification. To understand this, consider a single electron striking the phosphor. The electron excites the phosphor to emit photons, and it can do so multiple times as it is scattered within the phosphor. From this it is already clear that even if the microscope had unlimited resolution, two electrons that hit the phosphor in close proximity may not be distinguishable if they are closer together than the extent of the volume in which the scattering takes place. This volume depends on various parameters, of which the energy of the electron is the most important. In general, the excitation volume ranges from several microns in diameter to several tens of microns at higher acceleration voltages.
If that is the limiting factor, it already provides a fairly accurate description of the pixel size that should be used for imaging the phosphor: it should be on the order of the width of the scattering volume. Much larger pixels lead to loss of resolution, while much smaller pixels merely oversample without adding information. In most cases the pixel size of the CCD chip used can be adjusted to the desired size by the use of optical elements (lenses or fiber optics).
With the pixel size basically determined by the physics of the imaging and the phosphor, the task is then to match the pixel size of the camera chip to the desired effective pixel size. This can be done through optics, either regular lens optics or fiber optics. Lenses have the advantage that they can be designed with more flexibility, but signal attenuation and distortions are a concern. Fiber-optic systems are less flexible in design but provide a more stable solution, as they do not have to be refocused, and any distortions can be taken care of by online and offline correction procedures.
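The matching described above can be sketched numerically. The following is an illustrative calculation; the pixel and scattering-volume sizes are assumed example numbers, not specifications of any particular camera:

```python
# Match a desired effective pixel size at the phosphor to a given CCD chip
# pixel by choosing the magnification of the optical coupling (lens or taper).
# The numbers below are illustrative assumptions, not camera specifications.

def coupling_magnification(chip_pixel_um: float, effective_pixel_um: float) -> float:
    """Magnification (image size / object size) the coupling optics must provide
    so that one chip pixel sees `effective_pixel_um` on the phosphor."""
    return chip_pixel_um / effective_pixel_um

# Example: a 7.4 um chip pixel should image a ~15 um region of the phosphor
# (roughly the width of the electron scattering volume), so the optics must
# demagnify by about 2:1.
m = coupling_magnification(7.4, 15.0)
print(f"required magnification: {m:.2f}")  # ~0.49, i.e. roughly 2:1 demagnification
```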
Lens-coupled vs. Fiber-coupled
Lens-coupled systems are, as already mentioned, more flexible to design and ultimately less expensive to produce. The short working distances often require lenses that are prone to distortions, which then need to be corrected. These systems also need to be focused onto the phosphor screen, which may require regular maintenance to maintain the focus, especially if the camera moves relative to the phosphor.
Fiber-coupled systems are less prone to changes that require maintenance, as the camera-fiber-phosphor block is usually a single piece that does not allow movement. However, to achieve a magnification or de-magnification with a fiber-optic bundle, the fiber needs to be "tapered". In this process the fiber bundle is heated and stretched; the stretching thins all the fibers and reduces the cross section. This is a complicated process with fairly high failure rates, so tapered fibers are expensive. The range of "magnification" is also limited, so a solution may not exist for every CCD chip. Fortunately, most chips have pixel sizes that make the application of fiber optics possible.
As the fiber-optic bundles are arranged in a hexagonal pattern while the pixels are in a square pattern, artifacts are visible in the images. These take the shape of dark lines arranged in a hexagonal format (the so-called "chicken wire"). Because this structure is fixed, it can be completely eliminated through appropriate correction images.
Lenses are usually compound lenses, which means intensity losses at the glass-air interfaces. These can be reduced by anti-reflection coatings, but lens systems usually have lower sensitivity than fiber-coupled systems. Lenses also tend to show more vignetting (the images become darker towards the edges), although in TEM the beam itself is often a stronger source of intensity variation.
Resalta distributes OSIS cameras, where the side-mount cameras are lens coupled, while the bottom-mount cameras are fiber-coupled.
Sensitivity, Noise and dynamic range
As in any physical system, noise is present that degrades the signal. In TEM images, noise can come from the shot noise of the electron beam when the beam is very weak. This noise can be reduced by longer exposure times or stronger beam currents, both of which increase the electron dose on the sample, which often must be kept to a minimum. It is therefore important to reduce instrument noise as much as possible while at the same time increasing the sensitivity of the instrument.
Each electron creates many photons while it is being scattered around in the phosphor film. The scattering reduces the resolution, so a thin film might be used to increase resolution; on the other hand, a thin film reduces the number of photons and thus the signal. OSIS cameras have phosphor screens that are optimized for a given acceleration voltage (i.e., energy) of the primary electrons.
Each CCD chip has a maximum number of charge carriers it can hold. This is the so-called "full-well capacity", expressed as the number of electrons a pixel can hold. Depending on the size of the pixel, this value typically ranges from several tens of thousands to several hundreds of thousands.
Noise is the part of the signal that is not related to the sample. There are several sources for noise:
As the phosphor film is usually sintered and has a grain structure, some areas may be less efficient at producing photons than others. This leads to a visible grain structure when the raw image data are observed. As the structure is fixed, it can be eliminated (together with the "chicken-wire" structure) through carefully acquired reference images.
The pixels on the camera chip, although all produced at the same time, may have slightly different characteristics. Some may be slightly less sensitive than their neighboring pixels. Again, this is a fixed structure, and reference images can eliminate the differences of the chip characteristics.
A significant source of noise is the so-called "dark current". Signal in a CCD chip is created when a photon strikes the silicon in a pixel and creates an "electron-hole pair". One of the two (depending on the type of silicon) is retained until the pixel is read out; the other is discarded. Physics also allows this pair creation to happen through "phonons", i.e., energy transmitted through vibrations of the silicon lattice, in other words heat. The hotter the CCD chip and the longer the exposure, the more such carriers are created and registered as signal. Most TEM cameras therefore cool the chip to reduce this noise contribution.
Readout noise is another source of noise. To read a CCD chip, the charge carriers have to be shifted from one pixel to the next until they reach a read-out gate. At higher speeds, this transfer becomes harder and harder to control, and charge carriers are left behind and end up being measured in the wrong pixels.
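The fixed-pattern contributions above (phosphor grain, chicken wire, pixel-to-pixel gain differences) and the dark-current offset are exactly what the reference images remove. Below is a minimal sketch of a standard dark/gain ("flat-field") correction using NumPy and synthetic data; the actual algorithms in commercial camera software are of course more involved:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard reference-image correction: (raw - dark) / normalized gain.
    `dark` is an exposure with no beam (dark current + offset);
    `flat` is an image of the bare, evenly illuminated phosphor, which
    records the fixed grain, chicken-wire, and pixel-gain structure."""
    gain = flat.astype(float) - dark
    gain /= gain.mean()          # normalize so the correction preserves intensity
    return (raw.astype(float) - dark) / gain

# Tiny synthetic example: a uniform 100-count image modulated by a fixed
# gain pattern, plus a constant dark level.
rng = np.random.default_rng(0)
gain_pattern = 1.0 + 0.1 * rng.standard_normal((8, 8))
dark = np.full((8, 8), 5.0)
flat = 200.0 * gain_pattern + dark
raw = 100.0 * gain_pattern + dark
corrected = flat_field_correct(raw, dark, flat)
# The fixed pattern cancels: `corrected` is flat to numerical precision.
```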
With all this information we can now ask what the dynamic range is, i.e., how many levels of gray the camera can differentiate. If the camera has a well capacity of 100,000 electrons, the simple answer would be that the camera can distinguish 100,000 levels of gray, one for each electron. That may be the right number, but it neglects noise. We must ask how many levels the camera can distinguish such that we can be reasonably sure they are true signal levels, not noise. If the total noise level is, for example, 10 electrons, we cannot be sure about signal differences smaller than that noise level. In this case, the camera can realistically differentiate 100,000 / 10 = 10,000 levels of gray, which corresponds to 13-14 bits of dynamic range.
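The arithmetic in this example can be written down directly. This is a sketch using the illustrative figures from the paragraph above (a 100,000-electron well and 10 electrons of total noise):

```python
import math

def dynamic_range_bits(full_well_e: float, noise_e: float) -> float:
    """Usable dynamic range in bits: log2 of (full-well capacity / total noise)."""
    return math.log2(full_well_e / noise_e)

# The example from the text: 100,000 e- well, 10 e- noise
# -> 10,000 distinguishable gray levels
bits = dynamic_range_bits(100_000, 10)
print(f"{bits:.1f} bits")  # ~13.3 bits, i.e. the 13-14 bit range quoted above
```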
OSIS cameras typically provide a 14 bit dynamic range, which far surpasses that of the human eye (about 6-8 bit).
Chip type (full frame vs. Interline)
CCD chips come in various technologies. One of the major differences is the way they transfer data to the computer. There are principally two different techniques, and it is important to understand them and their implications for imaging.
The first, and original, CCD technology is known as full-frame transfer. It is straightforward: all pixels of the CCD chip are exposed to the signal. Once the pixels have accumulated sufficient signal, they are cut off from further exposure by a shutter, and the image is read out into the computer. During this time, of course, no new signal can be acquired. This technology is simple and has the advantage that nearly 100% of the chip surface can be used for image acquisition, allowing the highest possible sensitivity. The two major drawbacks are that a shutter is required to shield the pixels during read-out, and that read-out is typically fairly slow. These two factors make full-frame chips less desirable for cameras with live imaging capabilities. The potentially higher sensitivity, however, makes this the preferred technology for techniques like cryo microscopy or low-dose microscopy. Most cameras of this type require an external shutter, which for TEMs is usually provided by blanking the beam; this requires a precise timing scheme with the camera and a direct connection to the TEM. OSIS provides this type of chip in their Cantega camera, which is also deeply cooled to further increase sensitivity and reduce noise.
In contrast to full-frame chips, interline transfer chips trade a little sensitivity for higher read-out speeds. In these chips, each line of pixels has a "hidden" neighbor line, which is covered and thus receives no signal. Once the exposed line has accumulated sufficient signal, the information is transferred very quickly to the hidden line, which is then read out during the next exposure of the exposed pixels. Because this internal transfer is so fast, no shuttering is necessary, and higher readout speeds can be achieved.
The obvious drawback is that interline chips trade off active area for read-out speed: the hidden lines take up area that is not sensitive to any signal. Most interline chips therefore have small prismatic lenses over the hidden lines that deflect the light into the neighboring active pixels. This recovers a large part of the otherwise lost signal, but for the highest sensitivity, full-frame transfer chips are the better solution.
OSIS cameras use interline transfer technology on all models but the Cantega. These cameras combine high readout speeds for high frame rates with good sensitivity and signal-to-noise ratios. In fact, most of the cameras (either directly or through binning) provide frame rates of 15 frames per second or higher. Most people find frame rates of 10 frames per second or less incompatible with live work due to the delayed feedback. With their higher frame rates, the OSIS cameras allow live work directly with the camera, i.e., samples can be surveyed and focus and astigmatism corrected directly on the computer screen, significantly simplifying the operation of the TEM.
Read-out speed
As already indicated above, the read-out speed is an important criterion in finding the optimum camera. The pertinent question to ask yourself is: Do I want to work directly with the camera, or do I simply want to replace film? The cross-over between these two options lies at about 10-12 frames per second; cameras with frame rates below this threshold can rarely be used for live work. Experience shows that most people need immediate feedback, within about 100 ms, to work directly with a camera. For example, focusing is typically an interactive process in which the lens excitation is changed and the specimen is observed (either directly or through an FFT). Each change of the lens setting must be evaluated by the user (unless some form of autofocus is employed), and the lens current optimized by changing it again. If the image frequency drops below 10-12 Hz, most people find it difficult to slow their process down to accommodate the lower frame rate. Similar reasoning applies to sample surveys.
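As a rough illustration of why read-out speed caps the frame rate, one can relate the pixel count and the read-out clock to an upper bound on frames per second. The chip size and clock rate below are assumed example numbers, not figures for any specific camera:

```python
def max_frame_rate(n_pixels: int, readout_mhz: float, overhead_ms: float = 0.0) -> float:
    """Upper bound on the frame rate for a chip read out through a single
    output at `readout_mhz` million pixels per second, plus a fixed
    per-frame overhead in milliseconds (shifting, shuttering, etc.)."""
    frame_time_s = n_pixels / (readout_mhz * 1e6) + overhead_ms / 1e3
    return 1.0 / frame_time_s

# Assumed example: a 1k x 1k chip at a 25 MHz pixel clock -> about 24 fps,
# comfortably above the ~10-12 fps live-work threshold discussed above.
fps = max_frame_rate(1024 * 1024, 25.0)
live_capable = fps >= 12
print(f"{fps:.1f} fps, live capable: {live_capable}")
```

Binning (reading out groups of pixels as one) reduces the effective pixel count, which is why, as noted above, some cameras reach live-capable frame rates only through binning.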
If you plan to use the camera regularly for live work on the monitor, you should opt for a camera that provides at minimum 10-12 frames per second; these will most likely be interline transfer cameras, which may exclude the cameras designed for the highest sensitivity. If your goal is simply to replace film and leave all other processes unchanged, a camera with a lower frame rate may be the better choice. One caveat, though: once users see the advantages of digital cameras over film, most will try to use them for live work, and cameras with low frame rates can cause frustration. It is important to evaluate the future use of the camera before making a decision.
Software
Looking at the above explanations, it is clear that TEM cameras are not something that can be put together by just anyone. A lot of technology and physics flows into the design process, and parameters often have to be weighed against each other. For example, larger pixels collect more electrons and thus theoretically permit a better signal-to-noise ratio, but larger pixels also have larger capacitances, and thus larger time constants, which makes them slower to read out. Once the read-out rate drops below 10 frames per second, it becomes almost impossible to work with a camera in live mode, so such cameras are primarily replacements for film.
It is also clear that many of the factors affecting the performance of a camera can be controlled through software. Shading correction (the reference images mentioned above) is a case in point: typically the cameras need several sets of reference images for different operating conditions, and the correct images must be applied, often at high speeds of 20 or 30 frames per second. This requires sophisticated software algorithms, and the right software will shield the user from the complexity and provide superior images. The best solution, of course, is when software and hardware come from the same source: the software engineers have direct access to hardware development and can help produce a superior product. We think that OSIS, with their long experience and cross-fertilization from light microscopy, are the perfect company for this task. Check out the multitude of software products for TEM (the iTEM family of products) and SEM (Scandium products) before you decide which camera to buy.



 
COPYRIGHT 2009-2011 RESALTA RESEARCH TECHNOLOGIES