How Does a CCD Camera Work?

Consequently, the dynamic range of a CCD is usually discussed in terms of the minimum and maximum number of electrons that can be imaged. As more light falls onto the CCD, more and more electrons are collected in a potential well, until eventually no further electrons can be accommodated and the pixel is said to be saturated. For a typical scientific CCD this full well capacity is on the order of a hundred thousand electrons or so.

The minimum signal that can be detected is not necessarily one electron corresponding to one photon at visible wavelengths. In fact, there is a minimum amount of electronic noise associated with the physical structure of the CCD, usually amounting to a few electrons per pixel. Thus, the minimum detectable signal is determined by this readout noise.

In the example above, the dynamic range of the CCD is therefore the full well capacity divided by the readout noise. However, the usable dynamic range also depends on the ability of the electronics to fully digitise this range; see the more detailed CCD information for a discussion of electronics resolution.
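As a rough illustration of how these two figures combine, the short sketch below computes the dynamic range and the number of ADC bits needed to digitise it. The full well capacity and readout noise values are assumptions chosen for the example, not specifications of any particular sensor.

```python
import math

# Hypothetical example values -- not from any specific CCD.
full_well_electrons = 100_000   # assumed saturation level of a single pixel
read_noise_electrons = 10       # assumed readout noise floor per pixel

# Dynamic range is the ratio of the largest to the smallest usable signal.
dynamic_range = full_well_electrons / read_noise_electrons

# Number of ADC bits needed so that one digitizer step is no larger than
# the noise floor, i.e. so the electronics can resolve the full range.
adc_bits_needed = math.ceil(math.log2(dynamic_range))

print(f"Dynamic range: {dynamic_range:.0f}:1")
print(f"ADC bits needed to digitise it fully: {adc_bits_needed}")
# -> Dynamic range: 10000:1, ADC bits needed: 14
```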

Linearity. An important consideration for any detector is its ability to respond linearly to the image it views. The eye, by contrast, is not a linear detector except over very small variations in intensity; its response is approximately logarithmic.

If doubling the amount of light falling on a pixel doubles the signal recorded, we say that the detector has a linear response. Such a response is very useful, as there is no need for any additional processing of the image to determine the 'true' relative intensity of different objects in the scene.

Noise. One of the most important aspects of CCD performance is its noise behaviour. There are a number of contributions to the noise of a CCD, briefly listed here. Dark current: the signal generated thermally within the silicon of the device, even in the complete absence of light.

At room temperature, the dark current of a CCD can be as high as thousands of electrons per pixel per second.

Consequently, the full well capacity of each pixel will be reached in a few seconds and the CCD will be saturated.

Dark current can be massively reduced by cooling. For example, moderate cooling can reduce the dark current from thousands of electrons per pixel per second at room temperature to only tens of electrons per pixel per second.

By cooling to sufficiently low temperatures, dark current can be virtually eliminated, falling substantially below one electron per pixel per second.
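The effect of cooling can be sketched using the common rule of thumb that dark current roughly doubles for every several degrees Celsius of warming. The reference dark current, doubling interval, and full well capacity below are illustrative assumptions, not measured values.

```python
# Rule-of-thumb sketch: dark current roughly doubles every ~6-7 degrees C.
# All values here are illustrative assumptions.
dark_current_at_20C = 5000.0    # electrons / pixel / second at room temperature (assumed)
doubling_interval_C = 6.5       # assumed doubling temperature step
full_well = 100_000             # electrons (assumed, as in the earlier example)

def dark_current(temp_C, ref_temp_C=20.0):
    """Approximate dark current at a given sensor temperature."""
    return dark_current_at_20C * 2 ** ((temp_C - ref_temp_C) / doubling_interval_C)

for t in (20, 0, -20, -40, -80):
    dc = dark_current(t)
    seconds_to_saturate = full_well / dc   # time for dark current alone to fill the well
    print(f"{t:4d} C: {dc:10.2f} e-/px/s, dark frame saturates in {seconds_to_saturate:10.1f} s")
```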

A widely used analogy to aid in visualizing the concept of serial readout of a CCD is the bucket brigade for rainfall measurement, in which the intensity of rain falling on an array of buckets varies from place to place, just as the flux of incident photons varies across an imaging sensor (see Figure 5(a)).

The parallel register is represented by an array of buckets, which have collected various amounts of signal water during an integration period. The buckets are transported on a conveyor belt in stepwise fashion toward a row of empty buckets that represent the serial register, and which move on a second conveyor oriented perpendicularly to the first.

In Figure 5(b), an entire row of buckets is shifted in parallel into the reservoirs of the serial register. The serial shift and readout operations are illustrated in Figure 5(c), which depicts the accumulated rainwater in each bucket being transferred sequentially into a calibrated measuring container, analogous to the CCD output amplifier.

When the contents of all containers on the serial conveyor have been measured in sequence, another parallel shift transfers the contents of the next row of collecting buckets into the serial register containers, and the process repeats until the contents of every bucket (pixel) have been measured.
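The readout order captured by the bucket-brigade analogy can be sketched as a toy simulation; the array size and charge values below are arbitrary.

```python
# Toy model of full-frame readout order: a small "parallel register" of charges
# (values are arbitrary), read out row by row through a serial register.
parallel_register = [
    [10, 11, 12, 13, 14],
    [20, 21, 22, 23, 24],
    [30, 31, 32, 33, 34],
    [40, 41, 42, 43, 44],
]   # row 0 is the row adjacent to the serial register

readout_sequence = []
while parallel_register:
    # Parallel shift: the row nearest the serial register is transferred into it,
    # and every remaining row moves one step closer (the "conveyor belt" step).
    serial_register = parallel_register.pop(0)

    # Serial shift: the contents of the serial register are measured one pixel
    # at a time at the output node (the "calibrated measuring container").
    for charge in serial_register:
        readout_sequence.append(charge)

print(readout_sequence)
# -> 10, 11, 12, ..., 44: charges reach the amplifier row by row, pixel by pixel
```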

There are many designs in which MOS capacitors can be configured, and their gate voltages driven, to form a CCD imaging array. As described previously, gate electrodes are arranged in strips covering the entire imaging surface of the CCD. The simplest and most common charge-transfer configuration is the three-phase CCD design, in which each photodiode pixel is divided into thirds, with three parallel potential wells defined by gate electrodes.

In this design, every third gate is connected to the same clock driver circuit. The basic sense element in the CCD, corresponding to one pixel, consists of three gates connected to three separate clock drivers, termed phase-1, phase-2, and phase-3 clocks. Each sequence of three parallel gates makes up a single pixel's register, and the thousands of pixels covering the CCD's imaging surface constitute the device's parallel register. Once trapped in a potential well, electrons are moved across each pixel in a three-step process that shifts the charge packet from one pixel row to the next.

A sequence of voltage changes applied to alternate electrodes of the parallel vertical gate structure moves the potential wells and the trapped electrons under control of a parallel shift register clock. The general clocking scheme employed in three-phase transfer begins with a charge integration step, in which two of the three parallel phases per pixel are set to a high bias value, producing a high-field region relative to the third gate, which is held at low or zero potential.

For example, phases 1 and 2 may be designated collecting phases and held at higher electrostatic potential relative to phase 3, which serves as a barrier phase to separate charge being collected in the high-field phases of the adjacent pixel.

Following charge integration, transfer begins by holding only the phase-1 gates at high potential so that charge generated in that phase will collect there, while charge generated under the phase-2 and phase-3 gates, now both at zero potential, rapidly diffuses into the potential well under phase 1.

Charge transfer progresses with an appropriately timed sequence of voltages being applied to the gates in order to cause potential wells and barriers to migrate across each pixel. At each transfer step, the voltage coupled to the well ahead of the charge packet is made positive while the electron-containing well is made negative or set to zero (ground), forcing the accumulated electrons to advance to the next phase. Rather than utilizing abrupt voltage transitions in the clocking sequence, the applied voltage changes on adjacent phases are gradual and overlap in order to ensure the most efficient charge transfer.

The transition to phase 2 is carried out by applying positive potential to the phase-2 gates, spreading the collected charge between the phase-1 and phase-2 wells, and when the phase-1 potential is returned to ground, the entire charge packet is forced into phase 2.

A similar sequence of timed voltage transitions, under control of the parallel shift register clock, is employed to shift the charge from phase 2 to phase 3, and the process continues until an entire single-pixel shift has been completed. One three-phase clock cycle applied to the entire parallel register results in a single-row shift of the entire array. An important factor in three-phase transfer is that a potential barrier is always maintained between adjacent pixel charge packets, which allows the one-to-one spatial correspondence between sensor and display pixels to be maintained throughout the image capture sequence.
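The clocking sequence just described can be sketched with a deliberately simplified model that treats each gate as a storage slot and hops charge forward to the nearest gate of whichever phase is currently held high. Real devices overlap the clock transitions and obey continuous potential-well physics, which this toy model ignores.

```python
# Simplified model of three-phase charge transfer along one CCD column.
# Each pixel consists of three gates (phases 1, 2, 3). Charge is assumed to
# sit entirely under a high gate and to hop forward when its gate goes low.
N_PIXELS = 4
N_GATES = 3 * N_PIXELS
phase_of_gate = [g % 3 + 1 for g in range(N_GATES)]     # 1, 2, 3, 1, 2, 3, ...

# One charge packet per pixel, initially held under each pixel's phase-1 gate.
charge = [100 * (p + 1) if g == 0 else 0
          for p in range(N_PIXELS) for g in range(3)]

def clock_step(charge, high_phase):
    """Move every packet forward to the nearest gate belonging to the high phase."""
    new = [0] * N_GATES
    for gate, q in enumerate(charge):
        if q == 0:
            continue
        target = gate
        while phase_of_gate[target] != high_phase:
            target = (target + 1) % N_GATES   # wraps at the column end (toy model only)
        new[target] += q
    return new

# One full clock cycle (phase 2 high, then 3, then 1) shifts every packet by one pixel.
for high_phase in (2, 3, 1):
    charge = clock_step(charge, high_phase)

print(charge)   # each packet now sits under the phase-1 gate of the next pixel
```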

Figure 6 illustrates the sequence of operations just described for charge transfer in a three-phase CCD, as well as the clocking sequence for drive pulses supplied by the parallel shift register clock to accomplish the transfer. In this schematic visualization of the pixel, charge is depicted being transferred from left to right by clocking signals that simultaneously decrease the voltage on the positively-biased electrode defining a potential well and increase it on the electrode to the right (Figures 6(a) and 6(b)).

In the last of the three steps (Figure 6(c)), charge has been completely transferred from one gate electrode to the next. Note that the rising and falling phases of the clock drive pulses are timed to overlap slightly (not illustrated) in order to transfer charge more efficiently and to minimize the possibility of charge loss during the shift. With each complete parallel transfer, charge packets from an entire pixel row are moved into the serial register, where they can be sequentially shifted toward the output amplifier, as illustrated in the bucket brigade analogy (Figure 5(c)).

This horizontal serial transfer utilizes the same three-phase charge-coupling mechanism as the vertical row-shift, with timing control provided in this case by signals from the serial shift register clock. After all pixels have been transferred from the serial register for readout, the parallel register clock provides the timing signals for shifting the next row of trapped photoelectrons into the serial register.

Each charge packet in the serial register is delivered to the CCD's output node, where it is detected and read by an output amplifier (sometimes referred to as the on-chip preamplifier) that converts the charge into a proportional voltage. The voltage output of the amplifier represents the signal magnitude produced by successive photodiodes, as read out in sequence from left to right in each row and from the top row to the bottom over the entire two-dimensional array.

The CCD output at this stage is, therefore, an analog voltage signal equivalent to a raster scan of accumulated charge over the imaging surface of the device. After the output amplifier fulfills its function of magnifying a charge packet and converting it to a proportional voltage, the signal is transmitted to an analog-to-digital converter (ADC), which converts the voltage value into the binary code of 0s and 1s necessary for interpretation by the computer.

Each pixel is assigned a digital value corresponding to signal amplitude, in steps sized according to the resolution, or bit depth, of the ADC.

For example, an ADC capable of 12-bit resolution assigns each pixel a value ranging from 0 to 4095, representing 4096 possible image gray levels (2 to the 12th power equals 4096 digitizer steps). Each gray-level step is termed an analog-to-digital unit (ADU).
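The digitization step can be sketched as follows; the bit depth matches the 12-bit example above, while the system gain (electrons per ADU) is an assumed value for illustration only.

```python
# Sketch of the ADC step: converting a pixel's collected charge (in electrons,
# already amplified to a proportional voltage) into an integer ADU value.
ADC_BITS = 12
MAX_ADU = 2 ** ADC_BITS - 1          # 4095 for a 12-bit converter
GAIN_ELECTRONS_PER_ADU = 25.0        # assumed system gain, not a real camera spec

def digitize(electrons):
    adu = round(electrons / GAIN_ELECTRONS_PER_ADU)
    return min(max(adu, 0), MAX_ADU)  # clip to the digitizer's range

for signal in (0, 1_000, 50_000, 150_000):
    print(f"{signal:7d} e-  ->  {digitize(signal):4d} ADU")
# signals above full scale simply clip at 4095 ADU
```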

The technological sophistication of current CCD imaging systems is remarkable considering the large number of operations required to capture a digital image, and the accuracy and speed with which the process is accomplished. The sequence of events required to capture a single image with a full-frame CCD camera system can be summarized as follows: the shutter opens and charge integrates in the photosensitive pixels; the shutter closes; rows are shifted in parallel, one at a time, into the serial register; the serial register is shifted pixel by pixel to the output node, where each charge packet is amplified and converted to a voltage; each voltage is digitized by the ADC; and the digital values are transferred to computer memory and displayed. In spite of the large number of operations performed, more than one million pixels can be transferred across the chip, assigned a gray-scale value at the full bit depth of the ADC, stored in computer memory, and displayed in less than one second.

A typical total time requirement for readout and image display is well under one second. Charge transfer efficiency can also be extremely high for cooled CCD cameras, with minimal loss of charge occurring even over the thousands of transfers required for pixels in regions of the array farthest from the output amplifier.
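As a back-of-the-envelope sketch of why readout fits comfortably within a second, the following estimate divides the pixel count by an assumed digitization rate and adds an assumed per-row shift overhead; none of these numbers are taken from a specific camera.

```python
# Back-of-the-envelope readout time: total pixels divided by the pixel
# digitization rate, plus the parallel (row) shift overhead.
cols, rows = 1392, 1040              # ~1.4 megapixel array (assumed)
pixel_rate_hz = 10e6                 # 10 MHz serial readout rate (assumed)
row_shift_s = 10e-6                  # time to shift one row into the serial register (assumed)

serial_time = cols * rows / pixel_rate_hz
parallel_time = rows * row_shift_s
total = serial_time + parallel_time

print(f"Serial readout: {serial_time * 1e3:.1f} ms")
print(f"Row shifts:     {parallel_time * 1e3:.1f} ms")
print(f"Total:          {total * 1e3:.1f} ms")   # roughly 0.15 s with these assumptions
```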

Three basic variations of CCD architecture are in common use for imaging systems: full-frame, frame-transfer, and interline-transfer (see Figure 7). The full-frame CCD, as referred to in the previous description of the readout procedure, has the advantage of nearly 100 percent of its surface being photosensitive, with virtually no dead space between pixels.

The imaging surface must be protected from incident light during readout of the CCD, and for this reason, an electromechanical shutter is usually employed for controlling exposures. Charge accumulated with the shutter open is subsequently transferred and read out after the shutter is closed, and because the two steps cannot occur simultaneously, image frame rates are limited by the mechanical shutter speed, the charge-transfer rate, and readout steps. Although full-frame devices have the largest photosensitive area of the CCD types, they are most useful with specimens having high intra-scene dynamic range, and in applications that do not require time resolution of less than approximately one second.

When operated in a subarray mode in which a reduced portion of the full pixel array is read out in order to accelerate readout, the fastest frame rates possible are on the order of 10 frames per second, limited by the mechanical shutter. Frame-transfer CCDs can operate at faster frame rates than full-frame devices because exposure and readout can occur simultaneously with various degrees of overlap in timing.

They are similar to full-frame devices in structure of the parallel register, but one-half of the rectangular pixel array is covered by an opaque mask, and is used as a storage buffer for photoelectrons gathered by the unmasked light-sensitive portion. Following image exposure, charge accumulated in the photosensitive pixels is rapidly shifted to pixels on the storage side of the chip, typically within approximately 1 millisecond.

Because the storage pixels are protected from light exposure by an aluminum or similar opaque coating, stored charge in that portion of the sensor can be systematically read out at a slower, more efficient rate while the next image is simultaneously being exposed on the photosensitive side of the chip.

A camera shutter is not necessary because the time required for charge transfer from the image area to the storage area of the chip is only a fraction of the time needed for a typical exposure. Because cameras utilizing frame-transfer CCDs can be operated continuously at high frame rates without mechanical shuttering, they are suitable for investigating rapid kinetic processes by methods such as dye ratio imaging, in which high spatial resolution and dynamic range are important.

A disadvantage of this sensor type is that only one-half of the surface area of the CCD is used for imaging, and consequently, a much larger chip is required than for a full-frame device with an equivalent-size imaging array, adding to the cost and imposing constraints on the physical camera design.

In the interline-transfer CCD design, columns of active imaging pixels and masked storage-transfer pixels alternate over the entire parallel register array. Because a charge-transfer channel is located immediately adjacent to each photosensitive pixel column, stored charge must only be shifted one column into a transfer channel. This single transfer step can be performed in less than 1 millisecond, after which the storage array is read out by a series of parallel shifts into the serial register while the image array is being exposed for the next image.

The interline-transfer architecture allows very short integration periods through electronic control of exposure intervals, and in place of a mechanical shutter, the array can be rendered effectively light-insensitive by discarding accumulated charge rather than shifting it to the transfer channels.

Although interline-transfer sensors allow video-rate readout and high-quality images of brightly illuminated subjects, basic forms of earlier devices suffered from reduced dynamic range, resolution, and sensitivity, due to the fact that approximately 75 percent of the CCD surface is occupied by the storage-transfer channels.

Although earlier interline-transfer CCDs, such as those used in video camcorders, offered high readout speed and rapid frame rates without the necessity of shutters, they did not provide adequate performance for low-light high-resolution applications in microscopy. In addition to the reduction in light-sensitivity attributable to the alternating columns of imaging and storage-transfer regions, rapid readout rates led to higher camera read noise and reduced dynamic range in earlier interline-transfer imagers.

Improvements in sensor design and camera electronics have completely changed the situation to the extent that current interline devices provide superior performance for digital microscopy cameras, including those used for low-light applications such as recording small concentrations of fluorescent molecules.

Adherent microlenses, aligned on the CCD surface to cover pairs of image and storage pixels, collect light that would normally be lost on the masked pixels and focus it on the light-sensitive pixels (see Figure 8). By combining small pixel size with microlens technology, interline sensors are capable of delivering spatial resolution and light-collection efficiency comparable to full-frame and frame-transfer CCDs.

With on-chip microlenses, the effective photosensitive area of interline sensors is increased to a large majority of the surface area. An additional benefit of incorporating microlenses in the CCD structure is that the spectral sensitivity of the sensor can be extended into the blue and ultraviolet wavelength regions, providing enhanced utility for shorter-wavelength applications, such as popular fluorescence techniques employing green fluorescent protein (GFP) and dyes excited by ultraviolet light.

In order to increase quantum efficiency across the visible spectrum, recent high-performance chips incorporate gate structures composed of materials such as indium tin oxide, which have much higher transparency in the blue-green spectral region. Such nonabsorbing gate structures result in quantum efficiency values approaching 80 percent for green light. The past limitation of reduced dynamic range for interline-transfer CCDs has largely been overcome by improved electronic technology that has lowered camera read noise by approximately one-half.

Because the active pixel area of interline CCDs is approximately one-third that of comparable full-frame devices, the full well capacity a function of pixel area is similarly reduced.

Previously, this factor, combined with relatively high camera read noise, resulted in signal dynamic range insufficient for more than roughly 8-bit digitization. High-performance interline cameras now operate with read noise values as low as 4 to 6 electrons, resulting in dynamic range performance equivalent to that of cameras employing full-frame CCDs.
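Using the read-noise range quoted above together with an assumed interline full well capacity (the full well figure is a placeholder, not a quoted specification), the available dynamic range can be expressed in bits as a rough sketch.

```python
import math

# Read noise as quoted above for high-performance interline cameras;
# the full well capacity is an assumed placeholder for a small interline pixel.
read_noise_e = 5.0          # electrons (4-6 e- quoted above)
full_well_e = 18_000        # electrons (assumed)

dynamic_range = full_well_e / read_noise_e
bits = math.log2(dynamic_range)

print(f"Dynamic range: {dynamic_range:.0f}:1  (~{bits:.1f} bits)")
# -> roughly 3600:1, i.e. on the order of 11-12 bits of usable range
```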

Additional improvements in chip design factors, such as clocking schemes, and in camera electronics have enabled increased readout rates. Interline-transfer CCDs now enable megapixel images to be acquired at megahertz readout rates, approximately 4 times the rate of full-frame cameras with comparable array sizes. Other technological improvements, including modifications of the semiconductor composition, are incorporated in some interline-transfer CCDs to improve quantum efficiency in the near-infrared portion of the spectrum.

Several camera operation parameters that modify the readout stage of image acquisition have an impact on image quality.

The readout rate of most scientific-grade CCD cameras is adjustable over a wide range. The maximum achievable rate is a function of the processing speed of the ADC and other camera electronics, which reflects the time required to digitize a single pixel.

Applications aimed at tracking rapid kinetic processes require fast readout and frame rates in order to achieve adequate temporal resolution, and in certain situations, a video rate of 30 frames per second or higher is necessary. Unfortunately, of the various noise components that are always present in an electronic image, read noise is a major source, and high readout rates increase the noise level.

Whenever the highest temporal resolution is not required, better images of specimens that produce low pixel intensity values can be obtained at slower readout rates, which minimize noise and maintain adequate signal-to-noise ratio.

When dynamic processes require rapid imaging frame rates, the normal CCD readout sequence can be modified to reduce the number of charge packets processed, enabling acquisition rates of hundreds of frames per second in some cases. The image acquisition software of most CCD camera systems used in optical microscopy allows the user to define a smaller subset, or subarray , of the entire pixel array to be designated for image capture and display.

By selecting a reduced portion of the image field for processing, unselected pixels are discarded without being digitized by the ADC, and readout speed is correspondingly increased. Depending upon the camera control software employed, a subarray may be chosen from pre-defined array sizes, or designated interactively as a region of interest using the computer mouse and the monitor display.
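The speed benefit of subarray readout can be sketched by comparing how many pixels actually reach the ADC for a full frame versus a small region of interest. The array size, digitization rate, row-shift time, and ROI below are illustrative assumptions, and real cameras add further overheads.

```python
# Sketch: frame time for full-frame versus subarray (region of interest) readout.
# Unselected pixels are discarded without digitization, so the dominant cost
# here is modelled as scaling with the pixels actually sent to the ADC.
full_cols, full_rows = 1392, 1040    # full array (assumed)
roi_cols, roi_rows = 256, 256        # chosen subarray (assumed)
pixel_rate_hz = 10e6                 # ADC digitization rate (assumed)
row_shift_s = 10e-6                  # per-row parallel shift time (assumed)

def frame_time(cols, rows):
    return rows * row_shift_s + cols * rows / pixel_rate_hz

full = frame_time(full_cols, full_rows)
roi = frame_time(roi_cols, roi_rows)
print(f"Full frame:  {full * 1e3:6.1f} ms  (~{1 / full:5.1f} fps)")
print(f"256x256 ROI: {roi * 1e3:6.1f} ms  (~{1 / roi:5.1f} fps)")
```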

In the interline-transfer design discussed above, the alternating strips of masked storage pixels allow for rapid shifting of any accumulated charge as soon as image acquisition is complete. As this process is so rapid, the likelihood of charge smear is removed, and images can be taken in quick succession. However, the masked area makes each pixel effectively smaller, decreasing the sensitivity of the sensor. Microlens arrays can be used to overcome this, increasing the amount of light that can be captured by each active pixel.

Silicon-based CCDs are optimized for photons in the visible wavelength range (approximately 400-700 nm).

However, thicker silicon sensors, called deep-depletion CCD sensors, are able to detect near-infrared (NIR) wavelengths and higher-energy x-rays, as they provide enough material for the generation of a signal charge from these photons, as shown in Figure 4.

Standard and deep-depletion silicon sensors typically consist of a bulk silicon substrate onto which an epitaxial layer is grown. These epitaxial layers are incorporated into a device via a deposition process in which doped silicon is grown onto an existing bulk silicon substrate (Figure 5). The silicon substrate and the doped silicon layer will each be either n-type or p-type silicon. These types of silicon are formed when pure silicon is intentionally doped with different elements to control the electrical, structural, and optical properties of the material.

N-type silicon is formed when pure silicon is doped with arsenic or phosphorus. These elements have 5 electrons in their outer orbital, so they can form the 4 bonds of the silicon lattice and still have one electron free to carry electric current; this gives n-type silicon an excess of negative charge carriers. P-type silicon is doped with boron or gallium, both of which have 3 electrons in their outer orbital. These atoms can still conduct an electric current because they accept electrons from neighboring atoms, leaving positively charged 'holes' that act as the charge carriers.

When creating a CCD semiconductor, the deposited epitaxial layer must be of a different type from the silicon substrate. Therefore, an n-type epitaxial layer is deposited onto a p-type silicon substrate, and vice versa.

This produces high-quality sensors with moderate resistivity that are relatively thin. However, for high quantum efficiency (QE) even further into the red region, devices with an even thicker depletion region are required. The depth of this depletion region is dictated by the operating voltages of the device and the resistivity of the substrate, with higher voltages and higher resistivity generating a thicker depletion region.
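This dependence can be sketched with the standard one-sided-junction approximation for depletion width, W ≈ sqrt(2·ε_Si·μ·ρ·V), which shows the square-root scaling with both bias voltage and substrate resistivity. The mobility, resistivities, and bias voltage below are illustrative assumptions.

```python
import math

# One-sided junction approximation for depletion width:
#     W ~= sqrt(2 * eps_Si * mu * rho * V)
# (built-in potential and other second-order effects are ignored).
EPS_SI = 11.7 * 8.854e-14      # F/cm, permittivity of silicon
MU_HOLE = 480.0                # cm^2 / (V s), hole mobility in lightly doped p-type Si (assumed)

def depletion_width_um(rho_ohm_cm, bias_V):
    w_cm = math.sqrt(2 * EPS_SI * MU_HOLE * rho_ohm_cm * bias_V)
    return w_cm * 1e4          # convert cm to micrometres

for rho in (10, 100, 10_000):  # standard epi versus very high-resistivity substrate
    print(f"rho = {rho:6d} ohm.cm -> W ~= {depletion_width_um(rho, 10):7.1f} um at 10 V bias")
```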

HiRho sensors consist of an epitaxial layer which is grown, via the deposition process, onto a very high-resistivity bulk silicon substrate.


