When I was in undergraduate school (majoring in Physics), I had the good fortune of spending a summer at Louisiana State University working on a "big physics" project (gravity wave detection) as a National Science Foundation undergraduate fellow. I roomed with a fellow who was working on a different big physics project. His involved detecting extremely infrequent particles (neutrinos) by placing extremely (there are a lot of extremes in big physics) sensitive detectors thousands of feet underground in salt domes.
The theory was that occasionally a neutrino would undergo a reaction with another particle and, among other particles, release a photon (a light particle). This photon would be detected by a Photo Multiplier Tube (PMT), a device that relies on the photoelectric effect. In this phenomenon, when a photon interacts ("strikes") with an electron in a metal, the electron may be freed from the atom to which it is bound. If this electron is freed into a very strong electric field, it can gain energy, freeing other electrons in a cascading chain reaction (the M, for Multiplier, in PMT) and creating a current. This current can then be detected in the support electronics.
Although the vacuum PMT is still used in specialized applications (where very low noise is necessary), it has been supplanted in commercial systems by its solid state equivalent, the APD (Avalanche Photodiode Detector, Avalanche Photo Diode and a few other variations). The APD is a solid state device that functions in a manner similar to a PMT except that (mainly due to quantum dark noise) it is a very "noisy" device at low signal levels.
An important measure of the effectiveness of a detector is the Quantum Efficiency (QE) or Incident Photon to Converted Electron (IPCE) ratio. This is the ratio of charge carriers produced per incident photon. For example, if one electron is produced per 100 incident photons, the QE is 1%.
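Expressed as a couple of lines of Python (a minimal sketch; the 1-in-100 figure is just the illustrative value above, not a measurement from any particular detector):

```python
# Quantum Efficiency (QE): charge carriers produced per incident photon.
incident_photons = 100    # illustrative value from the example above
electrons_produced = 1

qe = electrons_produced / incident_photons
print(f"QE = {qe:.0%}")   # -> QE = 1%
```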
Let's think about these light particles called photons. Assume we are using an airborne laser scanner with a pulse (peak) power of 2 kilowatts (kW). Now this is all back-of-the-envelope stuff, so we will ignore things such as pulse rise/fall time and so forth. If the pulse time is 5 nanoseconds, then the energy in the pulse is 1 × 10⁻⁵ Joules (J). Let's assume we are using a short wave infrared laser with a wavelength of 1.541 microns (a typical eye safe wavelength used in laser scanning). The frequency is related to the wavelength by the propagation speed of the wave. In our example, we use the speed of light in a vacuum (c), 3 × 10⁸ meters/second. Thus the energy of a single photon in our laser beam is (a tiny bit of quantum physics here):
E = hν = hc/λ = (6.63 × 10⁻³⁴ J·s)(3 × 10⁸ m/s)/(1.541 × 10⁻⁶ m) = 1.29 × 10⁻¹⁹ J
Thus the number of photons (light particles) in our pulse is
# photons = pulse energy/energy per photon = (1 × 10⁻⁵ J)/(1.29 × 10⁻¹⁹ J/photon) = 7.75 × 10¹³ photons!
This is about 78 trillion photons (at least here is one thing larger than the USA national debt)! Now of course, this is the bundle of photons leaving the sensor, not those reflected back to the detector. However, it does emphasize that we have a lot of particles with which to work. All commercial time of flight LIDAR systems today use a scheme of a pulsed laser, a collector (a lens) and an APD to "integrate" the returned signal. The optical path is moved (by aircraft motion and scanning optics) and the next pulse fires. Manufacturers have become so good at the mechanics and electronics that systems producing over 400,000 pulses per second are now the standard. Even so, it is apparent that such a method of collecting range/intensity data is far less efficient than a digital camera. In a 5-return LIDAR system, those 78 trillion emitted photons we computed result in an optimal detection of 5 single point ranges. This seems incredibly inefficient!
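To make the arithmetic above easy to replay, here is a short Python sketch of the same back-of-the-envelope numbers (the 2 kW pulse power, 5 ns pulse width and 1.541 micron wavelength are the illustrative values used above, not the specifications of any particular scanner):

```python
# Back-of-the-envelope photon budget for a single laser pulse.
H = 6.63e-34       # Planck's constant, J*s
C = 3.0e8          # speed of light in vacuum, m/s

pulse_power_w = 2_000        # 2 kW pulse power (illustrative)
pulse_width_s = 5e-9         # 5 ns pulse
wavelength_m = 1.541e-6      # 1.541 micron (eye-safe SWIR)

pulse_energy_j = pulse_power_w * pulse_width_s        # 1e-5 J
photon_energy_j = H * C / wavelength_m                # ~1.29e-19 J
photons_per_pulse = pulse_energy_j / photon_energy_j  # ~7.75e13

print(f"Pulse energy:      {pulse_energy_j:.2e} J")
print(f"Energy per photon: {photon_energy_j:.2e} J")
print(f"Photons per pulse: {photons_per_pulse:.2e}")
```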
A digital camera uses an array of detectors (a Focal Plane Array, FPA) that simultaneously collects millions of photons in a gridded pixel array. Today, CCD and CMOS arrays of 200 million individual collector elements (pixels) are common in high performance aerial imaging systems. Why don't we use this technology in LIDAR systems? Unlike most visible spectrum cameras, where we simply integrate incident light in an array detector (the FPA), in time of flight ranging operations we must know the exact time of the emission of a laser pulse and the exact time of detection. In state-of-the-art sensors, "exact" means timing on the order of tens of picoseconds. After detection, the single element must be reset to prepare for the next detection (the time this takes is called the quenching time). It is technically very difficult to build APDs with these integrated timing circuits as an array rather than as a single diode detector.
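To put that timing requirement in perspective, here is a minimal sketch converting a round-trip timing error into a range error using range = c·Δt/2 (the 10 ps and 50 ps values are purely illustrative, not the specifications of any particular sensor):

```python
# Convert round-trip timing uncertainty into range uncertainty.
C = 3.0e8  # speed of light in vacuum, m/s

for dt_ps in (10, 50):                 # illustrative timing errors
    dt_s = dt_ps * 1e-12
    range_err_cm = C * dt_s / 2 * 100  # /2 because the pulse travels out and back
    print(f"{dt_ps:3d} ps timing error -> ~{range_err_cm:.2f} cm range error")
```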
Nevertheless, research and development on FPA avalanche photo detectors has been marching along. Advanced Scientific Concepts (ASC) of Santa Barbara, California is a leading commercial company developing FPAs. They offer a 128 x 128 InGaAs detector array optimized for 1,570 nm wavelength photons (eye safe lasers). A schematic diagram of this detector is presented as Figure 1.
APDs can be operated in several different modes. The primary modes of interest in our field of LIDAR are linear mode and Geiger mode. In linear mode, the APD is operated below the breakdown voltage and collects a number of photons prior to triggering an event. This mode is effectively the same as the operating mode of a scanning LIDAR system. It provides intensity as well as range. You can think of this mode as equivalent to using a digital camera in the dark with an infrared flash, with the added metadata per pixel of time of flight (or range). For this reason, this mode of operation is often referred to as Flash LIDAR. Operated in this mode, the APD produces a charge carrier for every 100 photons or so. Thus it is fairly sensitive yet has a QE low enough that the dark noise is manageable (dark noise is the production of charge carriers in the absence of any incident photons–it is a quantum devil that cannot be eliminated!).
The second mode of operation is Geiger mode. In this mode, the APD elements in the focal plane array are operated beyond the breakdown voltage of the diode junction. You can think of it as being in a hair trigger state where even a single photon may cause the diode to avalanche. Operated in Geiger mode, an APD can have a quantum efficiency as high as 30%. This means that the APD may trigger with an average of only 3 collected photons! While this seems an amazing result, it has really been around for decades in the aforementioned PMT. It is the conversion to solid state and miniaturization that is the technology revolution.
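As a rough comparison of the two modes, the sketch below inverts the quantum efficiencies quoted above (about 1% in linear mode, up to 30% in Geiger mode, treated here as nominal values) to estimate the average number of collected photons per detection event:

```python
# Average photons collected per detection event is roughly 1 / QE.
modes = {
    "linear mode (QE ~ 1%)":  0.01,   # nominal value from the text
    "Geiger mode (QE ~ 30%)": 0.30,   # nominal value from the text
}

for name, qe in modes.items():
    print(f"{name}: ~{1.0 / qe:.0f} photons per detection")
```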
We have been down in the quantum weeds for several pages now. What, you ask, is the practical significance of all of this? How will the world of kinematic laser scanning be impacted by FPA APD technology?
The most obvious top level advantage is simple efficiency. Rather than a pulse per point, we simply flash the LIDAR and collect an array of points. The second big advantage is that focal plane arrays have a fixed, rigid internal geometry that will allow much higher geometric accuracy than scanning systems. They allow us to take advantage of the well known photogrammetric correction technique of block bundle adjustment.
The advantage of operating in Geiger mode is phenomenal sensitivity. For airborne systems, it can translate into very high altitude flights (Lear jet altitudes) while still employing reasonably small collectors (lenses). Due to rapid quenching (or recycle) times of tens of picoseconds, it can also mean hundreds of returns captured per single flash within a single APD element. It is hard to get a firm idea of practical quenching times, but let's be conservative and say 50 picoseconds. A photon travels about 1.5 cm in this length of time. This means that we could stack multiple returns in a sensor operating in Geiger mode at 1.5 cm intervals. Thinking about the applications in this arena boggles the mind. One of the more exciting to me is the possibility of detecting wires in urban areas for electrical distribution modeling. The real challenge will be developing efficient software that can process the data returned from FPA APDs. It will be several orders of magnitude larger than the data produced by an equivalent standard imaging CCD.
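As a quick sanity check on that return-stacking figure, here is a minimal sketch under the same assumption of a 50 picosecond quenching time (a conservative guess, not a published device specification):

```python
# Distance light covers during one assumed quenching (recycle) interval.
C = 3.0e8                # speed of light in vacuum, m/s
quench_time_s = 50e-12   # assumed 50 ps quenching time

travel_cm = C * quench_time_s * 100
print(f"Light travels ~{travel_cm:.1f} cm during a 50 ps quench interval")  # ~1.5 cm
```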
Operating in linear mode means we have an active infrared digital ranging camera. This will allow the collection of both range and intensity in a single operation for an array of pixels with no moving parts.
So why are scanning lasers still around? Well, there are several challenges facing us before FPA APDs replace scanning systems. The first is cost. A tiny 128 x 128 array is still on the order of $50,000. An additional factor is the small array size. We will really need commercially available FPAs in the area of 512 x 512 pixels before manufacturers will undertake the risk of assembling complete systems. At this small array size, schemes where the array itself is scanned will no doubt be used, negating some of the advantages of the array (we will still have moving parts).
However, I distinctly recall when imaging CCDs were a tiny 128 x 256 pixels and the naysayers of the industry (naturally the film advocates!) predicted that CCDs could just never approach the quality or resolution of film. Now we have a whole generation of teenagers snapping away with a wide variety of CMOS arrays who have never heard of film. This transition took not 50 years but more like seven!
I predict that in perhaps 5 years we will be imaging routinely with FPA APDs and marveling that the prior "generation" could only manage crude, million-point-per-second 3D imaging with a moving, single-detector system!
Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.