We have been adding a new “Live View” control to our point cloud exploitation product, LP360. This control allows much easier manipulation of the display of point cloud data based on attributes such as classification and returns. While testing the Returns portion of the control, I was reminded of the very high value of this attribute for exploiting lidar data.
Editor’s note: A 3.237 MB PDF of this article as it appeared in the magazine is available HERE
Recall how multiple return lidar works. When the pulse (a short, small-diameter column of photons) impinges on a material that does not totally block the beam, photons can be reflected from the first surface (say a bit of vegetation canopy) while other photons in the column continue along their original path, perhaps reflecting from an object (say a branch) farther along the beam. This process continues until either all photons are dissipated or no further penetration is possible. If the canopy is not too thick, the final reflective surface reached will be the ground. Meanwhile, the lidar system’s pulse detector is recording the returned photons. Depending on the type of system, it may be digitizing the return energy into a waveform or declaring “returns” based on criteria such as the magnitude of the energy return, the time since the last declared return and so forth. In all but experimental applications, we see the results as a point cloud with an attribute called “Return.” This label tells us which return this is within the ensemble of total returns from the outgoing pulse, such as “Return 4 of 5,” meaning that for this particular outgoing pulse, 5 returns were detected and this is the 4th of those 5. One of the features in specification comparisons of lidar systems is the number of returns possible from the system (more usually being better, of course!).
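To make the labeling concrete, here is a minimal sketch, in pure Python rather than anything from LP360, of how a “Return m of n” label decomposes into the return-number and number-of-returns pair stored per point in the LAS format. The point values are invented for illustration:

```python
# Minimal sketch (not LP360 code): a point's Return attribute is the pair
# (return_number, number_of_returns), as stored per point in the LAS format.
# All coordinates and values below are made up for illustration.

# Each point: (x, y, z, return_number, number_of_returns)
points = [
    (10.0, 20.0, 15.2, 1, 3),  # first return, e.g. top of canopy
    (10.0, 20.0,  9.7, 2, 3),  # intermediate return, e.g. a branch
    (10.0, 20.0,  2.1, 3, 3),  # last return, possibly the ground
    (11.0, 20.5,  2.0, 1, 1),  # single return from an opaque surface
]

def label(point):
    """Format a point's return attribute the way a viewer displays it."""
    _, _, _, r, n = point
    return f"Return {r} of {n}"

print(label(points[0]))  # Return 1 of 3
```

The same pair of fields drives every filter discussed below, which is why the attribute is so cheap to exploit in automated routines.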
Multiple return capability is extremely important in modern lidar data analysis (especially in automated processing routines). This phenomenon is used for applications such as separating not the forest from the trees but rather the forest from the ground! For example, if a point is labeled 1 of 1, then only a single return was detected for that pulse. This indicates that the pulse reflected from a surface opaque to the laser light. These returns are dominated by bare ground, buildings, roads and other light-opaque surfaces. Importantly, in the case of ground, it is an area devoid of overhead structures such as trees and wires. Figure 1 depicts lidar data superimposed over an orthophoto, showing only single returns (1 of 1) in a rich data set containing up to five returns. Note that, as expected, these data are dense in areas of bare earth and quite sparse in areas of trees (also note our cool new Live View interface in LP360!). The data are sparse in the tree areas because most pulses produce multiple returns as the beam penetrates the canopy, and we have filtered these multiple returns out of the display. Of course, this is not a foolproof indicator of ground, as evidenced by the fact that there are some single returns over canopy.
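The single-return selection behind a display like Figure 1 reduces to one comparison per point. A hedged sketch, with an illustrative point layout rather than LP360’s actual data structures:

```python
# Hedged sketch of a "single returns only" filter (the Figure 1 selection).
# Point layout is illustrative: (x, y, z, return_number, number_of_returns).

points = [
    (10.0, 20.0, 15.2, 1, 3),   # first of three returns: canopy
    (10.0, 20.0,  2.1, 3, 3),   # last of three returns
    (11.0, 20.5,  2.0, 1, 1),   # single return: candidate bare earth/building
    (12.3, 21.1,  2.2, 1, 1),   # single return
]

# A 1-of-1 point means the pulse met a surface opaque to the laser.
single_returns = [p for p in points if p[3] == 1 and p[4] == 1]
print(len(single_returns))  # 2
```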
On the other hand, consider returns that are not the last return, such as 1 of 2, 3 of 5 and so forth. In other words, “all but last” returns (recognizing, of course, that 1 of 1 is a last return and must be excluded). These points cannot be ground since we know that, following this return, we will see another return from the same outgoing pulse. That means there will be a point “below” this point, and thus the current point cannot be ground. This is illustrated in Figure 2. Notice how beautifully the trees and the power line (the thin vertical line to the left of the road in the figure) are detected, with no ground points at all. You can imagine how powerful it is to exclude all these points from an automatic ground classifier, since we know they cannot be ground. Of course, once again, this does not ensure that points that do pass the filter are indeed ground, but it does remove points that could confuse the ground classification algorithm.
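The “all but last” test is simply that the return number is less than the total number of returns, which automatically excludes 1-of-1 points. A sketch under the same illustrative point layout as above, not LP360 code:

```python
# Hedged sketch of the "all but last" filter (the Figure 2 selection): keep a
# point only if another return follows it in the same pulse, so it cannot be
# ground. Point layout is illustrative: (x, y, z, return_number, number_of_returns).

points = [
    (10.0, 20.0, 15.2, 1, 3),   # more returns follow: kept
    (10.0, 20.0,  9.7, 2, 3),   # more returns follow: kept
    (10.0, 20.0,  2.1, 3, 3),   # last return: excluded, might be ground
    (11.0, 20.5,  2.0, 1, 1),   # 1 of 1 is also a last return: excluded
]

# return_number < number_of_returns <=> at least one more return follows
not_last = [p for p in points if p[3] < p[4]]
print(len(not_last))  # 2
```

Feeding an automatic ground classifier the complement of this set is what removes the guaranteed non-ground points before classification begins.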
As a final, fun observation, consider Figure 3. Here we have set the filter to show only pulses that have 5 returns. You can clearly see the trajectory of each pulse as it traveled through the canopy of a treed area (circled example in Figure 3). Of course, this is the X, Y projection of the ray, so we do not know which end is the first return and which is the last without examining the same area in the profile (cross-section) view or by coloring the returns by return number (a capability in Live View).
Besides reminding me of the extreme value of multiple return lidar data, this experiment has also reminded me of the value of making tools very easy to use. While we have always had rich filtering tools for examining returns in LP360, they have been fairly awkward to use. Exposing these tools in a live view manner (meaning the display changes as soon as I change a display parameter) tremendously enhances the exploration experience.
The next time someone tells you that data from correlated imagery (so-called multi-ray photogrammetry or dense image matching) is equivalent to lidar data, give them a demonstration of multiple return lidar data in dense canopy!