Random Points: Data Fusion, Part 1


Now that we have a plethora of data collection devices hosted on a single platform, we often hear the term "sensor fusion" bandied about. For example, a mobile laser scanning (MLS) system might contain a position & orientation system (POS), several framing cameras, a 360° camera, several laser scanners and perhaps a straight-down pavement scanner (see Figure 1). I myself think of sensor fusion as the blending of a variety of different sensors on a common bus, using signaling to coordinate their operation. An example might be a transmission line corridor mapper that uses a real-time range finder to precisely fire an oblique camera imaging the insulators on the towers. However, the term is more commonly used to mean the amalgamation of data from the sensors post-acquisition (even if done in real time) rather than the coordination of the sensors themselves.

Often the big advantage of sensor fusion (when the term is used to mean having multiple, disparate sensors on the same platform) is data registration. Since the sensors all share the same absolute position and orientation, a sensor for which an absolute position solution is relatively easy (say, a metric framing sensor) can be used to tie all of the other sensors to the same solution. This use of one sensor as a reference for the other sensors on the same platform is a compelling reason for sensor amalgamation, but it is not sensor fusion in the true sense of the term.

Data fusion (I’ll switch to this term since I like to reserve sensor fusion for the aforementioned real-time cooperation between sensors) is seen as a solution to a broad class of problems. However, I find that these problems are often not very well defined. We need to ask questions about the class of problem we are trying to solve and then back into the types of data fusion that might be appropriate in arriving at a solution.

Data fusion is generally more flexible than "sensor fusion." With true sensor fusion, only the sensors integrated on communicating platforms (not necessarily the same platform) can participate in the fusion process. With data fusion, one need only move the various sources to a common reference system and have some handle on the data accuracy to perform useful combinations.
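As a trivial illustration of that prerequisite, the sketch below (my own, not drawn from any particular production tool) uses the pyproj library to move two hypothetical point sources into a common projected coordinate system before they are combined; the EPSG codes and coordinates are placeholders.

```python
# A minimal sketch (not production code): bring two hypothetical point
# sources into a common projected coordinate system before combining them.
# The EPSG codes and coordinates below are placeholders for illustration only.
import numpy as np
from pyproj import Transformer

def to_common_frame(lon, lat, height, target_crs="EPSG:32616"):
    """Transform geographic coordinates (WGS84) to a common projected CRS."""
    transformer = Transformer.from_crs("EPSG:4326", target_crs, always_xy=True)
    x, y = transformer.transform(lon, lat)
    return np.column_stack([x, y, height])

# Two imagined sources: an airborne LIDAR strip and an MLS pass.
airborne = to_common_frame(lon=[-86.60, -86.61], lat=[34.73, 34.74], height=[210.0, 212.5])
mls      = to_common_frame(lon=[-86.605],        lat=[34.735],       height=[198.2])

# Once in the same frame (and with some handle on per-source accuracy),
# the simplest form of fusion is just a combined point set.
combined = np.vstack([airborne, mls])
print(combined.shape)  # (3, 3)
```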

We also need to consider the wide variety of fusion approaches available and how they fit into the solution. These approaches can range from simply dumping data from different sources into a common bucket to complex combinations of attributes from different sources to form a unified composite (something we have been doing for years in the GIS space under the name "conflation").

It is easy to fuse point cloud data because each point is an independent entity. Figure 2 is an illustration of helicopter LIDAR data (red points) fused with MLS data (orange points) of a highway overpass. The MLS was deployed on a rail line running under the bridge. The orange points are the girders under the bridge imaged by the MLS. The red points are the top side of the bridge deck, imaged from the helicopter scan. The immediate question that comes to mind is how one judges the network (or absolute) and local (relative) accuracy of such merged point clouds. Well, it is actually no different from the problem encountered with multiple passes from the same sensor. Notice in Figure 2 the two distinct lines of data on the bridge deck (red points) from the helicopter-mounted sensor. These two lines should be coincident (they are both on the same surface) but are not, due to positional errors in the sensor solution. You can discern the same type of error in the MLS data where the girders have been imaged. Hence multi-sensor point data fusion is a problem similar to resolving multiple takes from the same sensor.
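To make that "similar problem" concrete, here is a minimal sketch (my own illustration, assuming both clouds are already in a common coordinate frame) that estimates the relative vertical offset between two overlapping point sets by comparing horizontally nearest neighbors; the data and the 0.25 m search radius are hypothetical.

```python
# A minimal sketch, assuming both clouds are already in a common frame:
# estimate the relative (local) vertical offset between two sources in a
# shared overlap region by comparing horizontal nearest-neighbor heights.
import numpy as np
from scipy.spatial import cKDTree

def relative_vertical_offset(cloud_a, cloud_b, max_xy_dist=0.25):
    """Median dz between cloud_a and its horizontal nearest neighbors in cloud_b.

    cloud_a, cloud_b: (N, 3) arrays of x, y, z in the same coordinate system.
    max_xy_dist: only compare points whose horizontal separation is small,
                 so we are (roughly) looking at the same surface.
    """
    tree = cKDTree(cloud_b[:, :2])
    dist, idx = tree.query(cloud_a[:, :2])
    close = dist < max_xy_dist
    dz = cloud_a[close, 2] - cloud_b[idx[close], 2]
    return np.median(dz) if dz.size else float("nan")

# Hypothetical overlapping strips (e.g., two passes over the same bridge deck).
strip_1 = np.random.rand(1000, 3) * [100.0, 100.0, 0.05]
strip_2 = np.random.rand(1000, 3) * [100.0, 100.0, 0.05] + [0.0, 0.0, 0.08]
print(f"relative dz = {relative_vertical_offset(strip_1, strip_2):.3f} m")
```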

One could imagine a visualization where points are colored by network or local "error." This sort of visualization would give a user an easy, visual basis for selecting points when performing data modeling. Unfortunately, we (in the industry) are not very advanced with error propagation in imaging (LIDAR or passive electro-optical, EO) sensors. We usually model error only as it relates to the overall data set. But enough on error; that is a topic for another discussion!
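As a rough sketch of what such a visualization could look like, assuming a per-point error estimate were actually delivered (it rarely is today), one might do something like the following; the points and error values below are synthetic.

```python
# A minimal sketch of the idea above: color each point by an (assumed
# available) per-point error estimate. The points and errors are synthetic.
import numpy as np
import matplotlib.pyplot as plt

points = np.random.rand(5000, 3) * [100.0, 100.0, 10.0]        # hypothetical x, y, z
err_m = np.random.gamma(2.0, 0.02, size=len(points))           # hypothetical per-point error (m)

fig, ax = plt.subplots(figsize=(6, 5))
sc = ax.scatter(points[:, 0], points[:, 1], c=err_m, cmap="viridis", s=2)
fig.colorbar(sc, label="estimated point error (m)")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
plt.show()
```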

A common and useful example of data fusion is combining red-green-blue (RGB) data from EO sensors with points from laser scanners. This provides for rich, colorized visualization contexts. Again, the relative error between the sources must be minimized before fusion or 3D color anomalies occur. Figure 3 illustrates a simple example of MLS data fusion where the point cloud has been colorized by data extracted from a framing camera.
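A bare-bones sketch of how such colorization can work is shown below, assuming a simple pinhole camera with a known pose; a production implementation would also handle lens distortion, boresight calibration and time alignment, all of which are omitted here.

```python
# A minimal sketch of point-cloud colorization, assuming a simple pinhole
# camera with known pose (intrinsics K, world-to-camera rotation R and
# translation t). Distortion and timing effects are ignored.
import numpy as np

def colorize(points, image, K, R, t):
    """Assign RGB from `image` to each 3-D point.

    points: (N, 3) world coordinates
    image:  (H, W, 3) RGB array from the framing camera
    K:      (3, 3) camera intrinsic matrix
    R, t:   world-to-camera rotation (3, 3) and translation (3,)
    """
    cam = (R @ points.T).T + t                 # world -> camera frame
    uvw = (K @ cam.T).T                        # camera -> homogeneous image coords
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = image.shape[:2]
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    rgb = np.zeros((len(points), 3), dtype=image.dtype)
    rgb[valid] = image[v[valid], u[valid]]     # sample the pixel color
    return rgb, valid
```

Points that fall outside the image (or behind the camera) simply remain uncolored, which is one reason overlapping exposures and good relative registration matter so much in practice.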

A question arises as to the appropriate point at which to perform fusion. Should we create "pre-fused" products such as EO-colorized point clouds, perform fusion within exploitation algorithms, or keep different types of data fundamentally segregated, fusing only at the point where the fusion provides some visual or derivative information? Part of this discussion will be deferred to my next column, where I will explore some of these "just in time" ideas with a member of the ESRI imaging team.

But back to the original proposition at the beginning of this missive. Too often we see software and solutions that allow merging of various types of data without much thought to the actual products that are desired. The better approach is to define the ultimately desired result and then see what can be cobbled together in arriving at a solution.

Consider, as an example, impervious surface extraction for a tax base. Impervious surfaces tend to be flat, so we need a sensor that will produce data usable for assessing this parameter. We also know that we often have canopy over the desired surface (say, trees in a parking lot), and thus we will need a sensor that is adept at canopy penetration. A flat, grassy field looks a lot like a paved parking lot but is pervious, so we will also need data that allow discrimination of grass from pavement. This analysis continues until we see which combinations of data sources might yield the desired result.

In our example, photo-correlated point clouds are ruled out since we do not get canopy penetration. We might experiment with hyperspectral classification as a means of discriminating surface types, and so forth. After we have arrived at the types of sensors that need to be deployed, we look at the feasibility of single-platform deployment. Our solution might be a combination of a medium-format framing camera, a hyperspectral camera and a LIDAR system. Deploying on a common platform would be ideal since it would significantly aid in registration of the hyperspectral data.
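To illustrate how such a sensor combination might ultimately be exploited, here is a hedged, rule-based sketch (my own, with purely illustrative thresholds and band choices, not recommendations) that flags impervious cells from a rasterized LIDAR ground model and a co-registered hyperspectral vegetation index.

```python
# A minimal sketch of rule-based fusion for impervious surface mapping,
# assuming a rasterized LIDAR ground surface and a co-registered
# hyperspectral-derived vegetation index (e.g., an NDVI-like ratio).
# Thresholds and inputs are illustrative only.
import numpy as np

def impervious_mask(slope_deg, ndvi, intensity,
                    max_slope=3.0, max_ndvi=0.2, min_intensity=0.05):
    """Flag cells that are flat, non-vegetated, and reflective enough.

    slope_deg:  (H, W) slope of the LIDAR-derived ground surface, degrees
    ndvi:       (H, W) vegetation index from the hyperspectral sensor
    intensity:  (H, W) normalized LIDAR return intensity
    """
    flat = slope_deg < max_slope          # impervious surfaces tend to be flat
    bare = ndvi < max_ndvi                # screens out grass and canopy
    solid = intensity > min_intensity     # crude guard against water and voids
    return flat & bare & solid

# Hypothetical 100 x 100 cell rasters
slope = np.random.rand(100, 100) * 10.0
ndvi = np.random.rand(100, 100)
inten = np.random.rand(100, 100)
print("impervious cells:", impervious_mask(slope, ndvi, inten).sum())
```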

Often the question arises as to the generation of so-called "derived" products. Should ultimate end-use customers receive derived data or should they receive segregated data that is subsequently fused in exploitation environments? For example, a simple orthophoto is a fusion of elevation data with EO data. Should users receive these as separate sources and fuse them "on-the-fly" or are pre-constructed data sets more useful? This will be the subject of the next column.

Note: Data for images in this month’s column were provided courtesy of Terrasolid OY, and Surveying & Mapping, Inc.

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
