LIDAR Magazine

Point Clouds from Images

LP360 is a family of tools for exploiting point clouds. The fundamental difference between an image and a point cloud is that the point cloud is a three dimensional representation of an environment whereas an image is a two dimensional representation (or, at best, a “two and a half” dimensional representation). A second major difference is that images are most typically represented using a uniform grid structure whereas a point cloud allows a non-uniform point (or “sample”) spacing. This difference is shown in Figure 1. Since the grid is a uniformly spaced structure, it is only necessary to store the values at the grid intersections (for example, Z for an elevation raster or Red-Green-Blue values for an image). The points of a point cloud, on the other hand, are irregularly distributed, so it is necessary to store the full coordinates of each point (for example, X, Y, Z and perhaps other values as well).
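To make the storage difference concrete, the minimal sketch below (using NumPy; the array names, origin and sample values are purely illustrative and not part of LP360) contrasts a gridded elevation raster, where only Z is stored and X/Y are implied by row and column position, with a point cloud, where every sample carries its own X, Y and Z:

```python
import numpy as np

# Raster DSM: a uniform grid. Only Z is stored; X and Y are implied by
# the cell's row/column index together with an origin and a cell size.
cell_size = 1.0                          # metres (illustrative)
origin_x, origin_y = 500000.0, 3800000.0 # upper-left corner (illustrative)
dem = np.array([[101.2, 101.4, 101.9],
                [101.1, 101.6, 102.3],
                [100.9, 101.5, 102.8]])  # Z only, 3 x 3 cells

# The X/Y of any cell can be reconstructed on the fly:
row, col = 1, 2
x = origin_x + col * cell_size
y = origin_y - row * cell_size
z = dem[row, col]
print(f"raster cell ({row},{col}) -> X={x}, Y={y}, Z={z}")

# Point cloud: irregular sample spacing, so every point stores all of
# its coordinates explicitly (plus any extra attributes it may carry).
points = np.array([
    [500000.31, 3799999.87, 101.22],
    [500001.05, 3799998.42, 101.63],
    [500002.77, 3799999.10, 102.84],
])  # columns: X, Y, Z
print("point cloud sample:", points[0])
```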

In general, an image is a compact way of representing a two dimensional view of an object (such as a digital orthophoto) whereas a point cloud is useful for a richer, fully three dimensional view. An image allows planimetric measurements (for example, how far one road intersection is from another without regard to height differences). A point cloud allows fully three dimensional measurements. Of course, a point cloud also allows robust three dimensional “walk around” viewing of a scene.
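As a simple illustration of that difference (the coordinates below are invented for the example), a planimetric measurement uses only X and Y, while a point cloud supports the full three dimensional (slope) distance:

```python
import math

# Two illustrative survey points (X, Y, Z in metres)
p1 = (500010.0, 3800020.0, 101.5)
p2 = (500050.0, 3800050.0, 113.0)

# Planimetric (2D) distance, as measured on an orthophoto
d_2d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])

# Full three dimensional (slope) distance, as measured in a point cloud
d_3d = math.sqrt((p2[0] - p1[0])**2 + (p2[1] - p1[1])**2 + (p2[2] - p1[2])**2)

print(f"planimetric: {d_2d:.2f} m, 3D: {d_3d:.2f} m")
```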

It is possible, through “photogrammetric” techniques, to derive a three dimensional point cloud from two or more overlapping images when the images have been acquired from different vantage points. Perhaps you have examined some of the three dimensional point clouds derived in this manner on the Microsoft Photosynth (www.photosynth.net) web site. This type of synthetic generation of point clouds is becoming more prevalent in the professional mapping and surveying world through newly developed processing algorithms. One that you will no doubt be hearing about is called “Semi-Global Matching” or simply SGM. Algorithms of this sort produce point clouds of the “first reflective” surface visible in the images, and the results are hence often called photo-correlated Digital Surface Models. These algorithms allow high density point clouds to be rapidly extracted from overlapping aerial or terrestrial images.
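The article does not describe any particular implementation, but as a rough sketch of the idea, the open-source OpenCV library ships a semi-global block matching variant (StereoSGBM). Given a rectified stereo pair and known camera geometry, the disparity it produces can be converted to depth, and each valid pixel becomes one point of the derived cloud. The file names, focal length and baseline below are illustrative assumptions:

```python
import cv2
import numpy as np

# Rectified left/right images of the same scene (illustrative file names)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# OpenCV's semi-global block matching variant of SGM
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be a multiple of 16
    blockSize=5,
)
# compute() returns fixed-point disparities with 4 fractional bits
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# With known stereo geometry, disparity converts to depth
focal_px = 2000.0        # focal length in pixels (illustrative)
baseline_m = 0.5         # distance between camera centres (illustrative)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]

# Each pixel with a valid depth becomes one point of the derived point cloud.
print("valid points:", int(valid.sum()))
```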

GeoCue has been working with developers of SGM (and developers of other algorithms for extracting point clouds from images) for the last few years. One of the early providers of this type of algorithm is North West Geomatics Ltd. of Calgary, Canada. North West created the XPro processing software for Hexagon’s ADS-xx series of line sensor cameras. As an add-on for XPro, they have developed an SGM module to generate point clouds. GeoCue worked with North West to modify LP360 to accept these colorized point clouds. Since that development, we have worked with most of the vendors of SGM software (and similar techniques) to ensure that their models display properly in LP360.

Figure 2 depicts a point cloud model of an urban scene. This model was produced from helicopter imagery using a highly customized SGM algorithm developed by the University of Stuttgart, Institute for Photogrammetry (IFP). Note the amazing detail within this model, such as individual roof tiles and clearly resolved solar panels.

SGM models (or photo-correlated digital surface models) are point clouds that generally include colorization “tags” for each point, such as Red-Green-Blue or Near Infrared (NIR) values. These point clouds, which are generally delivered in the industry standard LAS format, can be processed in much the same way as LIDAR data and can be imported directly into LP360. LP360 provides a viewing mode (in all viewing panes) to display by RGB colorization; this is the rendering shown in Figure 2. Under the LP360 Symbology Control (RGB Values tab) are controls for setting the desired display channels and for performing basic image enhancements.
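For readers who want to inspect such a file outside LP360, the short sketch below uses the open-source laspy package (not part of LP360; the file name is an illustrative assumption) to read an RGB-tagged LAS point cloud. It assumes the file uses a point format that carries red/green/blue fields:

```python
import laspy

# Illustrative file name; any LAS point cloud whose point format
# carries red/green/blue fields will do.
las = laspy.read("sgm_point_cloud.las")

print("points:", len(las.points))
print("first point XYZ:", las.x[0], las.y[0], las.z[0])

# RGB values in LAS are stored as 16-bit integers; scale to 0-255 for display.
r, g, b = las.red[0] >> 8, las.green[0] >> 8, las.blue[0] >> 8
print("first point colour (8-bit):", r, g, b)
```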

SGM point clouds will not replace LIDAR data. Because they require multiple image rays per derived point, these point clouds are not very useful for deriving ground models (where LIDAR’s single-ray, multi-return capability is optimal), particularly in the presence of vegetation. The SGM (and related) algorithms also do not perform well where the images lack distinct texture (for example, over sand or very smooth, monochromatic surfaces) or where there are deep shadows. However, one of the exciting new application areas will be 3D models derived from images captured by micro Unmanned Aerial Systems (UAS). A 2 or 3 kg UAS can carry a small camera payload but not a LIDAR system. Thus a UAS carrying a small camera, combined with post-processing SGM algorithms, will be an ideal point cloud source for very localized areas such as highway intersection construction and open pit mining. Figure 3 depicts a 3D point cloud model of a quarry that was collected using a sub-$2,000 camera carried by a Gatewing UAS (www.gatewing.com). Note the excellent detail in the 3D view and the profile of the ground-classified surface. Such data will prove invaluable for rapid computations such as volumetric analysis.
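As a hedged sketch of how such a volumetric computation might look once the point cloud has been gridded into a surface model (the grid values, cell size and reference elevation below are illustrative, and LP360 provides its own volumetric tools), the volume above a reference plane is simply the sum of height differences times the cell area:

```python
import numpy as np

# Gridded surface derived from the point cloud (illustrative elevations, metres)
dsm = np.array([
    [102.1, 103.4, 104.0],
    [101.8, 103.9, 104.6],
    [101.2, 102.7, 103.5],
])
cell_size = 0.5          # metres per grid cell edge (illustrative)
base_elevation = 100.0   # reference plane for the stockpile or pit (illustrative)

# Volume above the reference plane: sum of (height difference x cell area)
heights = np.clip(dsm - base_elevation, 0.0, None)
volume = heights.sum() * cell_size**2
print(f"volume above reference: {volume:.2f} cubic metres")
```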

We are at the dawn of ubiquitous three dimensional imaging, and many uses for this type of data will emerge over the next few years. It is an exciting area of development for the LP360 team, so you can rest assured that new features will continue to be added to make maximum use of this new source of point clouds. Best of all, users under current maintenance contracts will receive these new features as part of their standard software upgrades!

Figure 2: Courtesy University of Stuttgart, Institute for Photogrammetry

Figure 3: Courtesy Gatewing, a Trimble company
