Being able to fully manage terabytes of LiDAR data is one of the biggest challenges facing organizations that work with geospatial data today. The value of point cloud data is tremendous, and the availability of sensors to capture it is growing alongside the consumer market's understanding of how to use the data to facilitate and accelerate decision making. With data essentially overflowing, it is becoming more difficult for end-users to manage, distribute, exploit, and visualize their point data from the desktop through to the server and out to the world.
But more importantly, it is not the 'static' data that interests the end user; it's the value-added dynamic information products that can be created on demand from the point cloud data, synthesized with other sources to produce 'real' and 'actionable' geographic information products. To make this data truly actionable, organizations need the ability to automatically find, describe and catalog LiDAR data, giving their customers (internal or external) the ability to discover, view, analyze and download both the data and the value-added derivative information products. That's the 'secret' to managing large volumes of point cloud data.
Complementing the traditional raster and vector data, LiDAR, in the form of point clouds, has recently entered the mainstream as the "third" primary type of geospatial data. Before diving into how to overcome the challenges of processing high volumes of LiDAR data, let’s start with where the majority of LiDAR data originates.
First, we have terrestrial LiDAR systems: the device is stationary and collects points by rotating about both the horizontal and vertical axes. The collected data is often used in engineering applications such as as-built models, as well as in archeological applications to make precise, non-destructive measurements of artifacts.
Another common collection method is airborne LiDAR: systems mounted in an aircraft scan in one direction, perpendicular to the flight line, in a whiskbroom fashion. The application of this data is oriented more toward direct measurement of the earth's surface, in conjunction with GIS, remote sensing or photogrammetric applications such as land cover mapping, hydrology analysis and site selection.
The fastest growing point collection technique is mobile LiDAR. Whether the unit is mounted on a train or an automobile, these systems collect vast amounts of precise data along road and rail corridors for a multitude of real-world applications.
In addition to these traditional survey techniques, advances in computing hardware have allowed for the generation of very dense point clouds from stereo imagery. This technique generates a single return LiDAR-like surface with the advantage of RGB encoding and the ability to use historical air photo data.
As LiDAR surveys grow each year, the sheer volume of data coming in is simply overwhelming, and the most actionable data is often left untouched. The data may therefore be underutilized and rendered ineffective. As a result, many GIS professionals have legitimate concerns about the ROI on their point cloud data: having spent thousands to millions of dollars on collection, they can't do anything with it afterward.
Today, enterprises aim to efficiently centralize, manage and deliver volumes of geospatial data to a large audience in near real time. The challenge with storing and delivering LiDAR data is its size and volume: a typical collection is terabytes in size and consists of billions of points.
LiDAR data is typically stored in LAS files. LAS is a binary format that stores the x, y, z coordinates, the return intensity, and other metadata about each point. The format also allows each point in the cloud to be assigned a class, such as ground, low vegetation, high vegetation, or building, generated in the post-processing step.
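As a point of reference, the open-source laspy Python library (not tied to any product discussed here) can be used to peek inside a LAS file. The sketch below assumes laspy is installed and uses a hypothetical file name.

```python
import numpy as np
import laspy

las = laspy.read("survey_tile_001.las")  # hypothetical file name

# The header carries metadata such as the point count and spatial extent.
print("Points:", las.header.point_count)
print("Extent:", las.header.mins, las.header.maxs)

# Each point record stores x, y, z, return intensity, and a class code
# (ASPRS codes: 2 = ground, 3 = low vegetation, 5 = high vegetation, 6 = building).
ground = np.asarray(las.classification) == 2
print("Ground returns:", int(ground.sum()))
```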
Collecting point data can be expensive, making it paramount to understand the data and get the most out of it in order to ensure a quantifiable ROI.
Once collected, it is important to be able to catalog the data. The ability to discover and serve large data collections is also vital to making LiDAR data truly valuable. Users should be able to automatically catalog their LAS data holdings and search the catalog using queries based on rich metadata and spatial extents. Next, they should be able to review and QA the results of the queries by viewing the data using styles like relief-shaded surfaces. Finally, they should be able to download the data in either the original LAS format or as a raster elevation format to use in desktop applications.
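One way to picture such a catalog is a simple index of LAS headers that can be searched by spatial extent. The sketch below is illustrative only: it assumes the laspy library, and the folder path and query coordinates are hypothetical.

```python
import glob
import laspy

def build_catalog(folder):
    """Harvest header metadata from every LAS file under `folder`."""
    catalog = []
    for path in glob.glob(f"{folder}/*.las"):
        with laspy.open(path) as reader:      # reads the header only
            h = reader.header
            catalog.append({
                "path": path,
                "points": h.point_count,
                "bbox": (h.mins[0], h.mins[1], h.maxs[0], h.maxs[1]),
            })
    return catalog

def query_extent(catalog, xmin, ymin, xmax, ymax):
    """Return entries whose bounding box intersects the search extent."""
    return [e for e in catalog
            if not (e["bbox"][2] < xmin or e["bbox"][0] > xmax or
                    e["bbox"][3] < ymin or e["bbox"][1] > ymax)]

# Hypothetical folder and query coordinates (in the data's projected CRS).
tiles = build_catalog("/data/lidar/county_survey")
for entry in query_extent(tiles, 500000, 3690000, 510000, 3700000):
    print(entry["path"], entry["points"])
```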
With the data catalogued and managed in its 'static' state, we can ignite the power of point clouds by using the data as an input to a spatial model, a spatial 'recipe' for deriving information about the changing earth. For example, if an organization has cataloged 'static' multi-band imagery together with point cloud data, and a consumer would like to identify areas of high erosion along a river bank, an erosion vulnerability model can be created that calculates land cover and slope information 'dynamically' and determines the areas of erosion based on a slope range (calculated directly from the point cloud) and loss of vegetation (dynamically extracted from the multi-band imagery). Delivering such a model over the internet as a web processing service further extends the power of point cloud data. The key to supporting these workflows is that the data is first cataloged and managed.
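As an illustration of such a 'recipe', the sketch below combines slope computed from a point-cloud-derived elevation grid with a vegetation index computed from multi-band imagery. The array inputs, thresholds and function names are assumptions for illustration, not a description of any particular product's model.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope of a gridded elevation surface, in degrees, via finite differences."""
    dz_dy, dz_dx = np.gradient(dem.astype("float64"), cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def erosion_vulnerability(dem, red, nir, cell_size=1.0,
                          slope_range=(15.0, 45.0), ndvi_threshold=0.2):
    """Boolean mask of cells that are both steep and sparsely vegetated."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    slope = slope_degrees(dem, cell_size)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # vegetation index
    steep = (slope >= slope_range[0]) & (slope <= slope_range[1])
    sparse = ndvi < ndvi_threshold
    return steep & sparse
```

The inputs are assumed to be the point-cloud-derived elevation grid and the red and near-infrared imagery bands, already gridded to a matching extent and resolution.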
Once the data is discovered, there are dozens of solutions for displaying and manipulating the points in 3D. Prep tools also allow users to merge multiple elevation data sources into a continuous surface, split, thin and filter large LAS files into more manageable files, and generate a raster surface for visualization and analysis. Beyond this, end-users should be able to create derived products by classifying points as ground, buildings, trees and other features; generating contours, slope, aspect and intervisibility; and performing analyses such as finding vegetation encroachment on a power line corridor or doing change detection on two dates of LiDAR data to identify earthquake damage or illegal building activity.
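For the change detection case, one simple illustration is differencing two epochs of LiDAR-derived surfaces; the function name and the two-metre threshold below are hypothetical, and both epochs are assumed to be gridded to the same extent and resolution.

```python
import numpy as np

def detect_change(dem_before, dem_after, threshold=2.0):
    """Flag cells whose elevation changed by more than `threshold` units."""
    diff = dem_after.astype("float64") - dem_before.astype("float64")
    lost = diff < -threshold     # e.g. collapsed or demolished structures
    gained = diff > threshold    # e.g. new, possibly unpermitted, construction
    return lost, gained
```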
RGB encoded point clouds provide additional value for classification, interpretation and modeling, as well as providing photorealistic 3D scenes. An important aspect of the lifecycle is being able to use distributed processing, especially when it comes to deriving point clouds photogrammetrically. The lifecycle comes full circle when the derivatives of the LiDAR data are again cataloged and made available to more users.
Now that we have covered the full lifecycle for LiDAR data processing, let's highlight an example of how this type of data processing would work in the real world. Last year's tornado outbreak in Alabama caused a tremendous amount of damage to homes and businesses in the region. FEMA, as well as local governments and emergency responders, were required to move quickly to help save lives and restore a sense of normalcy after the devastating tornadoes.
There are significant advantages to using point clouds for disaster planning, response, impact assessment and recovery efforts in this type of situation. By combining a set of assisted feature collection capabilities with quantitative image processing that supports change detection and the extraction of details about the earth, along with simultaneous access to the geospatial data, government agencies like FEMA and other responders can easily unite these sources in a single map view for efficient processing, analysis and sharing in real time.
In any disaster response situation, geospatial data needs to be timely, relevant and actionable, as lives are often at stake. In addition, as we saw in 2011, our world now faces confounding global uncertainties, underscored by an increase in man-made and natural disasters.
LiDAR will continue to grow as a key remote sensing solution for providing situational and geographic awareness to the military and government, as well as to a number of vertical markets. As more methods for collecting this data become available, it is critical to use the right tools for managing and processing these high volumes of data; otherwise, your mission and efforts will be rendered ineffective.
Mladen Stojic is Vice President, Geospatial at Intergraph SG&I. Stojic provides direction in product management and marketing strategies for Intergraph SG&I’s geospatial business unit.