Challenges in Aerial LiDAR Processing in Floodplain Studies

A 1.418Mb PDF of this article as it appeared in the magazine complete with images is available by clicking HERE

Federal and state agencies use LiDAR data in a variety of application areas, including floodplain and coastal zone mapping. For example, FEMA uses LiDAR-derived digital elevation models to define flood zone boundaries and to plan remediation and mitigation strategies in floodplain mapping studies. The reliability of flood risk mapping and hydrological modeling depends on the quality of the LiDAR data, which in turn depends on the following factors:
Pulse density (calculated as pulses per unit area, commonly expressed as pulses per square meter)
Grid resolution of the derived terrain model
Spatial accuracy of the collected LiDAR
In addition, the LiDAR-derived topographic data must meet industry-standard guidelines and specifications (see references).
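As a rough illustration of the first factor, average pulse density follows directly from the pulse count and the covered area. The helper below is a hypothetical sketch, not part of any agency specification; formal density reporting follows the cited guidelines:

```python
def pulse_density(pulse_count: int, area_m2: float) -> float:
    """Average pulse density in pulses per square meter."""
    if area_m2 <= 0:
        raise ValueError("area must be positive")
    return pulse_count / area_m2

# Example: 8 million pulses over a 2 km x 2 km tile (4,000,000 m^2)
density = pulse_density(8_000_000, 2_000 * 2_000)  # -> 2.0 pulses/m^2
```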

The aerial LiDAR data collection and processing workflow is well established; however, floodplain studies present a few challenges, especially if the processing is done in an incremental manner. This paper addresses some of these issues and the methods used to remedy them.

Collection and Processing Areas
Historically, aerial LiDAR collection has covered either an administrative unit, such as a county or city boundary, or a natural boundary, such as a watershed. Ideally, processing and generation of deliverables are performed for the entire study area (administrative or natural). As illustrated in Figure 1, processing is performed over the entire collection area.

With respect to floodplain mapping studies, however, the scenario is quite different. Flood studies require high-accuracy digital elevation data for hydraulic modeling of floodplains, which may comprise only a small fraction of the total watershed area. It is normally neither realistic nor advisable to acquire new digital topographic data of just the meandering floodplains, as placing the flight lines and meeting the control point requirements would be cost prohibitive. It is more practical to acquire digital topographic data of the entire administrative or watershed area, but to apply the most rigorous post-processing procedures only within the expected floodplain area plus a buffer zone somewhat larger than the estimated floodplain extent. This is illustrated in Figure 2. As a result, project planning must reconcile the collection parameters with two distinct workflow footprints: the collection/acquisition area and the processing area.

After the data has been collected, preliminary LiDAR processing is normally performed for the entire acquisition area. Preliminary processing involves filtering the data for LiDAR "noise" points, which are extremely high or low points outside the range of realistic elevations for the project area; differentially correcting the data; and assembling it into flight lines by "return layer." All overlapping flight line point data is then matched. The deliverable of the preliminary processing task is a fully calibrated point cloud data set, containing ground, above-ground, water, and noise points, tiled and prepared for delivery in LAS 1.2/1.3 format.
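The noise-filtering step described above can be sketched as a simple elevation-range screen. The point layout and thresholds below are illustrative assumptions, not a vendor workflow; production filtering also considers return geometry and neighborhood statistics:

```python
# Each point is a (x, y, z) tuple; points whose z falls outside the
# plausible elevation range for the project area are flagged as noise.
def split_noise(points, z_min, z_max):
    """Separate plausible returns from extremely high/low noise points."""
    kept = [p for p in points if z_min <= p[2] <= z_max]
    noise = [p for p in points if not (z_min <= p[2] <= z_max)]
    return kept, noise

pts = [(0, 0, 152.1), (1, 0, 150.8), (2, 0, -999.0), (3, 0, 4500.0)]
kept, noise = split_noise(pts, z_min=0.0, z_max=1000.0)
# kept holds the two plausible returns; noise holds the extreme outliers
```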

The fully calibrated point cloud within the processing area is then classified into ground and non-ground points. To obtain bare-earth elevation data, automated and manual post-processing is used to remove points that fall on above-ground features. LiDAR post-processing consists of classifying the first- and last-return data points to filter out vegetation and buildings. This classification is performed only in the processing areas, as shown in Figure 3. Points representing above-ground features (such as trees and buildings) are classified "out" to leave points that represent the ground surface. In this phase, the data undergoes an intensive filtering process to separate the various classes of points. The spatial distribution of the geometrically usable points should be uniform and free from clustering.
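Once points carry ASPRS LAS classification codes (2 = ground, 5 = high vegetation, 6 = building), extracting the bare-earth subset is a straightforward filter. The dictionary layout below is an illustrative assumption; real workflows operate on LAS records:

```python
# ASPRS LAS classification codes: 2 = ground, 5 = high vegetation, 6 = building
GROUND = 2

def bare_earth(points):
    """Keep only ground-classified points; above-ground classes are
    filtered "out" to leave the bare-earth surface."""
    return [p for p in points if p["cls"] == GROUND]

cloud = [
    {"x": 0.0, "y": 0.0, "z": 101.2, "cls": 2},  # ground
    {"x": 0.5, "y": 0.1, "z": 115.7, "cls": 5},  # tree canopy
    {"x": 1.0, "y": 0.2, "z": 109.3, "cls": 6},  # building roof
    {"x": 1.5, "y": 0.3, "z": 101.0, "cls": 2},  # ground
]
ground_pts = bare_earth(cloud)  # two ground points remain
```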

Challenges in Floodplain Processing
Collecting data over a larger area and processing it within selected areas is a logically sound approach, but it creates a few issues. For example, assume a project was performed with the processing areas shown in Figure 3, and the results, including classified ground points and the corresponding DEM, were delivered. The data is subsequently validated through a QC process and the DEM is used in the modeling process. During modeling, additional DEM coverage is found to be necessary for the extended floodplain areas shown in Figure 4 or the additional buffer shown in Figure 5. Whether another collection is contracted or the existing LiDAR is used for the new add-on areas, the add-on data must be processed to a bare-earth surface and new DEMs must be generated. In generating the new DEM, care must be taken not to alter the old DEM, as that existing data has already been validated and used in the modeling.

In such situations, we face the following issues, each of which has a bearing on the processing and the deliverables:
Data processing issues leading to artifacts and data voids
The projection, datum, resolution, and format of the old DEM may be different
The software used to create the old DEM may have undergone changes
The parameters that were used to create the old DEM may not be available

One of the problems encountered in merging the old and new DEMs is the creation of artifacts in certain overlapping areas. Artifacts are regions of anomalous elevations, or oscillations and ripples, within the elevation data. Artifacts are created by a combination of the following:
Systematic errors in the sensor
Environmental conditions
Incomplete post-processing in the generation of bare-earth digital terrain models (DTMs)
Temporal elevation changes, natural or manmade, between the two acquisition dates

In some cases, perceived artifacts may represent real-world terrain conditions or manmade changes, and the hydraulic modeling process can be harmed by over-smoothing LiDAR datasets to remove all apparent artifacts where the old and new terrain data merge. For example, aggressive filtering and subsequent editing of LiDAR data may automatically smooth apparent artifacts and flatten the slope of stream banks. This inadvertent smoothing redefines the actual stream channel geometry needed for hydraulic modeling. Associated with artifacts is the problem of data voids; in fact, the removal of artifacts may create significant data voids. Artifacts within study areas cannot be neglected when they can potentially impact hydraulic modeling. The software and parameters used to create the DEMs have to be analyzed thoroughly before merging the new and old DEMs, so that the old DEM values are not altered and, at the same time, there are no artifacts or data voids where the new DEM joins the old.
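The rule that validated old-DEM values must not be altered can be sketched as a cell-wise merge in which the old DEM always takes precedence and the new DEM fills only the cells the old one does not cover. This is a minimal illustration on nested lists, with None standing in for a raster NoData value such as -9999; it is not the full merge process described in this article:

```python
NODATA = None  # stands in for a raster NoData value such as -9999

def merge_dems(old, new):
    """Cell-wise merge of two aligned grids: validated old-DEM cells are
    kept unchanged; new-DEM cells fill in only where the old DEM is NoData."""
    return [
        [o if o is not NODATA else n for o, n in zip(old_row, new_row)]
        for old_row, new_row in zip(old, new)
    ]

old_dem = [[10.0, 10.5, NODATA],
           [11.0, NODATA, NODATA]]
new_dem = [[10.2, 10.4, 10.9],
           [11.1, 11.3, 11.6]]
merged = merge_dems(old_dem, new_dem)
# merged keeps 10.0, 10.5, 11.0 from the old DEM and fills the rest from the new
```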

Michael Baker Jr., Inc. (Baker) has developed a comprehensive approach for terrain development projects that involve merging datasets from different sources. The process flow is flexible and accommodates different scenarios, obtaining the highest fidelity to the source accuracy while maintaining an appropriate transition between sources in the final terrain model. The process starts by comparing the coverage of the input datasets, overlap areas, acquisition dates, spatial projections, post spacing or resolution, file formats, reported accuracy, and other relevant metadata from the flight reports, if available. This initial inspection is followed by a visual examination of the datasets as shaded relief to assess their overall quality, and by tracing profile lines in the transition areas to evaluate the severity of any detected elevation misalignments. Together, the initial assessment and visual inspection drive the choice of process for developing the merged dataset and the prioritization of the source datasets in the merge, based on acquisition dates, resolution, and quality.

The workflow for producing an elevation model from a single LiDAR dataset is a straightforward process. A Triangulated Irregular Network (TIN) is produced from the LiDAR multipoint data and, if available, breaklines are incorporated. Breaklines are supplemental 3D linear features used to define edges and control smoothness in surface models. The TIN can then be processed into any grid format at a set resolution. The workflow for merging two or more elevation datasets depends on the format and parameters of the input elevation data. Mindful that spatial processing of elevation datasets induces three-dimensional transformations of the data, Baker's workflow includes a step that analyzes the start and end parameters for each input and minimizes the number of processes performed on each dataset to bring all inputs to the resolution and projection required for the project. For example, when classified LiDAR data is available for both elevation datasets to be merged, the data is merged at the point level and then processed into TIN and grid formats. If one of the datasets is a raster DEM, the merge occurs at the DEM level, by bringing both datasets to the projection and resolution required for the project. The same logic extends to merging multiple inputs.
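The branch just described, merging at the point level when both inputs are classified LiDAR and at the DEM level otherwise, can be sketched as a small decision helper. The format labels and dictionary layout are hypothetical, chosen only to illustrate the logic:

```python
def merge_level(dataset_a, dataset_b):
    """Pick the merge level from the input formats: point-level when both
    inputs are classified LiDAR point clouds, DEM-level otherwise."""
    if dataset_a["format"] == "las" and dataset_b["format"] == "las":
        return "point"  # merge point clouds, then build the TIN and grid
    return "dem"        # bring both to a common projection/resolution, then merge

a = {"name": "original collect", "format": "las"}
b = {"name": "add-on area", "format": "dem"}
level = merge_level(a, b)  # -> "dem"
```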

In some cases, stepping occurs between datasets. Stepping is an elevation seam that is visible at the transition line between data sources when the merged terrain is viewed as shaded relief. Stepping results from differences in vertical and horizontal accuracy, as well as resolution differences, between two LiDAR datasets. Stepping may also reflect actual temporal changes in the terrain caused by natural events (e.g., landslides, erosion, mud flows, earthquakes) or human activities (e.g., landfills, quarries, new developments). To address stepping while maintaining the fidelity of the data to its original accuracy, Baker applies a smoothing process restricted to the immediate buffer around the seam line between data sources.
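Buffer-limited seam smoothing can be illustrated in one dimension: inside a narrow buffer the two surfaces are blended with a weight that ramps linearly across the seam, and outside the buffer each source is left untouched. The buffer width and linear weighting here are illustrative assumptions, not Baker's production parameters:

```python
def blend_across_seam(x, seam_x, half_width, z_old, z_new):
    """Elevation at position x along a profile crossing the seam.
    Outside the buffer the original sources are untouched; inside it,
    a linear ramp transitions from the old surface to the new one."""
    if x <= seam_x - half_width:
        return z_old
    if x >= seam_x + half_width:
        return z_new
    w = (x - (seam_x - half_width)) / (2 * half_width)  # 0 -> 1 across buffer
    return (1 - w) * z_old + w * z_new

# A 0.4 m step between sources at seam_x = 100 m, with a 5 m buffer each side:
profile = [blend_across_seam(x, 100.0, 5.0, 50.0, 50.4) for x in (90, 100, 110)]
# edges stay at the source values (50.0 and 50.4); the seam itself is blended
```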

The process developed by Baker enables the creation of DEMs from a single LiDAR collection processed incrementally, or from multiple LiDAR collections, without significantly altering the original DEM. The process also addresses related issues such as artifacts, data voids, and elevation seams.

Terrain data merging for FEMA floodplain studies is unique in its requirements. A reliable process was needed to standardize the development of larger terrain coverage from existing and newly acquired topographic data while maintaining the fidelity of the inputs to existing flood studies and floodplain mapping results. Baker developed a methodology for producing merged terrain datasets that focuses on preserving, as much as possible, the terrain fidelity and elevation accuracy inherited from the input datasets. The quality of the results is further bolstered when the process is applied to multi-date LiDAR data covering the same floodplain area. The process adds value by making it possible to continually mine disparate LiDAR datasets for continuous elevation coverage in future floodplain studies. The ability to develop quality terrain models from different LiDAR datasets provides an opportunity to leverage existing, valuable products in a cost-effective and timely manner.

References
FEMA, Memorandum for Regional Risk Analysis Branch Chiefs, Procedure Memorandum No. 61: Standards for LiDAR and Other High Quality Digital Topography, effective September 27, 2010.
USGS, LiDAR Base Specification Version 1.0, Techniques and Methods 11-B4.
American Society for Photogrammetry and Remote Sensing (ASPRS), ASPRS Guidelines: Vertical Accuracy Reporting for LiDAR Data, Version 1.0, May 24, 2004.

Dr. Srinivasan "Srini" Dharmapuri has over 26 years of extensive, wide-ranging experience within the Geospatial industry; most notably with LiDAR, Photogrammetry, and GIS. He has worked in both the private and public sectors, as well as internationally.

Pascal Akl, GISP, is a skilled GIS data analyst, geodatabase developer, topographic data SME, and RS imagery interpreter. For the last seven years with Baker, Mr. Akl has worked extensively on providing GIS analyses and technical support.
