LIDAR Magazine

An Object-Based Approach to Feature Extraction using LiDAR Data Fusion

In this day and age, the volume of high-resolution remotely-sensed data far exceeds our capacity to process it into actionable datasets. With new sensors being developed with ever higher spatial, spectral, and temporal resolutions, it is difficult to realize the full potential of the available data. Rapidly converting these data into highly accurate and easily ingestible products is imperative because the information they provide improves urban modelling efforts and informs policy decisions concerning green infrastructure, stormwater management, and much more. Traditionally, pixel-based approaches have been used to classify land cover; however, features of interest are rarely captured by a single pixel, and sub-pixel routines also have their limitations. One study found that a pixel-based approach to urban land-cover mapping produced overall accuracies of approximately 63.33% (Myint et al. 2011). To improve the efficiency and accuracy of data production (approximately 94.6% accuracy in our lab), Object-Based Image Analysis (OBIA) has been used to automate feature extraction by grouping pixels together and assessing them as image objects. Using OBIA, it is possible to combine LiDAR point clouds, rasterized LiDAR surface models, high-resolution imagery, and vector datasets to mimic human interpretation of large study areas. This is achieved by analyzing spectral, spatial, textural, and contextual metrics within and among image objects, taking advantage of enterprise parallel processing, and creating expert systems within Trimble’s eCognition.

Automated feature extraction has become more versatile now that it is possible to utilize point clouds directly within OBIA routines. In the past, it was only possible to exploit LiDAR surface models such as Digital Surface Models (DSMs), which are derived from first LiDAR returns, and Digital Terrain Models (DTMs), which are based on last LiDAR returns; statistical metrics could not be calculated from the original point cloud. Although these surface models are certainly effective in differentiating tree canopy from buildings by way of the difference between the DSM and the DTM, they are prone to nonsystematic error, and the full potential of the original point cloud is not realized in LiDAR derivatives. For example, errors in post-collection LiDAR processing can leave large swaths of point cloud data without return attribution. The resulting DSM-DTM difference for these swaths will be zero not because the first and last returns are identical but because the LiDAR derivatives are faulty. Differentiating between tree canopy and buildings is then much less accurate without a reliable DSM-DTM metric to guide image segmentation (grouping pixels based on similar characteristics to create the initial image objects) and the subsequent classification. This error can be circumvented by using the original point cloud in eCognition to create DSMs and DTMs based on the 99th and 1st elevation percentiles, respectively; the 100th and 0th percentiles are omitted to remove spurious high and low points. It is thus possible to classify objects of interest with high accuracy despite incomplete surface models.
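
To make the percentile approach concrete, here is a minimal sketch in Python with NumPy. It assumes the point coordinates have already been loaded into arrays (for example with laspy); the cell size and percentile choices are illustrative placeholders, not production settings, and the gridding scheme is our own rather than eCognition's internal method.

import numpy as np

def percentile_surfaces(x, y, z, cell_size=3.0, hi=99, lo=1):
    # Bin LiDAR points (x, y, z arrays) into grid cells and take the 99th and
    # 1st elevation percentiles per cell as DSM and DTM, so spurious high and
    # low returns (the 100th and 0th percentiles) are excluded.
    col = ((x - x.min()) // cell_size).astype(int)
    row = ((y - y.min()) // cell_size).astype(int)
    n_rows, n_cols = row.max() + 1, col.max() + 1
    dsm = np.full((n_rows, n_cols), np.nan)
    dtm = np.full((n_rows, n_cols), np.nan)
    # Sort points by cell so each cell's elevations form one contiguous run.
    order = np.lexsort((col, row))
    flat = row[order] * n_cols + col[order]
    starts = np.flatnonzero(np.r_[True, np.diff(flat) != 0])
    for s, e in zip(starts, np.r_[starts[1:], flat.size]):
        r, c = divmod(int(flat[s]), n_cols)
        cell_z = z[order[s:e]]
        dsm[r, c] = np.percentile(cell_z, hi)
        dtm[r, c] = np.percentile(cell_z, lo)
    return dsm, dtm, dsm - dtm  # DSM-DTM difference used to guide segmentation

Because the top and bottom percentiles are trimmed, a single spurious high or low return no longer distorts the cell it falls in.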

Another example of point-cloud manipulation in eCognition involves power lines. These features were previously quite difficult to extract using an automated approach because power lines have spectral and spatial characteristics similar to those of tree canopy. However, it is now possible to discriminate these classes by subtracting the standard deviation of elevation within a small elevation range (e.g., 25 ft. to 35 ft.) from the standard deviation within a large elevation range (e.g., 8 ft. to 40 ft.). The difference between these parameters will be zero if the image object contains points within the smaller range of elevation but none outside of it (Figure 1). Essentially, zero indicates a horizontal sliver of points, while a value greater than zero indicates a broader vertical range. A tree at least 25 ft. tall will generally have a vertical range of points greater than 10 ft., which results in points outside the smaller range and ultimately a value greater than zero when the two standard deviations are subtracted. A series of these equations using varying elevation ranges is necessary to classify power lines effectively because wires sag between poles and power line elevations fluctuate. This technique will inevitably result in some tree canopy being classified as power line, but such errors can easily be fixed by reclassifying image objects that have only a small number of surrounding power line image objects within a certain radius.
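
A minimal sketch of this standard-deviation test is shown below, assuming the elevations of an image object's LiDAR points are available as an array; the band limits mirror the examples in the text and would be tuned in practice.

import numpy as np

def powerline_std_test(z, narrow=(25.0, 35.0), broad=(8.0, 40.0)):
    # Compare the spread of elevations inside a narrow band with the spread
    # inside a broad band. A result near zero means all points in the broad
    # band also sit in the narrow band (a horizontal sliver of points, i.e. a
    # power line candidate); a larger value means a taller vertical spread
    # of points (tree canopy).
    z = np.asarray(z, dtype=float)
    z_narrow = z[(z >= narrow[0]) & (z <= narrow[1])]
    z_broad = z[(z >= broad[0]) & (z <= broad[1])]
    if z_narrow.size < 2 or z_broad.size < 2:
        return None  # too few points in this image object to decide
    return np.std(z_broad) - np.std(z_narrow)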

While these metrics are an effective tool for classifying electricity distribution lines, they are less effective with high-voltage lines. High-voltage power lines generally consist of a set of three lines stacked on top of each other (Figure 2). To accommodate this difference, one approach is to use the difference in elevation between percentiles such as the 97th and the 95th. The points returned from power lines are sparse; as a result, an image object typically contains fewer than 100 points. Conversely, tree canopy at a comparable elevation to high-voltage power lines has hundreds or even thousands of points. Thus, the 97th- and 95th-percentile elevations within a power line object are usually identical because they are measuring the same exact point; even when they fall on different points, those points are at almost exactly the same elevation. By the same logic, the 97th- and 95th-percentile elevations for points found in a tree will likely differ. Subtracting these two percentiles should then yield zero for power lines and a value greater than zero for tree canopy. A combination of the standard deviation and percentile metrics tailored to specific elevations produces the best results for power line classification (Figure 3). These techniques were developed using sparse linear LiDAR point clouds and significantly reduce the amount of manual editing required to produce an accurate tree canopy dataset.
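
The percentile test can be sketched the same way; the percentile values below follow the text, while everything else is illustrative.

import numpy as np

def high_voltage_percentile_test(z, hi=97, lo=95):
    # With the sparse returns typical of wires, the 97th and 95th percentile
    # elevations of an image object fall on the same (or nearly the same)
    # point, so the difference is close to zero; denser tree canopy at a
    # comparable height yields a noticeably larger spread.
    z = np.asarray(z, dtype=float)
    return np.percentile(z, hi) - np.percentile(z, lo)

In practice, an object might be flagged as a power line candidate only when both this value and the standard-deviation difference above fall below small thresholds, reflecting the combined approach the text describes.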

OBIA can exploit the most useful aspects of a given dataset while simultaneously disregarding errors or spurious values. We perform a preliminary point cloud classification using LAStools to differentiate high vegetation and buildings (Figure 4). This classification is highly accurate as long as the input LiDAR dataset penetrates vegetation to some degree. It is then possible to build a rule set (a set of algorithms used to segment and classify image objects) using the classified LiDAR point cloud as a starting point. While this is often an effective technique, it is not without shortcomings: some high vegetation will inevitably be classified as building, and vice versa, during the preliminary classification. This error can be exacerbated if the LiDAR collection does not penetrate vegetation, which results in only one return located on tree canopy crowns. To properly incorporate a classified point cloud, it is necessary to integrate spectral characteristics such as the Normalized Difference Vegetation Index (NDVI), geometric characteristics such as the size of image objects, and spatial characteristics such as the DSM-DTM difference. If an image object has a high ratio of points classified as building, low NDVI, a low DSM-DTM difference, and a large size, it can be confidently assigned to a building class. However, if an image object has a high ratio of points classified as building, high NDVI, a high DSM-DTM difference, and a relatively small size, it is most likely tree canopy, and further analysis of the image object must be performed before finalizing the classification. For example, a contextual analysis that assesses the length of the relative border between objects can be an effective tool in classifying ambiguous features: if 80% of an object’s border is adjacent to tree canopy, it is most likely tree canopy as well.

Integrating multiple datasets and playing to their strengths is often the best way to overcome flaws in individual datasets. When attempting to classify water from a bottom-up perspective, it is difficult to properly identify image objects as water with simple parameter thresholds when they contain overhanging tree canopy (Figure 5). In this case, fuzzy classifiers can be used as a last resort to statistically isolate features of interest. Fuzzy classifiers use membership functions to combine several variables, ultimately deriving a single classification value (Benz et al. 2004). These membership functions are versatile and can be used to categorize phenomena across multiple variables, much like human cognition. To classify water under trees, an effective fuzzy classifier combines the Normalized Difference Water Index (NDWI) with nDSM values, both using positive curvilinear membership functions with ranges of -0.5 to 0.15 and 0 to 8 ft., respectively (Figure 6). This fuzzy logic routine identifies water obscured by trees with a specificity that cannot be achieved with traditional parameter thresholds.

Figure 5: Hydrography classified with missing water under overhanging tree canopy.
Figure 6: Hydrography classified using a bottom-up approach, including water under tree canopy (red).
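
As a rough illustration, such a fuzzy combination could be sketched as follows. The smoothstep curve stands in for a positive curvilinear membership function (eCognition's exact curve shape may differ), and the minimum-operator fuzzy AND is an assumption rather than something specified here; only the NDWI and nDSM ranges come from the text.

import numpy as np

def positive_curvilinear(value, lower, upper):
    # Stand-in for a positive curvilinear membership function: membership
    # rises smoothly from 0 at 'lower' to 1 at 'upper'.
    t = np.clip((np.asarray(value, dtype=float) - lower) / (upper - lower), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def water_under_canopy_membership(ndwi, ndsm_ft):
    # Combine the NDWI and nDSM memberships (-0.5 to 0.15 and 0 to 8 ft.)
    # with a fuzzy AND (minimum operator); the combination rule is an
    # assumption, not taken from the article.
    m_ndwi = positive_curvilinear(ndwi, -0.5, 0.15)
    m_ndsm = positive_curvilinear(ndsm_ft, 0.0, 8.0)
    return np.minimum(m_ndwi, m_ndsm)

# Illustrative use: flag image objects whose combined membership exceeds 0.5.
# is_water = water_under_canopy_membership(obj_ndwi, obj_ndsm_ft) > 0.5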

OBIA is a constantly evolving field that facilitates more rapid and accurate data processing, permitting successful exploitation of the vast and ever-increasing volume of remotely-sensed data. The introduction of direct point cloud exploitation and a growing fluency with fuzzy logic not only increase the efficiency of processing but also make OBIA expert systems more flexible in the face of corrupt data by diversifying the techniques available to achieve a given goal. As automated feature extraction techniques develop, so too does the ability to make informed decisions that materially improve the fabric and resilience of communities in a rapidly changing world. Moreover, these advancements are essential to ensuring that the massive investments made by private and governmental remote-sensing organizations see returns that justify their spending. In essence, improved data acquisition begets improved data exploitation; to analyze our changing planet more accurately and quickly, high-resolution imagery and LiDAR must be exploited as efficiently as possible.

Noah Ahles, a Remote Sensing Analyst at the University of Vermont Spatial Analysis Laboratory, has a background in Environmental Sciences/GIS and specializes in Object-Based Image Analysis. He leads feature extraction projects creating high-resolution land-cover datasets for municipalities and counties across the country.
