In previous articles, my colleague Jarlath O'Neil-Dunne has described the value of high-resolution LiDAR for tree-canopy mapping efforts in urban environments. Shadows cast by buildings can obscure trees in passive-sensor satellite or aerial imagery, making delineation and classification of these features very difficult. This limitation is especially problematic when mapping street trees, so named because they are planted within or adjacent to transportation rights-of-way. Many cities have begun tree-planting programs that focus on trees in publicly owned rights-of-way, so it is imperative that street trees be included in any assessment of a municipality's green infrastructure.
An ongoing project in New York City illustrates the increasingly vital contribution of LiDAR to this mapping challenge. Excellent color-infrared orthophotography (0.5-foot resolution) was acquired for the city in 2008, but many street trees were obscured by deep shadows. Any mapping effort based exclusively on orthophotography would thus underestimate the city's existing urban tree canopy and would also fail to capture the actual spatial distribution of street trees. Fortunately, high-resolution LiDAR (1-foot post spacing) was acquired for New York City in 2010, permitting mapping of trees in even the most canyon-like of the city's streetscapes.
LiDAR provides two key pieces of information for tree-canopy mapping: 1) the height of landscape features aboveground; and 2) the surface texture of these features. We processed the vendor-delivered LAS files in Quick Terrain Modeler 7.1.2 Beta (Applied Imagery), creating a normalized digital surface model (nDSM) to provide feature heights and a gridded statistical surface to characterize texture. We used the standard deviation of elevation values for the texture surface (Z Deviation), but the mean number of LiDAR returns also would have worked well. With 1,119 LAS files constituting 307 GB, efficient processing was a significant challenge, requiring batch processing of tiles organized into 53 overlapping groups followed by mosaicking into seamless, city-wide data layers.
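The two LiDAR-derived surfaces described above were produced in Quick Terrain Modeler, but the underlying idea is simple enough to sketch in a few lines of numpy. The sketch below bins LiDAR points into a regular grid and computes per-cell statistics: a surface model from the maximum elevation of all returns, a terrain model from the minimum elevation of ground-classified returns (their difference is the nDSM), and the standard deviation of elevation as the texture layer. The `grid_stats` helper and its parameters are illustrative assumptions, not the project's actual workflow.

```python
import numpy as np

def grid_stats(x, y, z, cell=1.0, bounds=None):
    """Bin points into a regular grid; return per-cell (max, min, std) of z.

    Cells containing no points are left as NaN. This is a minimal sketch of
    the gridding step, not a substitute for production LiDAR software.
    """
    if bounds is None:
        bounds = (x.min(), y.min(), x.max(), y.max())
    x0, y0, x1, y1 = bounds
    ncols = max(int(np.ceil((x1 - x0) / cell)), 1)
    nrows = max(int(np.ceil((y1 - y0) / cell)), 1)
    # Map each point to a grid cell, clamping edge points into range
    col = np.clip(((x - x0) / cell).astype(int), 0, ncols - 1)
    row = np.clip(((y - y0) / cell).astype(int), 0, nrows - 1)
    idx = row * ncols + col
    zmax = np.full(nrows * ncols, np.nan)
    zmin = np.full(nrows * ncols, np.nan)
    zstd = np.full(nrows * ncols, np.nan)
    for cell_id in np.unique(idx):
        zc = z[idx == cell_id]
        zmax[cell_id] = zc.max()
        zmin[cell_id] = zc.min()
        zstd[cell_id] = zc.std()   # per-cell texture (Z Deviation analogue)
    shape = (nrows, ncols)
    return zmax.reshape(shape), zmin.reshape(shape), zstd.reshape(shape)
```

In this scheme, gridding all returns and subtracting a terrain grid built from ground-classified returns yields the nDSM, while the standard-deviation grid plays the role of the Z Deviation layer: tree crowns, with returns scattered through the canopy, produce high values, whereas flat rooftops and pavement produce values near zero.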
We next used eCognition Developer 8 (Trimble) to produce a land-cover map showing New York City's primary landscape features: Tree Canopy, Grass/Shrubs, Bare Soil, Water, Buildings, Roads/Railroads, and Other Paved Surfaces. This software package relies on object-oriented image analysis, an approach that focuses on groups of pixels forming functional landscape objects rather than individual pixels. Another hallmark of this package is that it permits fusion of multiple data formats, which helps maximize the value of existing investments in geospatial data. For New York City, we combined LiDAR with the available color-infrared orthophotography and with high-quality vector datasets delineating building footprints, roads, ball fields, and water.
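eCognition handles raster-vector fusion natively, but the core operation of bringing a vector layer onto the analysis grid can be illustrated with a short, self-contained sketch. The function below rasterizes one building-footprint polygon by testing which grid-cell centers fall inside it, using even-odd ray casting; the function name, grid parameters, and origin convention are all illustrative assumptions.

```python
import numpy as np

def footprint_mask(polygon, nrows, ncols, cell=1.0, origin=(0.0, 0.0)):
    """Boolean grid mask of cells whose centers fall inside a footprint
    polygon, via even-odd ray casting vectorized over all cell centers."""
    x0, y0 = origin
    xs = x0 + (np.arange(ncols) + 0.5) * cell   # cell-center x coordinates
    ys = y0 + (np.arange(nrows) + 0.5) * cell   # cell-center y coordinates
    px, py = np.meshgrid(xs, ys)
    inside = np.zeros(px.shape, dtype=bool)
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if y1 == y2:                 # horizontal edge never crosses the ray
            continue
        # Toggle "inside" for each edge the horizontal ray crosses
        crosses = ((y1 > py) != (y2 > py)) & \
                  (px < (x2 - x1) * (py - y1) / (y2 - y1) + x1)
        inside ^= crosses
    return inside
```

A mask like this lets height and texture layers be interrogated differently inside versus outside building footprints, which is the essence of fusing the vector datasets with the LiDAR-derived rasters.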
Although data fusion is vitally important to the overall land-cover map, the Tree Canopy class itself was based almost exclusively on LiDAR. In an initial segmentation step, we used the nDSM to discriminate all features according to height: tall objects vs. ground-level objects. We then used the Z Deviation layer to isolate tall features that encompass a range of elevation values (i.e., high standard deviation); objects with highly variable surfaces (i.e., textured) are likely to be trees while low-variability objects are more likely to be structures or other elements of the built environment. Following a series of contextual steps designed to refine classification of tall objects adjacent to building footprints, we applied morphological routines to ensure realistic representation of individual trees (i.e., a tree should look like a tree). The resultant map captured individual street trees, trees in deep shadow, and tree canopy overhanging aboveground features of modest height such as residential buildings.
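The height-then-texture rule at the heart of this classification can be sketched per cell. The thresholds below (8 feet for "tall," 1.5 feet of Z deviation for "textured") are illustrative assumptions, not the project's actual parameters, and the real rule set in eCognition operated on image objects with contextual and morphological refinements that a per-cell sketch omits.

```python
import numpy as np

# Illustrative thresholds, in feet; assumed values, not the project's.
HEIGHT_FT = 8.0     # minimum nDSM height for a "tall" object
TEXTURE_FT = 1.5    # minimum Z deviation for a "textured" surface

GROUND, BUILT, CANOPY = 0, 1, 2

def classify_cells(ndsm, zdev):
    """Two-step rule: height separates tall objects from ground level,
    then texture separates likely tree canopy (variable surface) from
    likely structures (smooth surface)."""
    out = np.full(ndsm.shape, GROUND, dtype=np.uint8)
    tall = ndsm >= HEIGHT_FT
    out[tall & (zdev < TEXTURE_FT)] = BUILT    # tall and smooth
    out[tall & (zdev >= TEXTURE_FT)] = CANOPY  # tall and textured
    return out
```

Even this crude version captures why the approach is shadow-proof: both inputs come from LiDAR, so a street tree classified this way looks the same whether or not it sits in the shadow of a high-rise.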
No automated feature-extraction technique is perfect, so we are currently reviewing the draft city-wide map for completeness and making corrections as necessary. Without LiDAR, however, this process would be many times more labor intensive, essentially reverting to manual interpretation in parts of the city encompassing dense concentrations of tall buildings. With time and money limited, tree-canopy assessments for large urban centers like New York City cannot rely on such time-honored methods of image analysis, and indeed LiDAR will assume an even more prominent role in urban tree-canopy mapping as high-resolution datasets become more widely available.
Sean MacFaden is a research specialist with the University of Vermont (UVM) Spatial Analysis Laboratory. He has 18 years of experience in geospatial technologies, focusing initially on GIS for wildlife habitat mapping and biodiversity assessment but more recently delving into remote sensing applications. In particular, he has used object-based image analysis (OBIA) techniques in conjunction with high-resolution imagery and LiDAR to map land cover in urban and suburban settings, including multiple urban tree canopy (UTC) assessments for cities and counties in the northeastern United States. His educational background includes degrees in Biology from Williams College and Wildlife Biology from the University of Vermont.