Stovepipes: we hear a lot about them in organizational management, where the complaint is that different parts of an organization fail to communicate or work together because of the way the organization is structured.
The stovepipe analogy also holds true for the geospatial community. Our data models, and the corresponding tools we have for processing data, are "stovepiped." We have excellent processes for working with vector, raster, and point cloud data independently, but combining the three is problematic at best. In any data integration project, we are forced to translate between data models and accept the consequences of such data translation operations. Rasterizing vector data is one such example.
Going forward, these consequences will be increasingly hard to live with. The explosion of new LiDAR collects will occur in areas for which other types of data are also being acquired or already exist. Can you think of a single system that allows you to extract information from point cloud, raster, and vector data in an integrated environment? Probably not.
eCognition came onto the geospatial scene in the early 2000s. The software gained recognition for being the first commercial Object-Based Image Analysis (OBIA) platform. OBIA, or GEOBIA (Geographic Object-Based Image Analysis) as it is sometimes known, is an approach to extracting features from remotely sensed data in which the unit of analysis is the object rather than the pixel. OBIA techniques gained a strong following in the remote sensing community as they proved to be far superior to the pixel-based feature extraction approaches that had been used for decades.
The starting point in OBIA is the creation of image objects through the use of a segmentation algorithm, in which pixels are grouped into polygons based on their spatial and spectral properties. Topology is inherent to the objects, allowing not only spectral information but also spatial information, such as shape, size, texture, and context, to be used in the feature extraction process. Right now you are probably wondering what image objects have to do with LiDAR and data fusion, but trust me, I am getting there.
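eCognition's segmentation algorithms are proprietary, so the sketch below uses the SLIC superpixel algorithm from scikit-image purely as a stand-in to illustrate the idea: pixels are grouped into labeled regions, and each region then carries shape and intensity properties rather than bare pixel values. The input file name is hypothetical.

```python
# Illustration of segmentation into image objects using scikit-image's SLIC
# algorithm (a stand-in for eCognition's proprietary segmenters; the input
# file name is hypothetical).
from skimage import io, measure, segmentation

image = io.imread("ortho_2008.tif")  # RGB orthophoto
labels = segmentation.slic(image, n_segments=500, compactness=10)

# Each labeled region is an "object": it has spatial properties (area,
# shape) as well as spectral ones (mean intensity), unlike a lone pixel.
for region in measure.regionprops(labels, intensity_image=image[..., 0]):
    print(region.label, region.area, region.eccentricity, region.mean_intensity)
```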
A number of years ago Trimble purchased eCognition from Definiens, a German company that continues to offer OBIA solutions to the biomedical-imaging sector. This began eCognition's steady migration from an earth-imaging-centric product to a data fusion platform. At version 9, eCognition has native raster, vector, and point cloud handling capabilities. This might not sound like a big deal, as there are probably dozens of other geospatial software packages that can say the same thing. The difference with eCognition is that the object-based approach allows for true data fusion.
For example, one can generate objects from imagery, compute statistics on the points from a LAS file that fall within each object, and calculate what percentage of a vector feature intersects that object. If you are having trouble picturing how this works, I don't blame you. When I first started using eCognition, it took me a few weeks to think in object space as opposed to pixels or points. Once I made the transition, I found that I could do things with eCognition that are simply impossible in any other software package.
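To make that per-object bookkeeping concrete, here is a rough open-source analogue sketched with geopandas, shapely, and laspy. The file names and the choice of statistics are my own assumptions, not eCognition's internals: count the LiDAR returns inside each object, summarize their elevations, and measure how much of each object is covered by building polygons.

```python
# Sketch of object-level data fusion: per-object LiDAR statistics plus
# percent overlap with a vector layer. File names are hypothetical, and
# all layers are assumed to share one CRS.
import geopandas as gpd
import laspy
import numpy as np
from shapely.geometry import Point

objects = gpd.read_file("image_objects.shp")  # polygons from segmentation
buildings = gpd.read_file("buildings.shp")    # reference vector layer
las = laspy.read("tile.las")                  # LiDAR tile

# Point statistics per object: count and mean elevation of returns inside it
points = gpd.GeoDataFrame(
    {"z": np.asarray(las.z)},
    geometry=[Point(x, y) for x, y in zip(las.x, las.y)],
    crs=objects.crs,
)
joined = gpd.sjoin(points, objects, predicate="within")
stats = joined.groupby("index_right")["z"].agg(["count", "mean"])

# Percent of each object's area covered by building polygons
building_union = buildings.geometry.union_all()  # unary_union on geopandas < 1.0
objects["pct_building"] = (
    objects.geometry.intersection(building_union).area / objects.geometry.area * 100
)
```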
Let's take a look at a simple example of how data fusion works within eCognition. The scenario I chose uses publicly available data from the City of Philadelphia. LiDAR data were acquired in both 2008 and 2010, but to differing specifications. High-resolution orthophotos were available for 2008, as were building polygons. Figure 1 shows the building polygons overlaid on both the 2008 and 2010 LiDAR. As you can see, there are buildings that have been removed and one building that was never there in 2008.
Major cities spend good money to update their building datasets, typically through manual interpretation, so as to ensure maximum accuracy and consistency. There are also a good number of automated approaches to extracting buildings from LiDAR. What if all you wanted to do was quickly scan the 2010 LiDAR data to find areas of change and then use this information to determine how much work would be needed to update the building data? I decided to put eCognition to the test.
The first step was to batch-load the tiled LiDAR data, which was easy to do using eCognition's import capabilities. Then I set out to build a routine to identify change. The feature extraction routines in eCognition are developed using a proprietary language called the Cognition Network Language (CNL). CNL is made readily accessible through an intuitive GUI that allows one to stitch together segmentation, image processing, classification, vector processing, morphology, and export algorithms into a single rule-based expert system that is then executed on the data.
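CNL itself is proprietary, so I won't attempt to reproduce it here, but the batch import step has a simple open-source analogue. A minimal sketch with laspy, assuming a hypothetical directory of LAS tiles:

```python
# Sketch of batch-loading tiled LiDAR with laspy (directory name hypothetical).
from pathlib import Path

import laspy

def iter_tiles(tile_dir):
    """Yield (tile name, LAS data) for every tile in the directory."""
    for tile_path in sorted(Path(tile_dir).glob("*.las")):
        yield tile_path.name, laspy.read(tile_path)

for name, las in iter_tiles("phl_lidar_2010"):
    print(name, las.header.point_count)
```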
Figure 2 shows the workflow for the same area as in Figure 1. Objects were created from the 2008 building polygon data; then the 2008 and 2010 LiDAR point clouds were used to determine whether each building was unchanged, had been removed, or was an error that never existed in 2008. The criteria for determining building change were simple: a combination of the number of points above ground and the percentage of non-ground points for both the 2008 and 2010 time periods.
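As a sketch, that decision rule might look like the following in Python. The thresholds are hypothetical placeholders (the article does not give the actual values), and the per-object point counts are assumed to have been computed already, for instance as in the fusion sketch above.

```python
# Hedged sketch of the building-change rule described above. The thresholds
# (min_pts, min_pct) are placeholders, not the author's actual values.
def classify_building(pts_above_2008, pct_nonground_2008,
                      pts_above_2010, pct_nonground_2010,
                      min_pts=25, min_pct=50.0):
    present_2008 = pts_above_2008 >= min_pts and pct_nonground_2008 >= min_pct
    present_2010 = pts_above_2010 >= min_pts and pct_nonground_2010 >= min_pct
    if not present_2008:
        return "error"      # polygon mapped, but no building in the 2008 LiDAR
    if present_2010:
        return "unchanged"
    return "removed"
```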
Once I had completed testing my CNL expert system, I submitted the data for processing using eCognition Server, which allowed me to distribute the load across multiple cores. In less than three hours I went from start to finish, identifying possible locations of building change throughout the City of Philadelphia and processing hundreds of thousands of vector features, hundreds of millions of LiDAR points, and billions of imagery pixels, all in a single software package with no data translation.
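eCognition Server handles the distribution transparently; purely to illustrate the fan-out idea, the same tile-level parallelism could be sketched with a Python process pool (the per-tile function and directory are hypothetical placeholders):

```python
# Sketch of distributing per-tile work across cores with multiprocessing.
from multiprocessing import Pool
from pathlib import Path

def process_tile(tile_path):
    # Placeholder: run the change-detection rule set on one LAS tile.
    return tile_path.name

if __name__ == "__main__":
    tiles = sorted(Path("phl_lidar_2010").glob("*.las"))
    with Pool() as pool:
        results = pool.map(process_tile, tiles)
```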
The example presented above is a simple one. eCognition’s capabilities are far more extensive. Figure 3, for example, shows automated feature extraction from mobile LiDAR using a far more expansive process. eCognition does not have an "extract features" button; it is a development platform that contains the most robust collection of algorithms I have come across.
The software modules consist of eCognition Developer (Figure 4), in which data loading, exploration, and feature extraction development are done; eCognition Server, which adds batch processing capabilities to Developer; and eCognition Architect, which allows users with limited experience to execute solutions created in eCognition Developer. There is a vibrant online community, the eCognition Community, where one can find sample solutions and interact with other users. I have found support to be prompt, and the rare bugs I have identified have been fixed in short order.
eCognition is not a replacement for your existing GIS, remote sensing, or LiDAR software. Although it has both 2D and 3D viewing capabilities, it is not a data exploration, data management, or cartographic production environment. What it does, better than any other platform I have come across, is allow one to turn data into information.
Jarlath O'Neil-Dunne is a researcher with the University of Vermont's (UVM) Spatial Analysis Laboratory (SAL) and also holds a joint appointment with the USDA Forest Service's Northern Research Station. He has over 15 years' experience with GIS and remote sensing and is recognized as a leading expert on the design and application of Object-Based Image Analysis (OBIA) systems for automated land cover mapping.