Just about every industry, from construction and utilities to roofing and insurance, is being impacted, and often transformed, by the availability of high-resolution Earth observation (EO) imagery from satellites, planes and helicopters, and UAVs. Initially 2D, then 3D, and now in 2014 4D (2D/3D plus temporal) imagery is available from many sources. International satellites, low-cost satellite constellations, and UAVs are poised to dramatically reduce the cost of EO imagery.
An exponentially increasing volume of EO imagery is being captured every day from a variety of devices. The Committee on Earth Observation Satellites (CEOS) reports 286 Earth observation devices in orbit. The next DigitalGlobe satellite, WorldView-3, scheduled for some time in 2014, will be capable of 31 cm resolution and can store two terabits of data between downloads. Two start-up satellite companies have already started putting constellations in space that promise high-frequency revisits to every point of the Earth’s surface at low cost. Planet Labs and Skybox Imaging each plan to launch constellations of 24+ low-cost satellites that will capture high-resolution imagery of every spot on Earth many times per day. They promise to deliver the first-ever HD video of any spot on Earth, allowing users to track changes, from traffic jams to deforestation, in near real time.
I remember that a few years ago the total amount of imagery downloaded daily from all EO satellites was estimated at about a terabyte. Now a single satellite could be responsible for a terabyte daily. Add in advances in aerial photogrammetry and the rapid development of UAV-based photogrammetry, and we are looking at huge volumes of imagery every day, probably approaching a petabyte.
All of this data requires processing before it is made available to customers and end users. Currently this processing is time-consuming, and a significant proportion of it still involves manual and semi-automated steps. The result is delays, ranging from days to weeks or even months, between when the data is captured and when it is made available to customers. Customers are becoming impatient with the time it takes to process data and are increasingly pushing for near real-time availability.
At the PCI Geomatics Reseller Meeting in Ottawa, Wolfgang Lück, whose South Africa-based company Forest Sense cc customizes image processing workflows and provides remote sensing training for the African market, gave an overview of the sensor web and how automating image processing is enabling near real-time imagery availability.
The sensor web comprises satellites (EDRS geostationary relay and supercomputer, wide-area systematic optical/SAR monitoring, VHR SAR, VHR optical/hyperspectral), field sensors, mobile devices, and Earth receiving stations (DRS). According to Wolfgang, the communications protocols and standards (for example, the OGC Sensor Observation Service) are in place for these systems to talk to each other. Most of the satellite systems are already in space; only the Sentinel systems, the wide-area systematic optical/SAR monitoring satellites being launched by ESA, still need to go up. On the ground there are receiving stations (DRS), field sensors, and mobile devices carried by people. The mobile devices capture as well as receive information.
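To give a concrete sense of what that interoperability looks like, here is a minimal sketch of a client requesting observations from an OGC Sensor Observation Service over its standard key-value-pair (HTTP GET) binding. The endpoint URL, offering, and observed-property identifiers are placeholders rather than a real service; a real SOS advertises its offerings in its GetCapabilities response.

```python
import requests  # plain HTTP client; the SOS KVP binding is just GET parameters

# Placeholder endpoint, for illustration only.
SOS_ENDPOINT = "https://example.org/sos"

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    # Placeholder identifiers; a real service lists valid offerings
    # and observed properties in its GetCapabilities document.
    "offering": "field_sensor_network",
    "observedProperty": "soil_moisture",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # Observations & Measurements (O&M) XML payload
```

Because the request is nothing more than standard HTTP with agreed-upon parameters, any component of the sensor web, from a ground receiving station to a mobile device, can act as either a producer or a consumer of observations.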
Imagery requires complex processing to make it usable. With the huge volumes of data that the sensor web is capable of capturing, this processing has to be automated if the data is to be available in near real time at low cost. There are many actors that need to be accounted for, but the most important are the end-user customers (machines or human beings), because customers determine the products to be delivered.
There are different workflows depending on the sensors used and the final product requested by the customer. To give a feel for just how complex this processing is and the challenges involved in automating these workflows, Wolfgang went through a typical workflow, beginning with raw satellite imagery and following through to an orthorectified image suitable for classification, for example to identify different types of vegetation. Even for this fairly basic image processing workflow, there are many steps.
1. Ingestion of data from pick-up point
2. Binary data to supported format conversion
3. Relative radiometric correction and artifact removal
4. Band alignment
5. DN to ToA reflectance conversion
6. Haze, cloud, and water classification
7. Automatic ground control point collection
8. Orthorectification
9. DSM generation
10. Topographic correction
11. Spectral preclassification
12. Level 4 product generation
13. Image compositing / mosaicking
14. Delivery of data to pick-up point
In the past, many of these steps were manual or semi-automated. The important breakthrough that Wolfgang was able to demonstrate is that he has automated the entire process, which means that end-user products, whether orthorectified images, digital surface models, or classified vegetation maps, are available much faster and at lower cost.
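To illustrate what such an end-to-end chain can look like, here is a minimal Python sketch in which each numbered step above is a function whose output feeds the next, with no manual intervention. The eo_pipeline module and its step functions are hypothetical placeholders for whatever engine (commercial toolbox, GDAL, in-house code) actually performs each stage; this is not the implementation Wolfgang demonstrated.

```python
from pathlib import Path

# Hypothetical step functions; each stands in for a real implementation
# of the corresponding stage in the workflow above.
from eo_pipeline import (
    ingest, convert_format, radiometric_correction, align_bands,
    dn_to_toa, classify_haze_cloud_water, collect_gcps, orthorectify,
    generate_dsm, topographic_correction, preclassify, make_level4,
    mosaic, deliver,
)

def process_scene(raw_uri: str, workdir: Path) -> Path:
    """Run one scene through the full chain without manual intervention."""
    raw = ingest(raw_uri, workdir)              # 1. ingest from pick-up point
    img = convert_format(raw)                   # 2. binary -> supported format
    img = radiometric_correction(img)           # 3. relative correction, artifact removal
    img = align_bands(img)                      # 4. band alignment
    img = dn_to_toa(img)                        # 5. DN -> top-of-atmosphere reflectance
    masks = classify_haze_cloud_water(img)      # 6. haze / cloud / water masks
    gcps = collect_gcps(img)                    # 7. automatic GCP collection
    ortho = orthorectify(img, gcps)             # 8. orthorectification
    dsm = generate_dsm(ortho)                   # 9. digital surface model
    ortho = topographic_correction(ortho, dsm)  # 10. topographic correction
    classes = preclassify(ortho, masks)         # 11. spectral preclassification
    product = make_level4(ortho, classes)       # 12. Level 4 product generation
    composite = mosaic([product])               # 13. compositing / mosaicking
    return deliver(composite)                   # 14. deliver to pick-up point
```

In an operational setting a chain like this would be triggered automatically whenever new data appears at the pick-up point, which is what makes near real-time delivery possible.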
In a subsequent article we’ll go through each of these steps in some detail so you can appreciate the amount of processing involved and why performance has become the critical factor in image processing.