At the end of 2012, SPAR-E and ELMF each held its 3rd conference & exhibition in the same venue as last year: SPAR-E in the World Forum The Hague (Nov 12-14, Netherlands); ELMF in the Salzburg Congress (Dec 4-5, Austria). Venue-wise, the Salzburg Congress definitely has the edge, both for its location and for the exhibition facilities on offer. Next year SPAR-E will be organized in Amsterdam, which in my opinion could bring an improvement to SPAR-E's exhibition facilities. Based on what I've read, heard, and seen, my opinion is that ELMF sparkled with an excellent exhibition, whereas SPAR-E's strongest point was its conference program, focused as it was on technical trends in 3D data-acquisition and modeling. SPAR-E has put the footage of its keynotes and plenary sessions online. To access ELMF's online proceedings, contact info@lidarmap.org.
ELMF's exhibition floor consisted of 50+ booths, whereas the number at SPAR-E's exhibition was 30+. Outside both venues stood a row of vehicles equipped for mobile mapping. The overlap in exhibitors, however, was rather small: only 12 participated in both events, among them all major suppliers of devices for laser scanning and mobile mapping. But where ESRI and Topcon were present at both exhibitions, Autodesk and Trimble participated in SPAR-E only.
With my recap of the 2011 events in mind and remembering Lewis Graham's recent column in LiDAR Magazine (see sidebar), my priority list for selecting presentations at the latest events was shaped by two issues: portable devices for 3D data-acquisition and image-based point-cloud generation. An intriguing question to me was whether photogrammetry is rising from its ashes. The SPAR-E program explicitly included "LiDAR versus photogrammetry" and "next generation photogrammetry." Though ELMF's program touched on photogrammetry-related issues as well, they were not addressed as explicitly.
Semantic point-cloud segmentation and information extraction were also high on my list, because handling ever-growing point-cloud sizes is a pressing problem. By way of explanation, "semantic segmentation" is what old-school mapping, Aerial Photo-Interpretation (API), was all about: adding "external knowledge" to an image segmentation process. If the adding of external knowledge is to be performed by a computer (hardware), then the human brain (wetware) has to be replaced by computer code (software). This is what Artificial Intelligence (AI) is all about.
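To make the notion concrete, here is a minimal, purely illustrative Python sketch of rule-based semantic segmentation of a point cloud. The hand-written rules and thresholds are my own assumptions, standing in for the interpreter's "external knowledge"; they are not taken from any presentation at the events.

```python
import numpy as np

# Minimal sketch: rule-based "semantic segmentation" of a point cloud.
# The thresholds below encode the "external knowledge" that a human
# interpreter would otherwise supply; all values are illustrative.

def segment(points):
    """points: (N, 3) array of x, y, z. Returns one label per point."""
    z = points[:, 2]
    ground_level = np.percentile(z, 5)        # crude ground-height estimate
    height = z - ground_level                 # height above estimated ground
    labels = np.full(len(points), "ground", dtype=object)
    labels[height > 0.5] = "low object"       # e.g., shrubs, street furniture
    labels[height > 3.0] = "elevated object"  # e.g., buildings, trees
    return labels

cloud = np.random.default_rng(1).random((1000, 3)) * [50.0, 50.0, 12.0]
values, counts = np.unique(segment(cloud), return_counts=True)
print(dict(zip(values, counts)))
```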
Portability and indoor highlights
Hand-held devices for 3D data-acquisition still remain rare, especially laser-based systems. In 2011, the only laser-based system on exhibit was the not-so-lightweight solution of the German company dhp:i (Dr. Hesse & Partner Ingenieure) at SPAR-E. This year the company, renamed p3d systems, demonstrated an improved version at SPAR-E, which could also be put on a wheeled support for indoor mapping. This year too, 3Dlasermapping (UK) introduced a laser-based system of a totally different kind at both SPAR-E and ELMF, nicknamed Zebedee and officially branded as ZEB1. With its light weight (700 grams for the hand-held part) and its oddly "nodding" LiDAR, mounted on top of a coil spring on a simple handle incorporating an IMU, the ZEB1 clearly contrasts with the bulky p3d systems product. Its developer, Dr. Elliot Duff, explained that the nodding with a frequency of 1 Hz enables acquiring a well-distributed point cloud with a single rotating laser. It is claimed to show a less-than-10 m offset after a 1-km/20-minute operation. Primarily designed for robot-mounted use in mining, it has numerous potential applications in other fields as well. In any event, a lighter hand-held laser scanning device is hardly imaginable.
If using a light-weight device is paramount in acquiring 3D data, then the imaging-based option indicates the way to go. However, migrating from LiDAR-based to image-based 3D acquisition is like migrating from a total station to a theodolite in old-school surveying: you get directions only, no ranges. Thus, some technical trick needs to be applied in order to get 3D spatial information out of 2D imagery. One option is to use a (spatially known) structured light pattern projected by an infrared source that is fixed to the camera. Mantis Vision (Israel) has chosen this solution for its F5 systems. The huge advantage of such an image-based system over a LiDAR-based system is in the double-dynamic data-acquisition: both the recording device and the recorded objects are allowed to move.
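What that "technical trick" boils down to, in both stereo photogrammetry and structured-light systems, is intersecting rays of known origin and direction. Here is a minimal sketch, my own construction; the camera centers and unit direction vectors are assumed known, e.g. from a calibrated stereo pair or a camera/projector structured-light setup:

```python
import numpy as np

# Minimal sketch: recovering a 3D point from two bearing-only observations
# by intersecting two rays r1(t1) = c1 + t1*d1 and r2(t2) = c2 + t2*d2.

def triangulate(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment between the two rays."""
    # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|.
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0

c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])  # 1 m baseline
p_true = np.array([0.4, 0.2, 5.0])                             # target point
d1 = (p_true - c1) / np.linalg.norm(p_true - c1)               # directions only,
d2 = (p_true - c2) / np.linalg.norm(p_true - c2)               # no ranges
print(triangulate(c1, d1, c2, d2))   # recovers ~[0.4, 0.2, 5.0]
```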
When recording objects that are not dynamic, an even simpler solution becomes feasible: "Back to photogrammetry?" to quote Lewis Graham (see sidebar). Based on what I've heard and seen, especially at SPAR-E, I dare to state the answer to Lewis' question is: "Back to the future of photogrammetry!" Going back in time to the costly photographic two-camera tripod-fixed system Zeiss once produced for terrestrial photogrammetry would be maniacal. With today's imaging technology, the trick can be done with an affordable, simple consumer product like Microsoft's Kinect, though with limitations in geometric accuracy.
Back to the future of photogrammetry!
The future of image-based 3D data-acquisition is already here, pictured by a bunch of acronyms (see sidebar 2): OTS for a suitable and affordable digital camera; INS & GPS for positioning that camera; a UAV for maneuvering the camera around if needed; OTF data-acquisition and -transfer; 3D data-processing based on SGM, SIFT, SLAM, SURF, RANSAC, and more. These developments do not come from a photogrammetric "graveyard" but from computer vision and robotics. Despite these roots, it is photogrammetry at heart!
The current revival of photogrammetry enabled by modern image-matching techniques doesn't imply that image-based and LiDAR-based techniques are interchangeable with respect to 3D data-acquisition. Not at all. This was clearly pointed out in several presentations I attended during SPAR-E. And as both Alexander Weichert (Vexcel Imaging) and Lewis Graham (GeoCue) featured on ELMF's contributors' list, the LiDAR-versus-photogrammetry issue was adequately addressed at both events. Simply stated, there is no alternative to LiDAR when dense vegetation is an issue. The same holds for thin linear objects like power lines and railroad overhead wires.
An excellent way to get a picture of current developments in 3D data-acquisition and 3D modeling is to check the two keynotes and the two plenary presentations at SPAR-E that are available online. Prof. Luc van Gool spoke about "image-based 3D city modeling & mobile mapping" and Prof. Marc Pollefeys addressed the subject "extracting 3D data from image and video data." Prof. Dieter Fritsch's plenary presentation pertained to "dense image matching algorithms" like SGM, and the other plenary, by Mirko Stock (Intergraph) and Chris Thewalt (Leica), addressed the benefits of "photo-realistic laser scans and intelligent 3D models" in plant operations and maintenance. Since what these five speakers brought to the table in this two-and-a-half-hour "Grand 3D Tour" is available online, I'll leave the reader with just these brief impressions.
My own take-home message
LiDAR meets photogrammetry in two subjects: semantic 3D point-cloud segmentation and automated 3D triangulation. The latter is known from old-school photogrammetry and is a must for getting a multi-image record into a single geo-referenced 3D coordinate system. In general, a LiDAR record from a single static position provides just as little 3D coverage as a single pair of stereo images. Hence, old-school 3D triangulation is as much at the heart of modern image-matching as it is of dynamic laser scanning. Both acquisition techniques can mathematically produce 3D points by the millions, resulting in the well-known point cloud, each point being composed of four "numbers": an identifier and three coordinates. With both techniques, spectral information can be added to each such point of a cloud: waveform and multiple reflections (structure) with LiDAR; wavelength and spectral radiance (color) with imaging.
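As a sketch of what such a point record might look like in practice (the field names are my own invention, not any vendor's or standard's format):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative sketch of the point record described above: four core
# "numbers" (an identifier and three coordinates), optionally enriched
# with sensor-specific spectral attributes.

@dataclass
class CloudPoint:
    pid: int                                       # identifier
    x: float                                       # three coordinates
    y: float
    z: float
    return_number: Optional[int] = None            # LiDAR: multiple reflections
    waveform: Optional[Tuple[float, ...]] = None   # LiDAR: full-waveform samples
    rgb: Optional[Tuple[int, int, int]] = None     # imaging: color

lidar_pt = CloudPoint(pid=1, x=12.3, y=45.6, z=7.8, return_number=2)
image_pt = CloudPoint(pid=2, x=12.4, y=45.5, z=7.9, rgb=(180, 160, 120))
```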
The crux, however, is that as such these datasets do not yet provide spatial information. Some form of interpretation is needed to transform data into information. Old-school visual interpretation of such huge datasets is clearly not possible, so computational procedures must be brought to the rescue. This computational approach invokes the problem of mimicking, replacing, or bypassing human interpretation skills by programmable machine operations. The solution to this problem is what semantic point-cloud segmentation is meant to provide. However, there is no "one-stop-shop" for this technique, as Luc van Gool concludes at the end of his presentation.
My recap last year concluded with: no "one-size-fits-all" solutions; no proprietary "prisons"; no professional "stove pipes." Further development of 3D modeling will be driven from a user perspective, not from that of the various 3D data-capture technologies. Technology remains the dominant enabler, but technology providers understand that their market is rapidly shifting from supply-side to demand-side. The problem for both providers and users: there is no one-stop-shop for 3D information modeling. Questions like "LiDAR or imaging?" are a thing of the past.
Ir. Jan Loedeman is an agricultural engineer by education and recently retired from a long career in teaching surveying, photogrammetry and GIS at Wageningen University in the Netherlands. He is also a former editor of GIM Magazine.
Sidebar 1
Looking back… The 2011 SPAR-E & ELMF Recap
…The only portable LiDAR-based MM equipment on show at SPAR-E was from the German company Dr. Hesse & Partner Ingenieure (acronym dhp:i). The complete system weighs a beefy 46 lbs, including harness and rucksack.
…though photogrammetry featured on both agendas, photogrammetric solutions mainly addressed draping imagery over LiDAR point clouds… Point clouds can be generated by both LiDAR and photogrammetry.
…Maybe classical photogrammetry will see a revival if hand-held portability surfaces as a serious issue. And it definitely is expected to do so in settings where space is limited and portability is essential, i.e. in short-range indoor applications.
…Alexander Weichert, director of Microsoft’s Vexcel, stressed that generating point clouds is no longer restricted to LiDAR.
…Whether photogrammetric point cloud generation can do better than LiDAR depends on the application. Photogrammetry falls short when it comes to terrain models of densely vegetated areas and fine linear features such as power lines.
…Weichert addressed the problem that…when draped over a point cloud, it is the imagery that provides the additional information needed for feature extraction. "What are we going to do with this amount of data?"
…In other presentations feature extraction turned up with respect to various applications. "Semantics is the real complicated part."
…There are three "no-s" to be dealt with in the advancement of this highly dynamic technological field: no "one-size-fits-all" solutions; no proprietary "prisons"; no professional "stove pipes."
…geometry is a key issue in the point-cloud arena…
Lewis Graham, in his recent column "Random Points" (LiDAR Magazine, Vol. 2 No. 6): "Back to Photogrammetry?"
…digital surface models (DSM) from stereo photographs…were typically extracted using local correlation windows to match points in the base image and the match image.
…since 2005…the Semi-Global Matching (SGM) algorithm provides a high speed method of generating digital surface models on a pixel by pixel basis…a 5 cm ground sample distance (GSD) can be used to also generate a DSM with a point spacing of 5 cm.
…About high-density digital surface modeling (HD-DSM)…Dr. Alexander Weichert of Microsoft…pitted the UltraCam aerial camera products not against cameras from other manufacturers but against LiDAR systems!
…LiDAR retains the tremendous advantage that only a single laser ray need reflect from an object space point to discern an accurate height measurement.
…Optical DSM also does not work well in areas of low texture with very high depth disparity (e.g. power line detection).
…Does this mean that we dismiss HD-DSM? Of course not!…particularly interesting in those situations where it is simply not feasible to fly a LiDAR unit.
…the emergence of micro-Unmanned Aerial Systems (UAS)…autonomous aircraft with a mass of less than 5 kg.
Sidebar 2
Looking forward
The future of photogrammetry is here. It deceptively looks like "boys' toys," but it definitely is highly advanced stuff; see, for instance, the presentation by F. Remondino et al., "UAV Photogrammetry: Current Status and Future Perspectives." The list of acronyms below, definitely not exhaustive, offers a brief guideline into the new photogrammetric arena.
UAV or MUAV: Micro or Miniature Unmanned Aerial Vehicle, also denoted "micro drone," equipped with INS & GPS for flight control. It is currently widely used as a radio-controlled (RC) flying platform for acquiring high-resolution imagery with an OTS digital camera.
OTS: Tools or devices that are commercially available "Off The Shelf," such as digital consumer cameras of numerous types, sizes and optical qualities, both still and video.
INS & GPS: An integrated miniature Inertial Navigation & Global Positioning System; it weighs no more than 25 grams and measures only 2 x 3 x 4 centimeters.
OTF: On The Fly, quite literally when an RC UAV is used as a camera mount.
SGM: Semi-Global Matching (2005) "successfully combines concepts of global and local stereo methods for accurate, pixel-wise image matching at low runtime…The main power of SGM can be demonstrated on aerial images of urban scenes. A study (Hirschmüller and Bucher, 2010) compared SGM based digital surface models from images of different commercial aerial cameras with each other, with a laser model and ground control points." Prof. Dieter Fritsch's presentation about SGM at SPAR-E is available online. A simplified sketch of the SGM idea follows after this list.
SIFT: Scale Invariant Feature Transform (1999) "is an algorithm in computer vision to detect and describe local features in images…SIFT matching is done for a number of 2D images of a scene or object taken from different angles. This is used with bundle adjustment."
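As promised above, here is a heavily simplified sketch of the SGM idea: my own reduction, not Hirschmüller's full algorithm. Pixel-wise matching costs are aggregated along a path, with a small penalty (P1) for disparity changes of one and a large penalty (P2) for larger jumps, and the disparity with the lowest aggregated cost wins. A real implementation aggregates along 8 or 16 path directions; this sketch uses a single left-to-right scanline and absolute-difference costs to stay short.

```python
import numpy as np

def sgm_scanline(left, right, max_disp, p1=1.0, p2=8.0):
    """Simplified SGM: single left-to-right aggregation path."""
    h, w = left.shape
    # Pixel-wise matching cost C(p, d) = |left(x, y) - right(x - d, y)|
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    # Path aggregation:
    # L(p, d) = C(p, d) + min(L(p-1, d),
    #                         L(p-1, d-1) + P1, L(p-1, d+1) + P1,
    #                         min_k L(p-1, k) + P2) - min_k L(p-1, k)
    agg = cost.copy()
    for x in range(1, w):
        prev = agg[:, x - 1, :]
        best_prev = prev.min(axis=1, keepdims=True)
        candidates = np.stack([
            prev,                                                           # same d
            np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + p1,
            np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + p1,
            np.broadcast_to(best_prev + p2, prev.shape),                    # jump
        ])
        agg[:, x, :] += candidates.min(axis=0) - best_prev
    return agg.argmin(axis=2)    # winner-takes-all disparity per pixel

# Synthetic test: the left image is the right image shifted by 3 pixels,
# so the recovered disparity should be close to 3 almost everywhere.
right = np.random.default_rng(0).random((20, 64))
left = np.roll(right, 3, axis=1)
print(sgm_scanline(left, right, max_disp=8)[:, 8:].mean())   # ~3.0
```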