Wow, can you believe it is 2018? I completely lost 2017! Anyway, the best of success in the New Year!
I read article after article by employees and owners of traditional lidar and photogrammetric mapping firms claiming that, by some sort of mystical physics, it is impossible to achieve accurate mapping with a low-cost drone/camera system using dense image matching. In this article, I hope to dissuade you from accepting these patently false claims.
Our AirGon subsidiary specializes in creating hardware and software for high accuracy drone mapping. We do a lot of flights for testing equipment, proving accuracy to customers, demonstration flights and so forth. To date, we have conducted over 1,000 metric mapping drone flights with high-end systems (both fixed wing and rotary wing), the SenseFly eBee and just about every DJI drone made. By a metric mapping drone flight, I mean we have specifically targeted the project site for the purpose of quantifying accuracy. All flights are conducted with on-board direct geopositioning; for DJI equipment, we use our own Loki system. For the eBee, we use the RTK version. Interestingly, both SenseFly and we use a GNSS engine from Septentrio.
A bit of background. We know quite a bit about metric mapping cameras. Prior to GeoCue/AirGon, my core team and I were at Z/I Imaging. Part of my job in running Z/I Imaging was overseeing the development of the Digital Mapping Camera (DMC), the world’s first large format framing metric mapping camera. We learned a lot about both building and testing camera systems!
Of course, the focus of a good drone mapping system should be primarily on the sensor. As long as the drone can carry the sensor over the mapping area according to a flight plan without bouncing all over the place, the platform itself will not contribute in any significant way to the quality of the mapping.
I often ask people to define for me a metric camera. I get a variety of vague answers, but the most typical definition is that it is the camera in their airplane! Actually, the notion of a metric camera dates to the days when mappers had to work directly from film with mechanical stereo plotters. Distorting aspects of a camera such as radial lens distortion could not be modeled in these exploitation systems, so very expensive, distortion-free lenses had to be manufactured (we continued this tradition with the lenses of the DMC, even though it was no longer necessary). There are really two major properties of a camera that are necessary for accurate mapping in this age of digital data correction–a reasonable modulation transfer function (MTF) and stability. MTF describes how well the optical system responds to spatial frequencies (in film cameras, we used to talk about line pairs per millimeter). The stability of a camera refers to how much the modeling parameters (such as focal length) vary from project to project or even from shot to shot. These days even a truly terrible lens can be modeled, but only if its parameters are not changing erratically from project to project.
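To illustrate how routine this digital correction has become, here is a minimal sketch (not our production calibration code) of the widely used Brown–Conrady radial distortion polynomial, where the k coefficients come from a lab calibration. The coefficient values below are invented for demonstration, and production software inverts the model iteratively rather than applying it in one pass as shown here:

```python
# Hypothetical illustration of radial lens distortion modeling -- the kind
# of correction that film-era mechanical stereo plotters could not perform.
# Coefficients k1, k2, k3 are invented for demonstration only.

def undistort_radial(x, y, k1, k2, k3):
    """Apply the Brown-Conrady radial polynomial to normalized image
    coordinates (x, y). A first-order correction; real pipelines
    invert the distortion model iteratively."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# A point near the image edge under mild barrel distortion (k1 < 0).
xc, yc = undistort_radial(0.8, 0.6, k1=-0.05, k2=0.002, k3=0.0)
print(round(xc, 4), round(yc, 4))  # 0.7616 0.5712
```

The point is that a few calibrated polynomial coefficients now do the job that precision glass once did, which is exactly why stability of those coefficients matters more than optical perfection.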
Camera manufacturing processes and materials have become so good that even a sub-$500 camera can provide adequate performance for metric mapping. The main factors these days that introduce errors that are difficult to model are rolling shutters (just don’t try to map with this type of camera–they are for video) and zoom lenses.
In a recent mine site mapping project, we did a fairly extensive test using a DJI Inspire 2 with a DJI X4s camera and our AirGon Loki direct geopositioning system. The X4s has a fixed focal length lens and a hybrid shutter that provides a mechanical leaf exposure below a specific exposure time. For our direct geopositioning test, we used a fixed camera calibration that we had conducted in our lab. This demonstration was really aimed at how well our Loki direct geopositioning system could perform using no ground control but, of course, the system can do no better than the camera will allow! The overall goal was to generate 1′ contours.
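A core step in any direct geopositioning system of this kind is the lever-arm correction: the GNSS antenna and the camera's perspective center are physically offset, so each antenna fix must be translated into a camera position before it is fed to the photogrammetry software. The sketch below is a simplified illustration of that idea (not the Loki firmware); the offsets and the level-flight assumption are invented for demonstration:

```python
import math

# Hypothetical lever-arm correction for direct geopositioning.
# The body-frame offset from antenna to camera is rotated by the
# aircraft heading and added to the antenna fix. For simplicity we
# assume level flight (roll and pitch are zero); a real system uses
# the full attitude from the IMU.

def camera_position(antenna_ned, lever_arm_body, yaw_rad):
    """Translate an antenna fix (north, east, down) to the camera's
    perspective center using a body-frame lever arm (fwd, right, down)."""
    n, e, d = antenna_ned
    fwd, right, down = lever_arm_body
    cn = n + fwd * math.cos(yaw_rad) - right * math.sin(yaw_rad)
    ce = e + fwd * math.sin(yaw_rad) + right * math.cos(yaw_rad)
    return (cn, ce, d + down)

# Antenna fix in local NED meters, camera 10 cm forward and 15 cm below.
pos = camera_position((1000.0, 2000.0, -120.0), (0.10, 0.0, 0.15), 0.0)
print(pos)
```

Getting this offset (and the timing between exposure and GNSS fix) right is a large part of why careful system integration, rather than exotic hardware, drives direct geopositioning accuracy.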
We set out 18 ceramic tile photogrammetric targets (see Figure 1) over this 175 acre limestone mine site, surveying them in using RTK with a local base station. Our base station was positioned using the National Geodetic Survey’s Online Positioning User Service (OPUS); this OPUS solution represents the network tie in the following discussion.
The overall accuracy of the resultant 3D point cloud (as produced in PhotoScan Pro) was a root mean square error (RMSE) of 0.11 feet (3.4 cm). This is with no ground control! Obviously we are well within the accuracy requirement for 1′ contours. As an additional check, we compared our survey in undisturbed areas to contours that had been derived from a high accuracy lidar scan performed some months before. This conformance test was performed by extracting points along a road surface (see Figure 2) from the lidar-derived contour data and using these points as test points in the current drone-derived point cloud. This test yielded an RMSE of 0.13 feet (4.0 cm). You have to admit that these are remarkable results. We would have been very happy to see this with a $1.5 million DMC back in the day!
So the proof is in the pudding, not in pontificating about the pudding. With very careful procedures and attention to detail, you can fairly easily achieve accuracy with a very low cost drone system comparable to that of a very high end photogrammetric mapping camera. Of course, a drone is very seldom a replacement for a manned aerial survey (due to the postage-stamp project areas that are practical with drones). Thus my advice is that everyone who currently has a manned photogrammetric mapping practice should examine where, or if, drones fit in their business model rather than perceiving the technology as a threat to their current practices.