Random Points: Calibrate This!

We have recently been doing a lot (and I mean a lot!) of flight testing with various cameras on small drones. This testing has been in conjunction with the release of Loki, our new direct geopositioning system for DJI and other drones. We have been analyzing the aspects of "calibration" that can be performed as part of the actual project as opposed to what should rightly be done in the laboratory.

I put quotes around calibration because, first, we seldom actually calibrate anything these days and, second, we very often conflate parameters that are a function of the sensor system with those that are project specific ("environmental" factors). This is particularly true of LIDAR and LIDAR "calibration" software.

Calibration is the process of characterizing a sensor system and then using these characterization parameters in data processing. For example, suppose in a LIDAR sensor we know that the mirror encoder is off by +3 steps such that when the mirror is commanded to direct the beam orthogonal to the sensor plane, where we would expect an angle of zero, we see this 3 step error. A calibration parameter would then be "Mirror Angle Correction" and its "calibration adjustment" would be -3 (to bring the +3 back to zero). Now, just to be accurate (pun intended), when we add correction values to adjust for known, stable errors, we are performing sensor characterization. If we were to actually fix the mirror servo to remove the 3 step error, that would be true instrument calibration. An analogy would be a scale that reads 2 pounds when nothing is on it. If the scale has a calibration control (a "zero" control), the readout could be adjusted back to zero. This would be calibration. If not, you just mentally subtract 2 pounds each time you weigh something on the scale. This would be characterization. Thus, as you can see, we very seldom do actual calibration since most systems do not have a "zero" control for each parameter. We nearly always simply characterize the error and then apply corrections during data processing.

It is also very important to characterize system parameters at their extremes. For example, if we routinely weigh objects of up to 950 grams, we probably want to check our scale with a known 1 kg weight.

Finally, many systems do not exhibit simple offset behavior with respect to calibration. For example, assume we zero-set our scale such that it reads zero with nothing on board. We then add weight in 100 g increments, noting the scale reading. If we are lucky, it might be a nice, linear relationship such as reading 101 g, 202 g, 303 g and so forth for each 100 g addition we make to the scale. We can then devise a characterization formula (in this case, WT = WM / 1.01, where WT is the true weight and WM is the scale reading). Usually we are not so lucky and the relationship is much more complex. In these cases we either resort to a high order function for relationship mapping or simply use a lookup table. An example of a high order function is the typical polynomial used to characterize radial lens distortion in cameras. A lookup table example would be corrections for the stepper motors in LIDAR mirror controls, such as the table shown in Figure 1.
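
To make the two approaches concrete, here is a minimal Python sketch of both a linear characterization formula and a lookup-table correction. Apart from the 1.01 scale factor taken from the example above, all values are illustrative and do not come from any real instrument.

```python
import numpy as np

# Linear characterization of the hypothetical scale described above:
# the instrument reads 1% high, so true weight = measured weight / 1.01.
def true_weight(measured_g: float, scale_factor: float = 1.01) -> float:
    """Convert a raw scale reading to a characterized weight."""
    return measured_g / scale_factor

# Lookup-table characterization, e.g. for a stepper-driven LIDAR mirror.
# Commanded angles (deg) and the correction (deg) at each entry are
# illustrative only; a real table comes from laboratory measurement.
commanded_deg = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
correction_deg = np.array([0.04, 0.02, -0.01, -0.03, -0.05])

def corrected_angle(angle_deg: float) -> float:
    """Apply the lookup-table correction, interpolating between entries."""
    return angle_deg + np.interp(angle_deg, commanded_deg, correction_deg)

print(true_weight(303.0))     # ~300 g for a 300 g object
print(corrected_angle(7.5))   # commanded 7.5 deg with interpolated correction
```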

When compensating for errors between measured values and "truth", it is critically important to separate system errors from what we might call environmental or project-specific errors. For example, the error in a global navigation satellite system (GNSS) position contains elements of both. A simple example is the set of so-called lever arms, which define the position of the antenna phase center with respect to the reference system of the platform. If the lever arms are carefully characterized, their contribution to systematic error can be totally eliminated. If they are not correct, their contribution will manifest as project (environmental) error. We were recently testing a new antenna on a drone and consistently seeing a 3 cm height error in our project results. A height bias can be caused by an incorrect vertical lever arm, an incorrect measurement of drone attitude, a focal length error and a few other issues. After some research, we discovered that we had used the wrong phase center for the antenna. In fact, the phase center error was exactly 3 cm!
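
As a rough sketch of where a lever arm enters the processing chain, the following assumes a simple yaw-pitch-roll rotation from the platform body frame to a local navigation frame; the lever arm, attitude and position values are purely illustrative, not from any real system.

```python
import numpy as np

def body_to_nav(roll, pitch, yaw):
    """Rotation matrix from platform body frame to local-level (NED) frame,
    using the common yaw-pitch-roll (Z-Y-X) convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Illustrative lever arm (meters) from the platform reference point to the
# antenna phase center, expressed in the body frame.
lever_arm_body = np.array([0.02, 0.00, -0.15])

# The antenna phase center position in the navigation frame is the platform
# reference position plus the rotated lever arm. If the lever arm (or the
# phase center it points to) is wrong, this term becomes a systematic bias.
platform_pos_ned = np.array([100.0, 200.0, -120.0])
attitude = np.radians([1.5, -0.8, 45.0])   # roll, pitch, yaw in degrees
antenna_pos_ned = platform_pos_ned + body_to_nav(*attitude) @ lever_arm_body
print(antenna_pos_ned)
```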

If you do not separate system errors from environmental errors, you will not be able to reliably model data collected for a project. In our example of stepper motor error, the uncorrected error will manifest in LIDAR data as a roll error. Various corrections can be applied to the LIDAR data based on measurements within the data, but these corrections might be highly correlated with other error sources. If the stepper error were characterized and a lookup table applied, this source of error would be eliminated, allowing one to focus solely on correcting environmental error.

A more insidious example is calibration of drone cameras. Unlike the metric cameras used in manned aerial mapping, drone cameras tend to be "calibrated" in situ; that is, self-calibration is typically used on a project-by-project basis. This approach comes from computer vision, where it is assumed that little is known about the actual characteristics of the camera. The big problem here is that focal length is highly correlated with other parameters such as flying height. With ground control confined to a single plane and some ambiguity in flying height, the adjustment will resolve to an inaccurate focal length. This causes both elevation bias and elevation scale errors. You can address this issue by always using a laboratory calibration of the camera.
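
A back-of-the-envelope sketch of this correlation, with illustrative numbers only: over flat control, the adjustment pins down the image scale (flying height divided by focal length) much better than either quantity on its own, so an error in the recovered focal length drags flying height, and every elevation computed from it, along with it.

```python
# Illustrative numbers only: a nominal drone camera and flying height.
true_f_mm = 8.8      # true focal length
true_H_m = 100.0     # true flying height above the (flat) control plane

# Self-calibration over flat terrain constrains the ratio H/f well, but not
# f and H individually. If the adjustment settles on a focal length 1% too
# long, the recovered flying height is also ~1% too high.
estimated_f_mm = true_f_mm * 1.01
scale = true_H_m / true_f_mm             # the well-determined quantity
estimated_H_m = scale * estimated_f_mm   # height the adjustment reports

height_bias_m = estimated_H_m - true_H_m
print(f"flying-height bias: {height_bias_m:.2f} m")   # ~1 m at 100 m AGL

# Heights above or below the control plane are stretched by the same 1%,
# which is the elevation scale error described in the text.
point_height_above_plane = 10.0
print(f"scaled point height: {point_height_above_plane * 1.01:.2f} m")
```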

The final word on calibration is its use as a diagnostic tool. Tracking static calibration values such as lever arms, stepper position, focal length and so forth over time can be an indicator of a system problem. For example, if focal length has remained relatively constant over a number of calibration cycles and then suddenly changes, you may have a loose lens-to-camera coupling, a shift of the camera's CMOS sensor or some other contributing cause. This would lead you to investigate and correct the problem before it manifests as an error in customer-delivered data. A routine calibration process is simply good practice!
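
A minimal sketch of such tracking, with hypothetical focal length values and an arbitrary threshold, might be nothing more than comparing each new calibration result against the recent history:

```python
import statistics

# Hypothetical focal-length history (mm) from successive calibration cycles.
focal_history = [8.802, 8.801, 8.803, 8.802, 8.801]
new_value = 8.835

# Flag any new calibration value that departs from the recent history by
# more than a few times its observed scatter; both thresholds are illustrative.
mean = statistics.mean(focal_history)
spread = statistics.stdev(focal_history)
if abs(new_value - mean) > max(3 * spread, 0.005):
    print(f"Focal length jumped from {mean:.3f} mm to {new_value:.3f} mm "
          "-- check the lens mount and sensor seating before the next delivery.")
```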

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
