Behind Sanborn’s Indoor Mapping Initiative

A 1.143Mb PDF of this article as it appeared in the magazine complete with images is available by clicking HERE

In 2013, the global market research and consulting firm MarketsandMarkets published a report on the indoor positioning and indoor navigation (IPIN) market,1 stating that "the global indoor location market is estimated to grow from $448.6 million in 2013 to $2.60 billion in 2018." It also predicted that North America would be the biggest market in the short term, with Europe following in the longer term.2

The Sanborn Map Company, Inc. (Sanborn) is addressing a segment of this market where demand exists for highly accurate 3D information about environments that can be seamlessly integrated with other data for high-value analytics. Accurately mapping and documenting as-built indoor environments and their objects is a logical extension of Sanborn’s time-tested expertise in the production of 3D building exteriors.

The SPIN (Sanborn Platform for Indoor Mapping3) initiative is a step towards making data acquisition and post-processing of complete environments more efficient than current industry practice. A common example is the request for improved solutions to support the requirements of asset, facility and inventory management. Whether analyzing object locations, remodeling, moving or implementing a new floor layout, there is demand for a representation of the current as-built environment with all assets and objects geolocated.

More often than not, existing information about a facility, including the layout of the office furniture, inventory and other assets, is out of date, and completion deadlines for such projects are tight. Most current solutions require people to perform some form of measurement. That process is time-consuming and requires significant resources when dealing with large or multi-story environments. Now consider the future of high-accuracy indoor mapping solutions: while the employees are sleeping, a facility can be scanned by multiple self-navigating scanning robots spread throughout it. In a building, the robots quietly move around the floors simultaneously, going inside cubicles, down aisles and into rooms, scanning everything while autonomously detecting and avoiding obstacles (including any people present), and returning to their starting points when they are finished. The data from all of the robots is processed at once on-site, and an engineering-grade point cloud is delivered to the client by the end of the day!

This may sound like a futuristic case; however, it is a goal that Sanborn researchers have been pursuing during the last year and a half. The philosophy driving this approach is based on the following considerations:
Autonomy: The robots can do all the mundane tasks of going around obstacles in the indoor environment independently;
Reliability: If a robot encounters a failure, it does not compromise the mission;
Scalability: Multiple robots can traverse different floors or sections of a building simultaneously;
Affordability: One person can monitor multiple scanning robots; and,
Mission Speed: Sharing the area between several robots decreases the exploration time.

Our efforts benefit from Sanborn’s long history of accurately mapping outdoor environments from airborne, terrestrial and mobile platforms using multiple sensors.

Our project to develop the indoor mapping capability started with the gathering of a set of requirements for indoor 3D mapping and modeling. The envisioned primary applications required accuracy and dictated functional specifications such as:
Range measurements from 0.1m (minimum) to 30m (maximum)
Angular resolution of less than 0.5 degree
Scalable automated process for data acquisition with minimal human involvement
Robust enough to withstand extended use (3-4 hours at a time) in normal indoor environments
Efficient automated post-processing of data
Able to produce 3D point clouds of the environment and 2D floor layouts
Flexible enough so that sensor orientations and new sensor types can be changed or added easily

Based on this list of specifications, we evaluated existing commercial off-the-shelf (COTS) systems. The most popular approach for scanning building interiors involved a stop-and-go process, with people lugging expensive terrestrial scanners on tripods and setting them up repeatedly to collect scans around occluded objects. This is especially cumbersome in residential or office settings. The more expensive of these scanners also offered colorized point clouds with very high accuracies (on the order of a few mm) and long ranges (generally more than 100m). However, this was clearly not an easily scalable or automated solution for cluttered scenes such as those often found in offices, residences and warehouses (on average, each scan takes about 5 minutes of data acquisition at one location). Post-processing involved importing the scans one by one, at about 7 minutes per scan, until the entire point cloud was stitched together, with or without targets added to the scene.
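The labor cost of the stop-and-go workflow can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: the per-scan times come from the figures above, but the number of tripod setups is a hypothetical example value, not a figure from any particular project.

```python
# Rough cost model for stop-and-go terrestrial scanning, using the
# per-scan figures cited above (~5 min acquisition, ~7 min import).
ACQUIRE_MIN = 5   # minutes of data acquisition per tripod setup
IMPORT_MIN = 7    # minutes to import/register each scan in post-processing

def stop_and_go_hours(num_setups: int) -> float:
    """Total labor (hours) for acquisition plus one-by-one scan import."""
    return num_setups * (ACQUIRE_MIN + IMPORT_MIN) / 60.0

# A cluttered office floor might need on the order of 50 setups to see
# around occlusions (illustrative assumption):
print(f"{stop_and_go_hours(50):.0f} hours")  # → 10 hours
```

At 50 setups the workflow already consumes a ten-hour working day before any vectorization begins, which is why the approach scales poorly to large or multi-story facilities.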

Another approach involved relatively inexpensive depth cameras for creating colorized point clouds of the desired scene. While a number of firms have recently shown the photo-realistic quality of the resulting meshes from such scanning devices, the accuracy is heavily dependent on the scene in question. Depth measurements often fluctuate because of the limited range of such cameras (typically 0.5 to 3m), the presence of black objects or shiny metallic surfaces, and ambient sunlight coming through windows. The resulting 3D reconstruction depth maps contain many 'holes' where no depth readings could be obtained. These new scanners definitely have utility as low-cost handheld consumer devices, but are not designed for the quality dictated by our functional specifications. Our in-house experiments with this approach showed ghost points with fairly significant errors when compared to laser scans of the same scene.
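The 'holes' problem described above can be illustrated with a few lines of code. This is a toy sketch, not any vendor's processing pipeline: invalid readings (zeros, NaNs, or values outside the usable range) are masked out, and whatever fraction of the frame they cover becomes a hole in the reconstruction.

```python
import numpy as np

# Typical usable depth-camera range cited above (assumed values, meters).
DEPTH_MIN, DEPTH_MAX = 0.5, 3.0

def valid_mask(depth):
    """Mask of pixels with a usable depth reading (finite and in range)."""
    return np.isfinite(depth) & (depth >= DEPTH_MIN) & (depth <= DEPTH_MAX)

# Toy 2x3 depth image: a shiny surface returns 0, a far wall exceeds
# the range, and one pixel returned no reading at all (NaN).
depth = np.array([[1.2, 0.0, 2.5],
                  [4.1, 0.9, np.nan]])

hole_fraction = 1 - valid_mask(depth).mean()
print(hole_fraction)  # half of this toy frame is unusable
```

In our in-house tests the unusable fraction varied strongly with the scene (dark objects, metallic surfaces, sunlit windows), which is exactly why accuracy is so scene-dependent for this class of sensor.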

The third approach we examined used 2D laser scanners with an IMU for indoor mapping. Some of these sensors met most of our needs in terms of range and accuracy. However, the requirement for an operator to push or tow each sensor around did not allow us the flexibility of easily adding more sensors in the future, or any automation in data collection. Some of the sensors also required that post-processing be done by the vendor in the cloud. This would add risk in anticipating future cost escalations, making the option a non-starter.

Sanborn’s solution was driven by recent advances in the fields of robotics and computer vision that allow robots to autonomously navigate unknown environments. These capabilities have become important in search and rescue situations after natural disasters and for battlefield surveillance. Such robots essentially scan the surrounding environment to build situational awareness, detect any openings (doors/windows) they can enter, and keep track of their position in space relative to the local point of origin. The most fascinating part of these robots’ workflow was how they scan the scene in 3D and then discard the data, except for the potential path of interest (e.g. the opening they need to enter). The eureka moment arrived when we realized that if only we could retain that data, instead of discarding it, our problem would be solved almost perfectly!

Another consideration was the choice of sensor platform; the basic decision was between airborne and ground-based platforms for data collection. Fixed-wing airborne platforms were instantly rejected for indoor environments, since they cannot hover in place, which is necessary when the system needs to query its environment and plan paths independently in real time. Rotary platforms (such as mini UAVs/quadcopters) met some of the requirements, but their limited payload capacity, short flight times (usually under 30 minutes with a 4 lb payload) and potential collision risk made them a difficult choice. Estimating six degrees of freedom (6DOF), viz. x, y, z, roll, pitch and yaw, introduced more errors in indoor data collection (with no GPS or GNSS to apply regular corrections) than the relative advantages of an airborne system could justify.

The preferred solution became the ground-based mobile scanning platforms. Recently, several companies have offered push-cart systems for indoor scanning. We were impressed with their design, but wanted to take it a step further by using an autonomous self-navigating system. This is where we leveraged the advances made by the robotics community for the basic architecture of the robots.

SPIN combines a 2D simultaneous localization and mapping (SLAM) system, based on the integration of laser scans into a planar map using scan-matching, with a 3D navigation system based on an inertial measurement unit. The combination of high-update-rate on-board 2D mapping, 6DOF pose estimation and 3D modeling consumes few computational resources, so it can run on low-weight, low-power robots with low-cost processors. The system also uses a depth camera to detect obstacles ahead and avoid them as far as possible. This is not a perfect navigation mechanism, owing to the limitations of depth camera technology discussed earlier.
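The core idea of scan-matching into a planar map can be sketched in a few dozen lines. The following toy is illustrative only, not Sanborn's implementation: it quantizes scan endpoints into grid cells and scores small candidate pose corrections by how many endpoints land on cells already marked occupied, keeping the best-scoring pose. The grid resolution and search step are assumed values.

```python
import numpy as np

RES = 0.05  # occupancy-grid resolution in meters (assumed value)

def to_cells(points, res=RES):
    """Quantize 2D points (N,2) into a set of integer grid cells."""
    return {tuple(c) for c in np.floor(points / res).astype(int)}

def transform(points, pose):
    """Apply a 2D rigid transform (x, y, theta) to scan points."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return points @ R.T + np.array([x, y])

def match_scan(scan, occupied, init_pose, step=0.05, ang=0.02):
    """Brute-force search over small pose offsets around init_pose,
    returning the pose whose transformed scan best overlaps the map."""
    best_pose, best_score = init_pose, -1
    for dx in (-step, 0.0, step):
        for dy in (-step, 0.0, step):
            for dth in (-ang, 0.0, ang):
                pose = (init_pose[0] + dx, init_pose[1] + dy,
                        init_pose[2] + dth)
                score = len(to_cells(transform(scan, pose)) & occupied)
                if score > best_score:
                    best_pose, best_score = pose, score
    return best_pose

# Usage: a straight wall observed from a pose estimate that is 5 cm off.
wall = np.column_stack([np.linspace(0, 2, 40), np.full(40, 1.0)])
grid = to_cells(wall)                   # map built from an earlier scan
shifted = wall - np.array([0.05, 0.0])  # same wall seen with 5 cm x-error
print(match_scan(shifted, grid, (0.0, 0.0, 0.0)))  # recovers ~(0.05, 0, 0)
```

A production SLAM system refines the pose with gradient-based optimization at a high update rate rather than brute-force search, and fuses the result with the IMU for 6DOF pose estimation, but the overlap-scoring principle is the same.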

SPIN currently outputs the following data: [1] a 2D map of the scanned environment; [2] a 3D LiDAR intensity point cloud of the scanned area; and [3] a topological map of the area. The dense and accurate 3D LiDAR intensity point cloud can be vectorized to extract the objects in the scene in a semi-automated fashion using COTS software. Sanborn’s historical exterior modeling expertise gives us the ability to create a photo-realistic 3D model of the interior of the building. This can further be used to build an inventory of the quantity, locations and dimensions of all scanned objects.
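One way the 2D floor layout can be derived from the 3D point cloud is to take a horizontal slice of points at wall height and rasterize its x/y footprint into a grid. The sketch below is an illustrative assumption about how such a step could work, not Sanborn's production pipeline; the slice heights and cell size are example values.

```python
import numpy as np

def floor_layout(points, z_min=0.5, z_max=1.5, res=0.1):
    """Return a 2D occupancy grid (1 = structure) from an (N,3) cloud.

    Points between z_min and z_max meters above the floor are assumed
    to belong to walls/furniture rather than floor or ceiling."""
    slice_pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ij = np.floor(slice_pts[:, :2] / res).astype(int)
    ij -= ij.min(axis=0)                      # shift grid origin to (0, 0)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=np.uint8)
    grid[ij[:, 0], ij[:, 1]] = 1
    return grid

# Usage: a toy 2m x 1m room outline sampled at wall height (z = 1.0 m),
# rasterized with 25 cm cells.
t = np.linspace(0, 2, 50)
walls = np.array([[x, y, 1.0] for x in t for y in (0.0, 1.0)] +
                 [[x, y, 1.0] for y in t if y <= 1.0 for x in (0.0, 2.0)])
print(floor_layout(walls, res=0.25).shape)  # → (9, 5)
```

The occupied cells trace the walls of the toy room; on real data the same projection, followed by line extraction, yields the vectorized 2D layout.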

Indoor location information is valuable, as underscored by the entry of many large companies into this area. Most of the focus right now is on the location of people in environments such as malls and stores. The recent introduction of Google’s Project Tango phone prototype has generated a lot of buzz about the potential of hand-held scanning devices for 3D printing and augmented reality video-gaming. We believe that there is a market for highly accurate 3D models of environments such as buildings, homes and warehouses, seamlessly integrated into databases containing additional facility information to enable advanced analytics. The SPIN initiative makes data acquisition and post-processing more efficient for highly accurate datasets. The miniaturization of sensors, their falling costs and the increasing availability of 3D modeling software will only accelerate the push towards '3D everything' in the coming years. Watch this space!

1 "Indoor Positioning and Indoor Navigation (IPIN) Market [(Network-based Positioning; Independent Positioning; Hybrid Positioning); by Solutions (Maps and Navigation; Location based Analytics)]: Worldwide Market Forecasts and Analysis (2013-2018)" Available at

Sharad V. Oberoi is a research scientist coordinating 3D modeling and visualization products and services for The Sanborn Map Company, Inc. He has a PhD in Civil & Environmental Engineering from Carnegie Mellon University and an MA degree from The University of Chicago.
Sanchit Agarwal (GISP, CP, CMS) is the Director of Mapping Operations at The Sanborn Map Company, Inc. He has over 9 years of experience in the field of geo-informatics and obtained his MS degree in Mapping & GIS from Ohio State University.
John R. Copple is the CEO of The Sanborn Map Company, Inc., where he focuses on the overall management and growth of the company. He has a broad knowledge of information technology, mapping processes, software development and digital processing systems; and, has extensive business experience spanning 36 years of leading, developing and creating value with high technology multi-national corporations.
