Art, LIDAR, and Natural History

A 1.542Mb PDF of this article as it appeared in the magazine complete with images is available by clicking HERE

For many people, National Geographic provided the inspiration to explore a vast and diverse planet that would otherwise have gone unnoticed. As technology evolves, viewers' demands for immersion and realism increase. With digital cameras becoming increasingly inexpensive and sophisticated, and 4K recording now in the hands of the amateur videographer, people expect more than a flat, two-dimensional representation of reality. LCD televisions are becoming curved to offer greater immersion, virtual reality goggles are now available to consumers at affordable prices, and video games resemble the world around us more closely than ever before. MadSpelunker LLC looks to address this growing demand for immersion and realism, and the company believes that the integration of LIDAR, photography, and videography is the answer consumers are looking for.

For more than a decade, LIDAR technology has been used primarily for industrial and civil engineering applications. MadSpelunker looks to change all that by integrating full-frame digital photography and 4K videography into point clouds, in order to create navigable 3D experiences. These creations are not to be confused with video game environments.

The Claustral Canyon project, in part inspired by National Geographic, was funded by the generosity of Kickstarter backers. Kickstarter is a "crowdfunding" platform that enables people to contribute to projects they deem to have creative or artistic merit. Having met their funding goal, Luke Farrer, CEO and founder, and Michael Breer, Chief Photographer and Digital Artist, set off for the Blue Mountains in New South Wales, Australia. What ensued was 12 days of photography, videography, and laser scanning in one of Australia's most remote slot canyons. The intent: collecting the data necessary to create a navigable 3D experience, bringing end users from their lounge chairs to a virtual experience that approximates what it is like to be there.

You might ask, what will the interface be for such an experience? While MadSpelunker looks to be a major content provider of virtual experiences, companies like Sony and Oculus VR look to innovate on the hardware (interface) side of the equation. Oculus VR, also initially funded through Kickstarter, has become the most innovative startup in the field of virtual reality (VR) headsets. Its Cinderella story culminated in a $2 billion acquisition by Facebook in late May of this year. Consumers use the Rift, its flagship product, for a more immersive virtual experience, normally within the context of a gaming environment. Any gaming experience built in Unity (a popular game engine) can be viewed and played on the Rift and on Sony's headset, the Morpheus. MadSpelunker will be releasing the Virtual Claustral experience as a Unity-enabled experience.

Though the integration of these technologies is in its infancy, and the renderings are not yet approaching the level of realism that MadSpelunker is aiming for, new techniques and methods are being developed to bring virtual experiences closer to the real thing. Both Farrer and Breer are striving for the type of realism that would fool the viewer into thinking that he or she is really at the location in question.

To surmount some of these challenges, great care has been taken in compressing and mapping animation loops. In other words, anything that is "animated" within an environment (trees, plants, water, even fauna) is recorded in HD video, the frames are separated, and loops are run in such a way that there is sufficient data to replicate a moving, changing, dynamic environment.
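MadSpelunker has not published the details of its loop-compression pipeline, but the basic idea of replaying a finite run of frames without a visible jump can be sketched with a simple "ping-pong" index, where playback bounces back and forth through the captured frames. The function name and approach below are purely illustrative, not the company's actual method.

```python
def loop_frame(t, n_frames):
    """Map a monotonically increasing time step t onto a ping-pong
    frame index (0, 1, ..., n-1, n-2, ..., 1, 0, ...) so a short
    captured clip repeats without a visible seam at the loop point."""
    if n_frames < 2:
        return 0
    period = 2 * (n_frames - 1)  # one full forward-and-back cycle
    t %= period
    return t if t < n_frames else period - t
```

A crossfade between the clip's tail and head is a common alternative when ping-pong playback would make the motion look unnaturally reversed (e.g., flowing water).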

Farrer and Breer have noted that some of the dynamism necessary to create a virtual environment is taken for granted by the casual observer. For example, plants and trees, although sometimes appearing to be static, are actually perpetually in motion. While this is something that often goes unnoticed, Farrer and Breer have found that stationary vegetation within the context of a virtual environment looks "video gamey". MadSpelunker's photo mapping solution allows video, as well as photography, to be mapped onto meshes created from point clouds. The only requirement is that the video be shot from a stationary camera. This allows video to be made three-dimensional by being mapped, or wallpapered, onto a 3D surface, making the environment instantly appear more alive and realistic.
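The geometry behind mapping footage from a stationary camera onto a mesh can be illustrated with an idealized pinhole projection: each 3D vertex is projected through the camera to find which part of the video frame should be pasted onto it. The sketch below is a simplified illustration assuming an ideal pinhole camera with no lens distortion; the function and parameter names are hypothetical, not part of MadSpelunker's actual toolchain.

```python
def project_to_uv(point, focal_px, width_px, height_px):
    """Project a camera-space 3D point (x right, y down, z forward)
    through an ideal pinhole camera and return normalized texture
    coordinates (u, v) in [0, 1] for a video frame of the given size.
    The image center is assumed to sit at (0.5, 0.5)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = 0.5 + (focal_px * x / z) / width_px
    v = 0.5 + (focal_px * y / z) / height_px
    return u, v
```

Because the camera does not move, these coordinates need to be computed only once per vertex; thereafter the video simply plays through the fixed mapping, which is what makes the approach practical.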

Through their participation at the HXGN/Leica conference in March, it became clear to Farrer and Breer that there was overwhelming enthusiasm and need for the integration of point clouds with realistic color information and shading. A number of participants, with interests ranging from crime scene investigation to historical preservation, expressed interest in collaborating with Farrer and Breer to make more realistic renderings of 3D environments. In fact, Geoff Jacobs, Senior Vice President for Leica Geosystems, made it clear in his introduction of the duo that Breer was the first photographer in the history of the conference to present to the laser scanning community, and that this should be an indication to attendees of a closer relationship between scanning, photography, and videography.

MadSpelunker is quick to distinguish what they are doing from what video game companies are doing concurrently. The 3D environments are not fabricated in the way video game worlds are; they are closely and carefully calibrated recordings of, in this case, a slot canyon in the Blue Mountains of Australia. In fact, were there a need to construct a bridge, install a water purification system, or build anything else man-made for that matter, the measurements recorded would hold up to scrutiny. And though some of the renderings can appear "gamey" at first glance, they are in fact a record of a natural environment that is arguably closer to the real thing than what any other single technology, in isolation, could produce.

Both Farrer and Breer have acknowledged the challenges in integrating these technologies. For example, when documenting a natural environment, the need for videography is critical, and laser scanners cannot currently capture point clouds in motion.

Claustral Canyon is full of ferns that are perpetually in motion, even in the slightest of breezes, and while the P20 scanner is able to capture these ferns in considerable detail, multiple scans from different positions create a composite that distorts the ferns, much in the way that a long exposure photograph can capture a moving object in several positions within a given frame. As the viewer moves closer to the subject, in this case the fern models, the level of distortion increases.

Creating realistic, moving fern models that can be incorporated into point clouds is one of the many problems that MadSpelunker endeavors to solve. This, of course, raises the question: will kinetic, or moving, point clouds of large-scale natural or man-made environments be possible in the near future? At the moment, the group is settling for mapping motion pictures onto static point cloud meshes, but both are hopeful that the reality of dynamic laser scanning is not far behind.

Luke Farrer lives in Brookline, MA and is an entrepreneur with more than 10 years' experience in 3D computer graphics. Prior to founding MadSpelunker LLC, Farrer worked at a start-up in Somerville, MA, that specialized in facial motion capture technology.

Michael Breer is a professional photographer and entrepreneur living in Cambridge, MA. Breer has extensive experience in the field of photography and digital art. Having worked extensively for Sotheby’s International Real Estate in Boston’s Back Bay, Breer has labored to bring richer multimedia content to high-end real estate. Breer has a keen interest in the integration of LIDAR, photography and videography, and would like to see this synthesis result in a new and exciting artistic medium.
