
ISPRS Symposia Highlights


Who is really writing this?

As I embark on another account of my travels, I am acutely aware that readers may be in doubt as to the provenance of the scribble below. I read in an article in The Economist that experts can infer whether prose was written by a human or by large language models (LLMs) from the world of AI: the clues lie in the use of certain words. LLMs favor “pivotal”, “realm”, “delves”, “potential”, “intricate”, “meticulously”, “crucial”, “significant” and “insights”, though style can also be a giveaway.

Lidar morsels

Sometimes I like to add nuggets to my editorials in the print edition of the magazine, to provide flavors of articles that I’ve come upon and which describe interesting aspects of our lidar world and where it’s going. Sadly, when I exceed my word limit, these are the first victims of my publisher’s red pen. In this scribble, therefore, I’ll start there.

It’s always gratifying to discover lidar-related articles in the popular press. The British newspaper The Guardian reported recently on crowdsourcing of archaeology in England, based on multiple sources, including satellite imagery and lidar. Members of the public are helping to find ancient sites and monuments in a project called Deep Time, a joint initiative between DigVentures and the National Trust. This “citizen science” project has proved successful in three parts of England so far, with multiple new finds revealed, and work in other areas is planned.

A piece in The Economist explained that Ukraine is adding navigation packages to its military UAVs so that they can operate in GNSS-denied areas. LIDAR Magazine readers understand the necessary technology, but it’s satisfying to read an account for the layman that extols the advantages of IMUs and of matching UAV mission imagery against satellite reference imagery, aerial photography or video. Lidar is used too, and the results go beyond navigation to target detection. Most of the solutions are pragmatic and economical, of course, reflecting the circumstances in which they are being put to work.

There’s a guest editorial in our sister magazine, The American Surveyor, by Jed Gibson of R.E.Y. Engineers in Folsom, California. It’s another piece on a subject that arose in many different forms – as I’ve reported previously – at Geo Week in Denver this February. In a nutshell, anyone can press buttons, but it takes a professional to produce a deliverable that’s accurate and reliable. Part of that professional’s approach to the job, moreover, probably involves ground control points. Here’s Jed’s take: “Having a good lidar point cloud or orthoimage, calibrated to surveyed control, and mapped to the professional standards set by countless professional groups, is what distinguishes a surveyor’s product from that of the average Joe.”

Many readers will have encountered or read about detection of gases such as methane using a form of lidar, often deployed on crewed aircraft or UAVs. The goal is not so much to identify the miscreants as to encourage them to fix the leaks. The science of leak detection using lasers is advancing apace and one recent development is intriguing – the use of quantum cascade lasers for mid-infrared detection of gases. A successful system has been developed by Aeris Technologies, but there are several players in the market. Interband cascade lasers are cheaper, but don’t go so far into the infrared. These tools are incredibly sensitive and can detect concentrations at the fraction-of-a-part-per-billion level, which is important, for example, for monitoring emissions from semiconductor manufacture. Is this lidar? Maybe it isn’t, as ranging is not the goal, but geospatial folk should know about this so that they can offer a service that collects yet another type of spatial data, for a market that can only grow.

More down-to-earth is my most recent “find”. There’s a concise, accessible summary of 3D imaging – the term is used to encompass measurements from both cameras and lidar – for industrial applications in the current issue of Photonics Spectra, well worth a read.

ISPRS Technical Commission I symposium

ISPRS’s four-year cycle was disrupted by Covid: its big quadrennial congress, in Nice, France, was postponed from 2020 to 2022. The next one is in Toronto in July 2026. In the year midway between congresses, ISPRS runs symposia, one for each of its five technical commissions. So far, LIDAR Magazine has attended two of these. Technical Commission I (Sensor Systems) ran an exquisitely organized symposium in Changsha, a city in China of over eight million people, in May, with just under 800 participants on-site and another 800 online for a software competition. Participants were generously made welcome in a fine hotel overlooking the confluence of the Xiangjiang and Liuyang rivers, not far north of Juzizhou Island with its enormous statue of the young Mao Zedong.


The likeness of a young Mao Zedong, Juzizhou Island, Changsha.

The opening session and the plenaries were elegant and well attended. Christian Heipke gave a fine overview of deep learning in remote sensing, applauding the many successes while sounding a note of caution that not all questions are yet answered. Two venerable doyens of Chinese photogrammetry and remote sensing, Shen Jun and Deren Li, belted out their stuff in English – grandstand performances from these global names – on, respectively, China’s 3dRGLm 3D mapping program – the underlying thinking of which bears some similarities to 3DEP – and the country’s satellite constellations for earth observation. The latter are dense and growing. I marveled that Tang Xinming, who plays a lead role in the design of these systems, finds time to be Technical Commission President.

In the afternoon, Gong Jianya gave a keynote complementary to Christian’s, covering the latest approaches to interpretation of remotely sensed data. Charles Toth and Naser El-Sheimy gave magisterial overviews of the challenges to navigation and positioning. The day ended with further keynotes, from Guo Renzhong on the use of remote sensing technology to empower smart cities and why this matters in today’s increasingly urbanized world, Wolfgang Kainz on GIScience and spatial data science, and Jonathan Li on mobile lidar in difficult environments and the success of various deep learning models. All these keynotes included context and some historical background. The audience was privileged indeed to have been present on this opening day: this collection of presentations provided a brilliant cameo of the status of geospatial science.

Unfortunately, it is the lot of ISPRS Council members that meetings take place on the same days as technical sessions, so when I read the technical program I suffered only disappointment at what I would miss. Nevertheless, I managed to attend some of the sessions on lidar, laser altimetry and sensor integration. Some of the papers focused on photogrammetry, but the one on using satellite imagery to generate DEMs of the Tibetan Plateau was well worth hearing. Results were presented of work at The Ohio State University on combining multiple DSMs of the same area. A paper on mass balance using altimetry (ICESat-2), gravimetry and in situ measurements was above my head, but served as a useful reminder that LIDAR Magazine mustn’t forget gravimetry. A paper on hyperspectral lidar equally underlined that the technologies available to us are always improving and expanding.

The afternoon session on point cloud generation and processing also included some strong contributions. Again the scope extended to photogrammetry, lidar and SAR, but the focus was more on the algorithmic side. Li Guoyuan’s invited talk on satellite laser altimetry data processing was a fine, up-to-date overview. He was also the lead author of the aforementioned paper on Tibet. Next on the podium was Xing Yanqiu, with a new lidar-visual-inertial (plus deep learning, of course) approach to SLAM for forest environments where GNSS can be flaky. This intense, well illustrated paper was followed by one on synthetic tree models, then a Finnish contribution on lidar in forests with GNSS difficulties – the conference organizers had arranged the program well. The day drew to a close with three papers that reminded us of the data sources that must figure in our thinking: SAR – simulated from UAVs and real from satellites – thermal infrared from satellites, and UAV nadir imagery reinforced by oblique iPhone 15 lidar acquired on the ground.

Back row, left to right: Charles Toth, The Ohio State University, chair of the ISPRS Scientific Advisory Committee (and numerous previous roles); Jiang Jie, Beijing University of Civil Engineering and Architecture, ISPRS secretary general; Clive Fraser, emeritus professor, University of Melbourne, and world-class photogrammetrist; Christian Heipke, Leibniz Universität Hannover, ISPRS past president; Wolfgang Kainz, a professor at both the University of Vienna and Wuhan University and editor-in-chief of the ISPRS journal International Journal of Geo-Information. Front row, left to right: Orhan Altan, emeritus professor, Istanbul Technical University, guest professor at Wuhan and various other places, president of ISPRS 2008-12; Nicolas Paparoditis, director of research and higher education at IGN, the French national mapping and forest inventory agency, head of ENSG-Géomatique, the French engineering school of geoinformation, vice-president of Université Gustave Eiffel, and vice-president of ISPRS.

ISPRS Technical Commission II symposium

Technical Commission II (Photogrammetry) ran its symposium in a stifling Las Vegas, but it attracted only 136 participants. It may turn out to be the smallest of the five 2024 symposia. Why? Has photogrammetry shot its bolt and no longer attracts interesting research? Has Changsha replaced Las Vegas as the world’s top conference destination? Can nobody get a US visa these days?

Nevertheless, the event was interesting and rewarding in many ways. Unsurprisingly, the US delegation was the largest of the 24 countries represented (and I was excited to meet the big deputation from University of California San Diego), but there were significant groups from Germany, China, Italy and Japan, countries that have led the photogrammetric research agenda for most of this century.

In some ways this was a conference for the monastic. Participants who had registered late wore badges in a smaller font, described to me by one as “public shaming”. The tutorials the day before the main symposium opened were held – absent breakfast, coffee, tea or lunch – in a room at around 15℃. While temperatures climbed towards 40℃ outside, I noticed a shivering student in coat and sweater, trying to focus on the second tutorial, by Erica Nocerino, Fabio Menna and Alessio Calantropio on benchmarking underwater photogrammetry. I was delighted that during the introduction there were several references to the ISPRS Scientific Initiatives and Educational Capacity Building Initiatives. I manage these and it was heartening to hear testimony that the grants of 10,000 Swiss francs have been well spent. See here.

This tutorial was more introductory than the first, “From traditional to AI-based 3D scene capture and modeling,” by identical twins Martin and Michael Weinmann together with Dennis Haitz. What a shame the organizers were unable to persuade NASA stars Mark and Scott Kelly to keynote! This tutorial was superb, drawing on a wealth of literature while starting from the basics of photogrammetry and zooming in on more advanced topics such as structure from motion, depth from single images and, of course, the use of AI, 3D scene reconstruction and advanced scene representations. This was the first of many appearances during the week of 3D Gaussian splatting, a popular, modern approach to rasterization. Further to my comments above on geospatial folk always being grounded in reality and their responsibility to provide geospatial information according to project specifications, some of the questions at the end of this tutorial focused on the applicability to mapping of some of the new techniques.

Returning to the ascetic later in the week, all speakers had to repeat questions from the audience as there were no traveling microphones. We admired the one-hour, alcohol-free “gala dinner”, surely the way to go in these days of instant gratification. I realized how old and out-of-date I am while I watched Technical Commission President Alper Yilmaz manage the whole event with his phone. No pen, no notes, no briefcase, no laptop, sans everything but iPhone. Yet the old world had the last laugh, as much time was spent persuading, in the face of a loose connection somewhere, the grumpy projector to countenance the PowerPoints.

The opening ceremony the following morning was minimal compared to the pomp in Changsha and we quickly moved to the first keynote, “Graph synchronization and rigidity: unraveling the theory underneath structure from motion,” by Andrea Fusiello, quite arcane and mathematical yet staple fare for us bundle adjustment freaks.

The formula for this symposium was similar to the Changsha event – keynotes from important people connected by a conveyor belt of mainly youngish researchers presenting their work, one every 15 minutes. Four days of this was intense and demanding. There’s no space here even to attempt a precis, so let’s focus on the keynotes and a few papers that grabbed my attention. And remember that many of the papers reported only on imagery rather than lidar or SAR.

Lidar-centric papers that I enjoyed covered topics such as: using the number of returns in a DJI Zenmuse L1 point cloud; under-represented urban objects for 3D point cloud classification; UAV-based lidar bathymetry at an Alpine lake; deep-learning-assisted exponential waveform decomposition for bathymetric lidar (this was from RIEGL researchers in Austria, but was presented by Mai-Linh Truong of RIEGL USA, well known to LIDAR Magazine readers); deep-learning-based DSM generation from dual-aspect SAR; large-scale 3D terrain reconstruction using 3D Gaussian splatting.

The second keynote was delivered by Dalton Lunga of Oak Ridge National Laboratory, entitled, “The role of AI and earth observation in safeguarding society and economic stability.” Naturally, this was technically astute, but Lunga put the science in context by describing the sort of situations it can address.

Qassim Abdullah, Woolpert’s chief scientist, is a pillar of our community and was given a few minutes to talk about the latest edition of the ASPRS Positional Accuracy Standards for Digital Geospatial Data. Among several new addenda, there’s one on mapping with lidar. Again, we saw the link between the academic forum and the reality of delivering geospatial products that meet specifications. What a pity he had so little time.

He was followed by a keynote from Tanya Birch of Google Earth, “Pixels with purpose: illuminating the path to change”. She started with the sobering statement that we’re losing species at an unprecedented rate. She described how AI, cloud computing and many millions of images – and millions more for training the models – can shed light. When hearing this sort of presentation, I become more depressed than usual: as my own time on our planet draws to an end, I’m leaving it in the hands of those who favor deforestation and fossil fuels. Why? Why is it so hard to accept the reality of what is going wrong?

Topics from the third day’s papers that may be of interest included: archival framework for re-use of cultural heritage 3D survey data: OpenHeritage3D.org (this is the work of the UCSD group I mentioned earlier – I’m going to visit them soon, in pursuit of an article and/or a podcast); multimodal approach to rapidly documenting and visualizing archaeological caves in Quintana Roo, Mexico (a poster by the UCSD group); end-to-end geometric characterization-aware semantic instance segmentation for ALS point clouds; automatic vectorization of power lines from airborne lidar point clouds (I asked why they didn’t use commercial software for this; the answer was that they didn’t have any!); airborne lidar point cloud filtering algorithm based on supervoxel ground saliency. Curiously, the day ended with a pitch for The Photogrammetric Record!

Hannah Kerner of Arizona State University kicked off the Thursday program with an engaging keynote, “Unlocking the potential of planetary-scale machine learning for a sustainable future.” There was some overlap with previous keynotes, in the sense that we clearly have the imagery and the technology to characterize land use and its changes on vast spatial and temporal scales. Can this make a difference? The afternoon keynote, “Effective monitoring of the dynamic marine environment,” by Konstantinos Karantzalos of the National Technical University of Athens, had hardware descriptions that would have belonged more in Technical Commission I, but also encompassed cloud AI and edge AI. Again, the theme was magnificent science to address pervasive, worsening, global problems, one example being detection of marine plastic litter.

Thursday topics of interest included: roof wireframe reconstruction based on self-supervised pre-training; 3D reconstruction of a complex architecture (like several other pieces of work that were reported during the symposium, this one suffered from lack of ground control points); high-detail low-cost underwater inspection of large-scale hydropower dams; active and passive UAV-based surveying for eulittoral zone mapping (i.e. the intertidal area); development of a precise tree structure from lidar point clouds (poster); extraction of block walls from MMS point clouds; accuracy assessment of UAV-lidar compared to traditional total station for geospatial data collection for land surveying contexts (poster); potential of hyper-temporal terrestrial laser point clouds for monitoring deciduous tree growth (and the TU Dresden crew did use commercial software!); structural health monitoring of bridges with personal laser scanning; efficient calculation of multi-scale features for MMS point clouds; zero-shot detection of buildings in mobile lidar using language vision model; accurate calculation of tree stem traits in forests using localized multi-view registration; optimizing mining ventilation using 3D technologies.

Friday began with a keynote of the very highest quality from Zan Gojcic of NVIDIA, “Text-to-3D pipeline for improved visual fidelity.” Zan excelled as a PhD student at ETH Zürich before moving to NVIDIA’s research center in Toronto. He was fabulous and presented, at impressive speed, both theoretical and practical approaches to generating the very best renderings possible in real time. We were transfixed! Nevertheless, undergirding much of what he described was the photogrammetry that we all know.

Once again, the day provided numerous papers of interest. The lidar-centric topics included: mobile phone based indoor mapping [by Christoph Strecha, CEO of Pix4D, but it was a pitch for the technology rather than his software, and for Open Photogrammetry Format (OPF); he included a scan of the conference room, but the iPhone lidar didn’t have the range to reach the ceiling; and yet again, there was a discussion of GCPs]; SLAM for indoor mapping of wide area construction environments; specialist approaches to driving scene perception in poor visibility conditions.

Let us remember that ISPRS symposia feature academic research, presented mainly by students and professors with an eye on publication. In the geospatial world, however, even the most recondite research impinges on the practical. LIDAR Magazine is following up with several authors, inviting them to enhance our pages. In the meantime, most of the papers have been published in the ISPRS Annals or Archives, and the tutors and some of the keynote speakers are usually willing to provide their slides to participants.

As I ground homewards on the I-15, inching across the Inland Empire in heavy traffic, a six-hour marathon of just under 500 km, I marveled at memories of the five-hour, 1600-km train ride from Beijing to Changsha, comfortable, punctual and about the same price as an air ticket. It is reassuring that we will have one or two high-speed rail lines in the US in the next 20 years, as well as lots more lanes on the I-15.

Let me end on a humorous note. Here’s a photo I took in a restaurant in the Wulingyuan Scenic and Historic Interest Area, which is a UNESCO World Heritage site not far from the city of Zhangjiajie. It’s incredible, fantastic, in a way that’s not entirely different from the wonders of Bryce Canyon, Utah. I looked at the menu and uttered the obvious remark, “Ah, everything you could want to set up a small photogrammetry business!” The three world-class photogrammetrists in the background didn’t get it. Can anyone bring insights to this mystery?

Three photogrammetrists walk into a bar…


[1] Anon, 2024a. Scientists, et ai, The Economist, 451(9403): 66-67, 29 June 2024.

[2] Alberge, D., 2024. Hobbyist archaeologists identify thousands of ancient sites in England, The Guardian, 27 May 2024.

[3] Anon, 2024b. Seeing for themselves, The Economist, 451(9399): 66-67, 1 June 2024.

[4] Gibson, J., 2024. Don’t be a button pusher, The American Surveyor, 21(1): 2, March/April 2024.

[5] Eisenstein, M., 2024. A quantum leap for sensitive gas analysis, Photonics Spectra, 58(4): 36-42, April 2024.

[6] Dechow, D., 2024. The evolution of 3D imaging in industrial applications, Photonics Spectra, 58(6): 30-36, June 2024.
