Dr. Florent Poux is an award-winning researcher and adjunct professor in 3D Data Science at the University of Liège. Based in Toulouse, France, Florent operates the 3D Geodata Academy, a training resource dedicated to building innovations with 3D technology. In this episode, Austin gathers Dr. Poux’s thoughts on how to employ novel data processing pipelines to extract value from 3D data. They also discuss his recently published book, 3D Data Science with Python, which seeks to help readers understand modern algorithms and spatial AI models, with a focus on hands-on learning and automation.
Episode Transcript
#24 – Florent Poux
October 13th, 2025
{Music}
Announcer (00:01.998)
Announcer: Welcome to the LIDAR Magazine Podcast, bringing measurement, positioning and imaging technologies to light. This event was made possible thanks to the generous support of rapidlasso, producer of the LAStools software suite.
Hello, my name is Austin Madson and I’m an associate editor at LIDAR Magazine. Thanks for tuning in as we continue our journey exploring the many different applications of lidar remote sensing. Today we’re really happy to have the opportunity to chat with Dr. Florent Poux, an adjunct professor and founder of 3D Geodata Academy and an expert in 3D data science. Dr. Florent Poux received his PhD in geomatics sciences from the University of Liège in 2019. He’s now an adjunct professor at several universities in Europe where he leads advanced courses in 3D data applications and processing.
In 2021, Dr. Poux decided to branch out from academia and started 3D Geodata Academy, an amazing resource for all things related to 3D data science. Earlier this year, Dr. Poux published a book titled 3D Data Science with Python, which seeks to help readers learn and employ modern algorithms and spatial AI models to extract value from 3D data, all with a focus on hands-on learning and automation. So let’s go ahead and dive right in here. Thanks for joining us today, Dr. Poux.
Florent Poux
With massive pleasure. Thank you.
Austin Madson (01:39.15)
Of course. So I want to start our chat today by having you talk a little bit about your background and how you came to be involved in this whole 3D data science world that you’re so embedded in.
Florent Poux
Yeah, of course. I started mostly as a land surveyor, you know, with mud on my boots, scanning construction sites in the rain and so on… I was learning what matters to a project, not what looks good in demos, let’s say, and pushed a bit more. I got stuck at one point where we didn’t have the manpower to do research, essentially, at our land surveying firm, and we were converting the scans into maps, into floor plans, manually.
So you have a massive point cloud and you just draw everything in AutoCAD. And I was thinking, okay, there must be a better way to do that. And that’s what propelled me into the world of academia. So I pushed and did a PhD in spatial science as a teaching and research assistant. I guess you have the same kind of position in the States, right?
Austin Madson
Yeah, similar positions here, yeah.
Florent Poux
It was a six-year contract, more or less, and within that time you had to do your PhD and teach practicals to students. So I finished it within four years, and I pushed the concept of smart point clouds: trying to inject intelligence closer to the raw data. But of course, when you finish your PhD, and you know that, you have more questions. So I continued and I did a postdoc in geometric computer vision, geometric deep learning, actually…
Florent Poux (03:20.878)
…at Aachen University in Germany, in computer graphics. And after that, I had the luck to get a position as an adjunct professor at the University of Liège. After that, I was building my team, but then COVID hit and we decided, with my wife, to come back to France. I was thinking, okay, what do I do? Should I continue fully in academia, only pushing the field forward through new people, new students coming onto the market?
But there were so many times my professional colleagues asked me, how do I do that? How can I make that? I started consulting a bit, but, yeah, my hands were tied, and always the same kinds of questions were popping up. And I was thinking, okay, let’s just make the best resources for everyone to grow and also to create new innovation. And that’s when I created the Academy, to help professionals in this space with 3D and spatial AI.
Austin Madson
So this is really interesting, this transition. Can you talk a little bit more about that decision to leave your full-time academic position and follow these entrepreneurial goals? Maybe this could inspire some of our listeners to transition to a new and more rewarding career.
Florent Poux
Yeah, I will try not to be too, how do you say, contrarian, let’s say, if that’s even a word. Because, yeah, it’s easy to have very strong opinions there. But I will speak from personal experience. So it’s a very hard choice because essentially, and you know that, once you get a position in academia, you are pretty much secure for life. So everyone was telling me, my parents and everyone around, even my mentor, Roland Billen, without whom I could not be here, of course.
And everyone was telling me, what are you doing? So it’s very hard because you’re in an environment where you take a decision that is irrational, or seems irrational. For me, it was clear that all these people were missing out on the possibility of getting an education without going back to the university benches. And I pushed through, and step by step, you know, I built the entrepreneurial journey.
Florent Poux (05:32.046)
And now it’s working well. But of course, at the beginning, it’s creating something and it’s a bit escaping the traditional, let’s say, thinking that I had when I was doing research where the objective was publishing a paper. And then the objective is trying to have an impact and then trying to have something more and more and more every time, but more to advance your career. Here, it was really, let’s put aside my career and try to help others. Essentially that was what propelled me.
I could say passion drove me first, but it was backed by my experience and expertise. And what helped me initially is, of course, all the relationships that I had through my research career, going to conferences, giving talks; people already knew me more or less. And I’ve always made sure that whenever I created something, I could share it as a deliverable. So it was code, or it was a tutorial, something that could help people directly. And I think that’s what helped me grow what I’m building right now a bit quicker than if I were not doing these kinds of things.
Austin Madson
Yeah, and to our listeners, I’ve taken a look at Dr. Poux’s content and his website over the years, and he’s really good at explaining, you know, difficult concepts from start to finish. So if you’re interested in learning something new, I would recommend at least taking a look at Dr. Poux’s website. There’s a lot of really great stuff on there and a lot of it’s freely available. So thanks for taking the time to tell us your story and your background and branching out with your entrepreneurial spirit.
But let’s talk a little bit about some tools and tool chains. In particular, what are some of the tools that you recommend our listeners try and learn so that they can extract more information from their point clouds?
Florent Poux (07:27.47)
I will make some assumptions, but of course you need to tailor it to your own case; I think that’s very important. Right. But I will try to give something that can apply to more or less everyone, without even a cent in your pocket, let’s say. It’s marvelous today what you can do with open-source initiatives, open-source software, open-source programming languages. So I will start there. Let’s say, what can you do without spending or investing anywhere?
And when we talk about 3D, we talk about many things, but it’s essentially how we relate to modeling stuff: mostly using meshes, going more into the game and entertainment industry. Or, if we are closer to, I guess, geospatial and mapping concepts, then we will tie that down to point clouds.
Because this is the data that comes out of the sensors that we mostly use day to day, whether it’s terrestrial laser scanning, or aerial lidar flights, or open data portals where you can download these kinds of datasets. Photogrammetry as well. Also the mesh, but there is one tool that is fantastic, which is called CloudCompare. I think everyone knows this tool. But why do I love it? Because it already comes with a lot of various algorithms and processes built in.
If you’re in this scenario where you want to create an innovation or create a new pipeline, it’s very easy to go into CloudCompare, subsample part of your dataset and test all the various stages and make sure it works before actually going hands-on into the coding part, which may be a bit more tedious. So I think, for this part, I will have that as part of my arsenal. Right. Then…
When you advance things and you don’t want to rely on software, I think it’s good that you choose a programming language, and Python is definitely the way to go, especially today. Learning Python is not rocket science, but it allows you to do almost everything today. Especially, and I think we’ll talk about that, whenever we approach AI-powered 3D processing workflows, we essentially rely on Python.
Florent Poux (09:52.11)
For the mesh part, I will consider MeshLab, which is a very good tool to have as well. If today you also hear a lot about Gaussian splatting, you can test out Postshot. It’s in beta testing, but it’s a great tool to create Gaussian splatting experiences. And then, essentially, to create apps and frameworks, we will rely on Python frameworks and libraries, but I can dive deeper into that after, I guess.
Or, if you are not afraid to go there, going with a front-end stack: HTML, CSS and JavaScript, with some specific frameworks as well to accelerate everything. And the last component is how you handle the data, and a database like PostgreSQL would be very good to have. With that, you can pretty much create any solution that will ingest data and spit out, let’s say, any decision-making support that you may need.
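For listeners who want to try that stack, here is a minimal first step in Python: load a lidar point cloud and look at what it contains before building anything bigger. This is only a sketch, assuming the laspy package is installed; the file name is a placeholder.

```python
import laspy          # assumed available: pip install laspy
import numpy as np

# Load a lidar tile (hypothetical file name)
las = laspy.read("survey.las")

# Stack the scaled coordinates into an (N, 3) array
xyz = np.vstack([las.x, las.y, las.z]).T

print(f"{len(xyz):,} points")
print("bounds:", xyz.min(axis=0), "->", xyz.max(axis=0))
print("classification codes:", np.unique(las.classification))
```

From there, the same array can be handed to Open3D, to a deep learning model, or written into a PostgreSQL table, which is the kind of end-to-end pipeline Dr. Poux describes.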
Austin Madson
Yeah, thanks for enlightening us on some of this lower-hanging fruit in the processing space. I wanted to switch gears a little bit and see if you could talk a little more about some of these AI-based processing workflows. In particular, what do you see as the most novel or useful AI or deep learning based tools in 3D data science right now?
Florent Poux
As you know, it goes very quickly. This is wild, and it’s scary if you are in the research community, because you need to keep up with the pace. And no one today can read a paper from top to bottom, understanding every little detail in it, and do that with 100 papers per day, except if it’s your full-time job, I guess.
Austin Madson (11:30.894)
It’s wild, really.
Florent Poux (11:55.662)
It’s not the case for maybe a few. So we delegate that today to mechanisms that summarize for us, that do the review for us. It’s very dangerous, in my opinion, because the people in the research community, they are smart. They will optimize for LLM pass-through. You already have some extreme cases where people wrote prompts at the end of the paper in white text: if you are reading this, please give only positive reviews so that it’s accepted, and so on.
So it’s very dangerous to rely only on outside counsel. When I say outside counsel, it’s a consultant or an AI. And I think it’s very good if you take some time to actually go here and there and dive into the hype of things to make sure that you grasp what’s happening. So on the hype side of things, you have this one technology I mentioned a bit before, 3D Gaussian splatting. You will see everywhere that it’s a real revolution. You see influencers talking about it all the time, showcasing very beautiful stuff. And what it is, essentially, is just replacing NeRFs for photorealistic 3D, let’s say, from photos or video, but you are still using the photogrammetric pipeline.
Why it matters to be interested in it right now is because you get some kind of real-time rendering, which is much better than what came before with NeRFs, where you had slow training and rendering. And it means you can generate scenes in a matter of minutes if you have a nice computer. You rely on structure from motion, dense matching and all of that.
You see an armada of tools being built on top of that. You have Polycam, which is free up to a certain number of images. You have Luma AI. Postshot, I mentioned it. You have Scaniverse. So you have different tools that will allow you to export some Gaussian splatting. But my take on it, if it brings any value, is that fundamentally, Gaussian splats are point clouds.
Florent Poux (14:15.904)
So that relates also to my thesis of the smart point cloud, being close to this XYZ component and attaching a bunch of attributes. Before, you had metadata, so classification fields; the idea would be you have each point and you have a concept attached to your point or group of points. But here you have these spherical harmonics, which kind of give a better sense of the lighting properties depending on the viewpoint, that you can attach to your point cloud, and that allows rendering engines that use Gaussian splatting technology, let’s say, to deliver a super immersive kind of experience.
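For readers who want to see this concretely: a trained Gaussian splatting scene is typically exported as a PLY file whose vertices carry exactly those extra attributes. Here is a minimal sketch, assuming the plyfile package and the widespread INRIA-style export layout; field names such as f_dc_0 and opacity can differ between tools.

```python
from plyfile import PlyData   # assumed available: pip install plyfile
import numpy as np

# Read a Gaussian splatting export (hypothetical file; the field layout
# below follows the common INRIA-style convention, not a universal standard)
ply = PlyData.read("scene.ply")
v = ply["vertex"].data

xyz = np.stack([v["x"], v["y"], v["z"]], axis=-1)  # still just a point cloud
sh_dc = np.stack([v["f_dc_0"], v["f_dc_1"], v["f_dc_2"]],
                 axis=-1)                          # base color as spherical harmonics
opacity = 1.0 / (1.0 + np.exp(-np.asarray(v["opacity"])))  # stored as logits

print(xyz.shape, sh_dc.shape, opacity.shape)
```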
And this is fantastic because we can stick to a point cloud processing workflow and just go at the last stage to the deliverable workflow. If I take an example: maybe you want to reconstruct a full neighborhood to have a very safe ground on which to project a construction site, and you want to see the impact a new building will have on the surroundings. Classically, that means you have to capture data and recreate a 3D model out of it so that you can use a simulation engine that will do the shadows and know exactly what impact it will have. You have to have a simulation engine for the sounds and all these kinds of things to know the impact on the local area.
If you wanted to do that on the point cloud, then you went back to research ground. Super interesting, but you had to essentially impose some kind of in-place structure on an unstructured dataset, and that’s hard to do. With Gaussian splatting, that’s one step towards that, because you could visually inject that and have the impact directly projected onto the Gaussian splatting experience. So it cuts down the time to go from 3D model to simulations and so on. These are the kinds of things this technology opens up. And when we say AI, again, it’s important to clarify. For some it will be obvious, for others maybe it’s good to spell out: you have the AI bubble, and within it you have the…
Florent Poux (16:37.28)
…machine learning bubble, and in it you have 3D machine learning, 3D deep learning, and in it you have transformers for 3D data, and in it you have generative AI. And all of that is encapsulated into "AI," so it’s pretty confusing for someone who sees that, because it means anything and nothing at the same time.
Austin Madson
Well, let’s take a quick word from our sponsor, LAStools.
The LIDAR Magazine Podcast is brought to you by rapidlasso. Our LAStools software suite offers the fastest and most memory efficient solution for batch-scripted multi-core lidar processing. Watch as we turn billions of lidar points into useful products at blazing speeds with impossibly low memory requirements. For seamless processing of the largest datasets, we also offer our BLAST extension. Visit rapidlasso.de for details.
And Dr. Poux, we just finished talking a little bit about AI-based workflows, and you were talking about the different nested tool sets within AI and how it’s all highly varied. Can we dive in a little deeper and talk about things like data labeling, segmentation and classification? In particular, what do you see as some of the common bottlenecks within these workflows?
And what advice would you give folks to kind of work through them or remove those bottlenecks using some of these tools that you’ve kind of just started to touch on?
Florent Poux (18:10.936)
Yeah, of course. What I can relate to maybe also, because it has evolved with AI, is this labeling and segmentation workflow. I guess you have everything linked with point cloud processing, but now AI-powered, and also everything with LLMs for 3D workflows as well. So the first stage, the base ground, is that for decision making,
when you go to a client, what you want to do, if you are in the mapping, surveying, geomatics space, is provide a base for some expert to take a decision. And the base is usually a map that is extended visually by semantic priors. If I take a topographic map, you will have contour lines in gray with a little number on them. You will have buildings with hatching in them, you will have trees, and all these kinds of things are represented somehow.
But the origin of all of that can be a point, so a position, or it can be an area or a line, all these kinds of things. So when we think about a segmentation and classification workflow, essentially the first layer is: how can I make sense of a bunch of points, if it’s a point cloud?
Try to group them by features or relationships or something that makes sense, grouping them on a common ground. And then try to advance as much as possible towards the concept that we want to delineate, but be generalizable enough that we can make one segmentation that fits ten applications. This is hard because, for example, if you have a base point cloud of a city, and one application wants to look at all the underground networks and another application just wants to look at the facades, you need to have a layer of generalization that makes sense to both applications. If you go too deep, it will be very hard to reuse the same model to move on to the other application.
Austin Madson
Right.
Florent Poux (20:33.274)
Maybe I can give an example. Let’s take a chair. You are in an office, let’s say, and what you want to know is how many chairs you have, because you want to replace some of them and you want to make sure that they are all in good state. So the object that you want to look for is a chair. So the class that you want to have in your process, in your point cloud processing, is to detect all the chairs. So you create a model that takes in a point cloud and spits out chairs and the rest, for example.
But what if afterwards you also want to make sure that you can dive deeper: I want to make sure that the chairs we count are categorized into armchairs, office chairs, and so on. So here it’s an extra layer where you will want to have different classes: the backrest, the seat, the legs, the armrest. And if from the get-go you were at this level,
it’s super specific, and maybe too specific. Whereas if you are on the first layer, chair, you can build a specification model that will help you specify the specific elements by going next to an LLM, for example, that will parse what the definition of a chair is. And it will know that usually a chair is like this; if it’s an office chair, it should be close to a table. And these are the kinds of things that you will want programmatically, because right now there are no tools, to my knowledge, that do that 100%. So it means that you need to move into AI-powered automated segmentation, with models like PointNet or KPConv that will segment elements. Then you will have object segmentation at scale, and here it’s best to move on to proven algorithms like RANSAC and clustering, and you push in a bit of AI.
You also want to go on to change detection: how you compare different scans if you want a repeatable process. So all these kinds of things are very important to grasp before moving on to the LLM, where essentially, once you have that support, you want to leverage an LLM to have some natural language queries. Like, show me all the chairs that are close to a desk, or show me all the walls with a thickness above some threshold, and so on and so on.
Florent Poux (22:56.938)
And that’s where we are going. But to go there, we really need to have the strong base: identification of objects that makes sense at a low level, and conceptualization on top of it. Does that make sense?
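Before the conversation turns to bottlenecks, here is a rough sketch of that first low-level layer in Python with Open3D: proven geometry (RANSAC) peels off a dominant plane such as the floor, then clustering groups what remains into candidate objects that a classifier, or later an LLM layer, can reason about. The file name and thresholds are placeholders to tune per dataset.

```python
import open3d as o3d   # assumed available: pip install open3d
import numpy as np

pcd = o3d.io.read_point_cloud("office.ply")   # hypothetical indoor scan

# Step 1: RANSAC extracts the dominant plane (e.g. the floor)
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3,
                                         num_iterations=1000)
objects = pcd.select_by_index(inliers, invert=True)

# Step 2: DBSCAN groups the remaining points into candidate objects
# (chairs, desks, ...); label -1 marks noise
labels = np.array(objects.cluster_dbscan(eps=0.05, min_points=30))
print(f"{labels.max() + 1} candidate object clusters")
```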
Austin Madson
Yeah. And so can you reiterate what some of the bottlenecks are, either before a process like that or during it?
Florent Poux
Yes, so the major bottleneck is the competence of the people behind creating the system. This is something that I see very often with the teams that I supervise in Fortune 500 French tech companies: almost always, especially the young ones, the most senior ones will know better, but the young ones will often go for the fanciest stuff. It’s often the wrong direction, with some exceptions.
Because we don’t have the backlog level that we had before AI. Right now you have new research all the time. I can cite MapAnything, for example; this is new stuff from Meta they pushed out that does zero-shot 3D reconstruction. So the guys on my teams were like, that’s awesome, let’s put that in production. And you always have to put on the brakes and say, okay, consider the full orchestration of your pipeline from end to end, what the end deliverable is, what your clients want, and then you move from there. But try to be very cautious about the need for computing power and the understanding of what’s happening under the hood. My recommendation is always: use machine learning or deep learning only when all the other methods have failed. So in a simple case, if you want to detect…
Florent Poux (24:45.838)
…let’s say, whether a building has good flatness and there are no walls that are deviating, or there is no structural damage. What I will hear is: oh, let’s use, I don’t know, transformers, Point Transformer V3, to classify everything and so on. And then, once we know the floor, we check the levelness and so on. Whereas here, if you have someone that is competent, they will know exactly what the
bottleneck with that is: you move out of a system where you create the rules and you have your computer act on the rules. You need to provide your computer with tons of data and hope that it’s prepared well enough that your Point Transformer model performs well. And then you need to push that to leverage GPUs, because it’s very, very time consuming. And you get some kind of specific results that will be a bit unstable if you move out of scope, if it’s not generalizable enough. Whereas someone that knew the landscape of algorithms and methods would go with: let’s use RANSAC, because it’s more or less sound and it’s robust to outliers.
And we will downsample our point cloud with a voxel grid, because here we just want to make sure that we go very fast. We do that multiscale to improve robustness, and we check the normal of the plane against the up direction, and that’s it. These are the kinds of things; I think that is the strongest bottleneck. Going to deep learning approaches is very good once you already have a lot of labeled data. Before that, I always advise moving away from it, because it’s very time consuming.
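That recipe translates almost line for line into Open3D. A minimal sketch, leaving out the multiscale repetition for brevity; the file name and every threshold are placeholders.

```python
import open3d as o3d   # assumed available: pip install open3d
import numpy as np

pcd = o3d.io.read_point_cloud("building_scan.ply")   # hypothetical scan

# Voxel-grid downsampling: trade point density for speed
down = pcd.voxel_down_sample(voxel_size=0.05)

# RANSAC plane fit: robust to outliers, no training data required
(a, b, c, d), inliers = down.segment_plane(distance_threshold=0.01,
                                           ransac_n=3,
                                           num_iterations=2000)

# Check the fitted plane's normal against the up direction (+Z assumed)
normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
tilt = np.degrees(np.arccos(abs(normal @ np.array([0.0, 0.0, 1.0]))))
print(f"floor tilt relative to horizontal: {tilt:.2f} degrees")
```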
Austin Madson
I see. Yeah, there’s a famous algorithm here, it’s KISS, keep it simple, stupid.
Florent Poux (26:39.704)
Yeah, that’s perfect. I like that.
Austin Madson
Thanks for talking a little bit about some of those workflows and some of those common bottlenecks. Let’s move on a little bit: I want to get your thoughts and impressions on vibe coding in the 3D data science space. For our listeners who don’t know, vibe coding is a means of software development that uses AI tools to generate code from natural language prompts. Can you talk a little bit about that, Dr. Poux? What are your impressions?
Florent Poux
Yeah, this is a fantastic subject. I think it originates from Andrej Karpathy, from OpenAI, or he was head of Tesla’s AI division at the time, I think. Was it this year? I think it was in February, right?
Austin Madson
I think it was, yeah, pretty recent. It’s insane.
Florent Poux
Yeah, that’s crazy. So essentially, as you mentioned, you just describe what you want in natural language, you accept the AI-generated code without reading it, and you iterate based on the result. So the extreme case, and I went through it to test it, is you forbid yourself to write a single line of code. If you want to change a variable name, you ask it to change it: I don’t like point cloud, just write PCD…
Florent Poux (27:59.798)
…instead of going into your code and writing PCD instead of point cloud yourself. So that’s an extreme case, but that’s the idea: can you just delegate all your programming knowledge and let it do the heavy lifting? The controversial part, I think, is that you don’t review the code. You embrace what’s happening.
You’re like, errors? You embrace the errors.
So my take on it is that it’s amazing for prototyping and learning new frameworks; that’s actually good. For production code, I will not use it there; you have to review every line. So my golden rule is: never commit code you couldn’t explain to someone else. Aside from that, yeah, you have a very good vibe coding tool landscape that evolves very quickly.
You have Cursor, which is an AI-powered IDE, I think. It’s very good for developers who want control. It integrates with your code base, so it has context. So this is good. One that made a lot of headlines is Lovable. This is really for non-developers. It’s fully online, let’s say, and it’s very good at creating the software, also integrating with Supabase, the database behind it, which would be very good to create SaaS products that have a React front-end and some kinds of functionalities like this.
What else did I test? Bolt. It’s like Lovable, so it’s great for rotating around daily limits as well. But yeah, I’ve seen a lot of articles on the vibe coding hangover, where you have senior engineers that…
Florent Poux (29:47.798)
…report development hell as they work with AI-generated code all the time. You have no comments, you have inconsistent patterns, you have no architectural vision. I did a video, I think it was last week, exactly on that, on YouTube, trying to give a framework to people that want to innovate in 3D, and how to best use these kinds of tools. It’s very good, but it has a lot of limits.
In my opinion, you cannot use that for real impact if you don’t know system one coding or system two. So system one being able to write the rules for the computer to act on, what we did classically with algorithms, and system two being you create the data and you train models and you hope it works as you want. And now there’s what’s called system three, this way where you just write a bunch of prompts and you hope it works.
So that’s good if you know both of the previous ones, because you need to review everything. Also, this is something I say to the professionals in my academy: the things we are working with in 3D, it’s cutting-edge spatial AI. So you cannot just leverage these LLMs that are trained on accessible data, because it’s so cutting edge you don’t have enough training data. So if you ask it, for example, to code PointNet from scratch, it will fail.
If you ask it to create a transformer, it will fail too… So these are the areas where I believe we need domain experts, these 3D data science experts. You have a major lever if you can position yourself at this intersection of domain expertise, data expertise, and algorithm expertise. Having that, you then leverage vibe coding to speed up. But honestly, if I get back one hour in a day, I’m the happiest man using this.
Austin Madson
Well, yeah, thanks for sharing your thoughts on that, and in particular your golden rule on vibe coding. On your LinkedIn, Dr. Poux, you state you’ve anticipated new tech trends with surprising accuracy in the past. I’m curious, what are your thoughts on new developments in this space, and what are you hopeful and excited for?
Florent Poux (32:08.27)
There is so much stuff coming at us that we need to filter, to funnel down, I think, the amount of information we consume and try to create knowledge. So this is really, really important. For example, what I was talking about before: when I started everything, getting my hands on my first point cloud back in 2008,
I was anticipating point clouds to be the base layer of a lot of things. And I continued pushing that. Even when meshes came in, I knew that whenever you need accuracy, you cannot rely on interpolated information. This is applicable to any engineering profession. Interpretation should be the domain of the experts, whether it’s a human or an AI, but it should come at the latest stage possible.
We’re starting to hear about the AI bubble that is going to burst and so on. I think the issue here is the wide gap between what everyone sees, with AI taking our jobs and so on, and the reality, which is that less than 5% of companies actually have a return on investment on generative AI. So for now, we are really at the peak of inflated expectations.
I think it’s called the Gartner hype cycle, if you see the curve: then we’ll go down into the trough of disillusionment, and then we’ll move back up onto something that is exploitable. Why? Because essentially it boils down to data: the quality of your data and what you want to use your data for. So I believe what is exciting is to try to capitalize on your systems and how you handle data.
Because if you are very good at orchestrating the data preparation phase, this is the driver for any deep learning system. And this is even more important for any LLM or vision language model. So that’s the thing that I’m excited about. Whenever you try to interface various perspectives from various people that have very different…
Florent Poux (34:30.636)
…backgrounds, that’s when you arrive at fantastic novelties, a bit outside the scope of what we are doing. And you see that happening more and more thanks to vibe coding. I think it gives people that are very creative the ability to showcase ideas in a better manner than just a PowerPoint, by actually having some kind of clunky prototype where you want to close your eyes on how it works technically, but you see more or less the idea happening. So that is something that I really, really look forward to. Everything linked with AR, VR, I’m still uncertain. But I believe there is a big place for it if we can blend the way we interact; right now it’s mostly keyboard and mouse.
But I think if we can embed this edge computing with LLMs and ask things from AR and mixed-reality devices, it will bring new possibilities. The adoption is pretty low; we saw that with a lot of launches from big companies. So that I’m uncertain about, but very, very interested in. And I think also digital twinning. This is something underexploited, but it could bring massive innovation to our lives if we can get digital copies of everything: the relationships, the social network, everything. If we can plug that into digital simulation and inject very, let’s say, customized AI that will represent everyone in one way or another.
This is really similar to, I guess, the metaverse, except here we are really focused on solving engineering problems, solving things where we need accuracy. And here we really need to push. But there is a major skills gap, because mostly construction firms will buy the technology, but they don’t have the people who understand both construction and 3D data science. The training market is really small as well. So yeah, I think…
Florent Poux (36:48.234)
…there is reason to push down there.
Austin Madson
Great, yeah, thanks for sharing your thoughts on some of these new developments that you’re hopeful for. Let’s switch gears: what kind of advice would you give someone, Dr. Poux, who is just starting to branch out from GUI-based commercial software and wants to take more and more control of their data processing pipeline? Just a couple of bullet points.
Florent Poux
So I guess for 2025 to 2030, let’s say. If you are just starting, just learn Python, Open3D and one industry domain. You pick a vertical, could be construction, automotive, robotics, you name it, something that resonates with you. And you push down there with projects. Projects that you can get online, where you help people, where you do pro bono work, or where it’s through your company.
That’s definitely the best way to go about it. Now, if you are mid-career, I would add AI and machine learning to your 3D skills. The combination is rare and super valuable. That’s where most people get stuck, because it’s a steep learning curve, but you can do so much with it once you understand what you’re actually building. If you’re senior already, I would focus on architecture and system design.
You then leave the AI, or your team, to handle the bulk of your implementation. But just see AI as an accelerator of everything you are doing, not a replacement. Don’t delegate too early; do it yourself as much as possible in all these cases. And really, as a strategy today, building in public is something that works well. You can also focus on outcomes…
Florent Poux (38:45.304)
…let’s say, avoid saying that you learned PyTorch; maybe go into something specific: I made this project and I reduced segmentation time by 80%. Then specialize, then expand: you master one vertical, and then you go horizontal once you have something that works. And of course, fighting AI should not be done. This is something I see a lot with my colleagues at the university.
I think we need to embrace AI tools, right? But it’s a tool. It’s a tool. So if you resist it, it’s highly likely that you will be replaced by professionals who can use it. So I think it’s good to just know what’s happening, what can be useful and what not. Again, not looking at what is on the hype. Once you identify something that looks interesting, just take 30 minutes to test it out, to go deep.
But essentially, what I will not recommend doing is collecting tools, like buying thousands of pieces of software, buying a lot of specific stuff. I think it’s better to identify one course line. And I’m not preaching for my own parish, but I think it’s good if you can stick with someone that has done it, that you know has an authoritative voice, whom you can follow, and you can close your eyes and learn step by step. This will help you move very quickly, and moving quickly today is important.
Austin Madson
And in contrast, what kind of bullet-pointed advice would you give to someone who is already developing their own tools using custom scripts and workflows? I know there’s some overlap with some of the points you just made, but in case there are some others.
Florent Poux (40:34.636)
Be careful with perfectionism. This is something I see a lot: done is better than perfect. And v1 of your project doesn’t need to be beautiful. It doesn’t need to be crap, but beautiful is something that can be left for a bit later. Don’t ignore the business side. Technical skills will get you hired, for sure, or will help you create something, but understanding return on investment is the way to go.
How can you bring value to people? And try to also anticipate technology a bit. Next year, we’ll see Gaussian splatting become really big in architectural visualization. Later on, we’ll see a lot of construction projects use digital twins with AI. Text-to-3D will reach good-enough quality for production game assets, I think, in the next few years as well; use your domain expertise to see how you can push it. Real-time point cloud processing also, I think, is a big thing. Spatial computing, all of this: trying to match your domain expertise with what you can do and where things are going. This is very, very interesting.
Austin Madson
That’s really, really great information, Dr. Poux. That’s all we have for this episode today. I want to thank you for chatting with us today. This has been a really great conversation.
Florent Poux
Really, really a pleasure. If I can say one last word, Austin? Today we are at an inflection point. AI makes 3D accessible, but that doesn’t mean it’s easy. So the professionals who combine domain expertise with AI, they will own the next decade, for sure. So I think it’s good to get the skills and expertise in this branch.
Austin Madson (42:29.58)
Yeah, thanks for the closing words. And thanks to everyone else for tuning in. We hope you were able to learn something new today. If you haven’t already, make sure to subscribe to receive episodes automatically through our website, Spotify or Apple Podcasts, or pick your poison. Stay tuned for other exciting podcast episodes in the coming weeks, and take care out there.
Announcer: Thanks for tuning in. Be sure to visit lidarmag.com to arrange automated notification of new podcast episodes, subscribe to newsletters or the print publication, and more. If you have a suggestion for a future episode, drop us a line here. Thanks again for listening.
This edition of the LIDAR magazine podcast is brought to you by rapidlasso. Our flagship product, the LAStools software suite is a collection of highly efficient, multi-core command line tools to classify, tile, convert, filter, restore, triangulate, contour, clip and polygonize lidar data. Visit rapidlasso.de for details.
{Music}
THE END