#26 – Raul Bravo

In this episode, Austin catches up with Raul Bravo, a prolific inventor holding more than 70 international patent families. As president and founder of Outsight, Raul explains how his firm’s technologies transform streams of lidar data into real-time 3D spatial intelligence. The conversation delves into the analytical capabilities of dynamic 3D databases and explores the technical challenges of orchestrating thousands of laser scanners into a cohesive sensing system.

Episode Transcript

#26 – Raul Bravo

December 22nd, 2025

{Music}

Announcer (00:01.998)
Announcer: Welcome to the LIDAR Magazine Podcast, bringing measurement, positioning and imaging technologies to light. This event was made possible thanks to the generous support of rapidlasso, producer of the LAStools software suite.

Austin Madson
Hello and welcome to the LIDAR Magazine podcast series. My name is Austin Madson and I’m an Associate Editor at LIDAR Magazine. Thanks for tuning in as we continue our journey exploring the many different applications of lidar remote sensing. Today we’re happy to have the opportunity to chat with Raul Bravo, the President and Founder of Outsight. Raul Bravo is a technology entrepreneur and the founder and president of Outsight, a spatial AI company powered by 3D lidar.

Outsight enables major airports, cities, industrial sites, and tourism venues to continuously understand and predict the movement of people and vehicles in real time, anonymously and at a massive scale. Raul has a background in telecom engineering from UPC in Barcelona and an MBA from the College of Engineers in Paris. Raul has spent a couple of decades creating and scaling deep tech companies, including leading a mobile robotics firm from inception to IPO.

Raul is an inventor with more than 70 international patent families and has been recognized by the MIT Technology Review as one of the top 10 innovators under 35. So let’s go ahead and dive right in here. Thanks again for joining us today, Raul.

Raul Bravo (01:37)
Hi, nice to meet you.

Austin Madson (01:39)
Great, yeah. So I want to start a bit by having you talk about your background and how you came to be involved in spatial AI, right? So it looks like you got your career start working in telecom engineering and then at some point decided to get an MBA. So how does your trajectory lead us to where you are right now?

Raul Bravo (01:59)
Yeah, well, MIT under 35, unfortunately, that was many years ago, so I'm not so young now. I would say the two main topics that characterize my trajectory are entrepreneurship and technology. I've been creating tech companies since I was 14 years old. If we focus a little bit more on the subject today, which is lidar, I would say for the last 20 years I've been working with lidar-related tech. And if you pay attention, some of your audience may say, yeah, but 20 years ago there was no lidar. Of course there was, of course there was.

It was not as well known then; the self-driving car made it known. People were not talking about it so much, but it was already the first, I would say, approach to lidar. And that was something we were using in the mobile robotics company I created, as I said, close to 20 years ago, in which we were using lidar to make these mobile robots fully free to move around without wires, without reflectors, et cetera.

And that was thanks to lidar. And that's how we started more than 20 years ago. After this first experience, we grew and increased our experience with this technology as the 3D lidar industry was being created. So we were there when the first companies, like Velodyne, like Quanergy, were created.

Many years later came Ouster, Innoviz, et cetera. But we were really there at the very beginning as the software or algorithm experts in this relatively new kind of data.

Austin Madson (04:13)
Yeah, that's great. So that kind of leads us into talking a little bit about Outsight. Can you chat about what it is? What makes it unique? Why do you think, you know, security, crowd management, logistics, et cetera, is best served by lidar?

Raul Bravo (04:30)
Yeah, good question, because as I said, our experience was significant in mobile robotics, also in automotive, et cetera. But Outsight is focused on this from a different angle, a different perspective, which is not equipping the vehicles themselves or the robots themselves, but the infrastructure. And as you mentioned, what does that mean? That means equipping whole airports, train stations, venues or whatever with lidar as the enabling technology, in order to capture every single movement of every single person or vehicle in real time and, as I said, at scale.

At scale means two things. It means hundreds of millions of people per year; to be precise, 280 million people per year. And simultaneously, it means that at any moment in time you may easily have tens of thousands of people in an airport terminal. So that's a lot of people getting close to each other, moving, interacting, et cetera. That's one aspect of scale: the number of people and the density.

And the other aspect of scale is size in terms of surface. A typical airport may easily have hundreds of thousands of square meters, which means many lidars to be deployed, hundreds or even thousands of them. And if you have ever worked with lidar data, imagine working with hundreds or thousands of lidars simultaneously delivering data in real time.

You need to calibrate them, you need to synchronize them. If you don't do that correctly, you are not going to be able to track every single person very precisely. And that's exactly what we do. That's how we filed some 70 patents. And that's how we are leading this space, this particular application of lidar.
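The calibration step Raul mentions amounts to expressing every sensor's points in one shared site frame via a per-sensor extrinsic (a rigid transform). Here is a minimal sketch under simplified assumptions (yaw-only rotation, pre-synchronized frames); the function names are illustrative, not Outsight's actual pipeline:

```python
import math

def make_extrinsic(yaw_rad, tx, ty, tz):
    """Per-sensor extrinsic: a yaw rotation about z plus a translation."""
    return (math.cos(yaw_rad), math.sin(yaw_rad), tx, ty, tz)

def to_site_frame(point, extrinsic):
    """Map one (x, y, z) point from a sensor's frame into the shared site frame."""
    c, s, tx, ty, tz = extrinsic
    x, y, z = point
    return (c * x - s * y + tx, s * x + c * y + ty, z + tz)

def fuse_clouds(clouds, extrinsics):
    """Concatenate every sensor's cloud after mapping it into the site frame."""
    fused = []
    for cloud, ext in zip(clouds, extrinsics):
        fused.extend(to_site_frame(p, ext) for p in cloud)
    return fused
```

If an extrinsic is wrong by even a few centimeters, the same person appears at two slightly different positions in overlapping sensors, which is exactly the tracking failure Raul describes.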

Austin Madson (06:45)
Yeah, it's a really interesting place to be, riding the growth of 3D and lidar and implementing these 3D-based algorithms. I'm curious, what are some other specific use cases? You touched on a few things like airports and train stations and other kinds of transit hubs. What are some other specific use cases for this technology?

Raul Bravo (07:12)
Well, in fact, to be precise, when we equip, for example, an airport, within that airport you have dozens of use cases. Let's just step back one second on what we are doing, to be clear: we are digitizing movement. That's what we are doing, in real time. That's why I was saying that we are tracking and understanding the position of every single person, but continuously, meaning that we know not only how many people are there, but that every single person is a unique person, a unique individual.

We are following him or her, or the object, over the whole trajectory, the full journey. And that's what enables so many use cases. Because if you know where every single person is at any moment, across the whole airport or train station or factory, then lots of use cases open up. For example, you may want to measure the wait time at a certain place. You want to know if a person is waiting somewhere. How long is this person dwelling in this space? Are they entering a retail store or not? Are they buying or not? We know whether they are buying, because we know where they are and what they are doing.

Are they interacting with other people? Are they using the resources of the premises, of the infrastructure, as they should, and for how long, how many, et cetera, et cetera. So it is opening a whole new world of spatial data. And because it's unprecedented from the perspective of scale and granularity, we are really just touching the tip of the iceberg; it is just the beginning. But I could mention more precise use cases if you want.

Austin Madson (09:21)
Yeah, it's really interesting, all the analytical questions you can ask of the data once you have all this large-scale tracking and these unique IDs in place, right? Even thinking about someone buying gum in an airport shop: if they spend more than 15 seconds where the register is, then you can assume that they bought something. It's really wild that this data exists and that you can ask analytical questions like that.
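The gum-buying example is essentially a dwell-time query on a track. A toy sketch, assuming tracks arrive as timestamped 2D positions and the register zone is a simple rectangle (all names and the 15-second threshold are illustrative, taken from Austin's example rather than any real product):

```python
def dwell_time(track, zone):
    """Total seconds a track spends inside an axis-aligned zone.

    track: list of (t, x, y) samples sorted by time.
    zone:  (xmin, ymin, xmax, ymax) in site-frame meters.
    """
    xmin, ymin, xmax, ymax = zone
    inside = lambda x, y: xmin <= x <= xmax and ymin <= y <= ymax
    total = 0.0
    # Count an interval only when both endpoints are inside the zone.
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        if inside(x0, y0) and inside(x1, y1):
            total += t1 - t0
    return total

def likely_purchase(track, register_zone, threshold_s=15.0):
    """Flag a probable purchase when dwell near the register exceeds a threshold."""
    return dwell_time(track, register_zone) > threshold_s
```

The same primitive, applied to different zones, yields the wait-time, retail-entry, and resource-usage metrics Raul lists.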

Raul Bravo (09:49)
Another similar example: if someone is checking in at, I would say, a first-class counter, we give them the attribute of first-class traveler. Which means that when they go to the duty free, you know that the profile of this person is different from others, or you may even understand that a certain percentage of first-class travelers have a certain profile. So your display or your ad display, for example, can adapt. When you start figuring out that this granular, at-scale data is now available, then the use cases are almost endless.

Austin Madson (10:34)
Right, yeah, and that brings up another interesting point I'd like to touch on: some of the differences between these 3D lidar-based surveillance and spatial intelligence methods and traditional camera-based methods, and what are some of the pros and cons of using 3D lidar.

Raul Bravo (10:54)
Yeah, well, we typically don't compare cameras with lidar so much, in the sense that we just don't think one is better than the other. We think they are good tools for different problems. You can try to eat a soup with a fork. It could kind of work, but it's going to be very hard. If you want to eat a soup, you'd better use a spoon. But it doesn't make a lot of sense to ask whether a spoon is better than a fork.

It's just different tools for different means, different purposes. So in the case of cameras and lidar: lidar is a 3D sensor. If the problem you want to solve is related to where in the world the objects are, how far away they are, how quickly they are moving, et cetera, then you have a 3D problem and you need a 3D sensor. If your problem is understanding whether this is a person, a woman or a man, or recognizing a face, et cetera, you cannot use lidar for that.

It's not the right tool; then you need cameras. But using a 2D sensor to understand a 3D world is just not using the right tool. Years ago, the main reasons for using the wrong tool for the problem were the cost of lidar, the availability, the reliability. This is not the case anymore. And lidar is not even at a disadvantage from the cost perspective, for example. In infrastructure projects, lidar is cheaper than cameras, and not so many people know that. It is cheaper.

And why is it cheaper? It is not because one lidar is cheaper than one camera, even if lidar has decreased in price fantastically. It is not only because of that. It's because when you look at the problem to be solved, which is understanding what's happening in the real world across these infrastructures, what is important is how many sensors you need to cover a certain surface.

And our experience, real-world experience with the biggest airports in the world on any continent, is that you need between three and ten times more cameras per square meter compared to lidar. And you know how costly it is to install any kind of hardware in an airport or a rail station or wherever: wiring it, networking, processing, maintenance, et cetera.

The cost of deploying more hardware than needed is much, much higher than the delta between the cost of one sensor compared to the other. So today, deploying lidar in an infrastructure context is not an issue. What is the problem you want to solve? If it is a 3D problem, use a 3D sensor. If it's a 2D image problem, classification, understanding colors, then don't use lidar; use a camera.

Austin Madson (14:37)
Yeah, thanks for clarifying. The last couple of days I have been thinking a little bit about your company and how you set these systems up, and you kind of touched on this. I'm curious, if only for my own curiosity, what is the approach to setting up a system like this in a large space? You had said that in some locations you're putting up multiple thousands of sensors.

So for example, do you start with a BIM? And do you have an internal algorithm to determine where to place the lidars, based on the laser scanner's specifications like range and field of view? And then do you apply some sort of extrinsics to tie it into a local coordinate system? I'm just really curious about the whole setup process.

Raul Bravo (15:19)
Yeah, you're right. It's a good question because it's a critical step in any project at scale. I'm not talking about small-scale demos where you can do things more or less approximately. When you are deploying enterprise-grade solutions, you need to be precise. And something that is specific to lidar, which is different from cameras, is that lidar has many different designs; there are many different ways to build a lidar.

And as you mentioned some criteria like field of view, range, et cetera: you have a diversity of sensors in lidar that you don't have in cameras. A camera from one manufacturer to another is very similar; it's the same principles, I would say. In lidar you have many options. You have 360 degrees, or you have a narrow field of view; you have different wavelengths, different ranges, et cetera. So the question of which lidar should I use, where, and how many is not simple. Because, for example, you can very easily get to situations that are not, I would say, the ones you would spontaneously expect.

For example, you may come to the conclusion that it is better to use several short-range lidars instead of one big long-range lidar, because sometimes having a multi-angle view of the same situation is more valuable than seeing far away. So you have so many options that you need two things. First, you need to be agnostic to the hardware, because there are some manufacturers that are extremely good at some kinds of lidars. But when you are very good at something, in general you're not the best at other things, and you have very complementary offerings from different manufacturers. So first, you need to be agnostic.

If you are not agnostic, you are leaving 45 percent or more in overcost on the table, because you are not using the right sensors in the right places. So that's the first thing. The second thing is you need tools, as you already mentioned, to simulate the environment and the placement of these hardware sensors before deploying, because you need to evaluate that beforehand. And we were the first in the market to deploy a 3D multi-vendor simulator. The industry has now been making this happen more and more.

We're happy that the industry is catching up on that. But we were clearly the first to say, you cannot do that by hand. You cannot do that approximately. There are too many variables. So you need a professional tool to do that. And that's what we developed, with 120-something different lidars.

You see the diversity. We are compatible with all of them, and there is no standard, as you may know, no standard on the data format, et cetera. So it's something we don't want our customers to deal with. We are dealing with this. It is transparent for our customers, and the result is the optimum coverage, the optimum cost, and the optimum performance at the end of the day for the customer.
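Raul's short-range-versus-long-range point can be illustrated with a toy coverage estimate. This is nothing like a real multi-vendor placement simulator, which must model occlusion, mounting height, and scan patterns; every number and function name here is made up for illustration:

```python
import math

def sensors_needed(area_m2, useful_range_m, overlap=1.3):
    """Rough count of 360-degree sensors to cover a floor area.

    Each sensor usefully covers a disc of radius useful_range_m;
    the overlap factor pads for occlusions and multi-angle views.
    """
    per_sensor_m2 = math.pi * useful_range_m ** 2
    return math.ceil(overlap * area_m2 / per_sensor_m2)

def cheaper_option(area_m2, options):
    """Pick the (name, useful_range_m, unit_cost) option with lowest total cost."""
    return min(options, key=lambda o: sensors_needed(area_m2, o[1]) * o[2])
```

With hypothetical prices, many cheap short-range units can beat a few expensive long-range ones over 100,000 m², which is the kind of non-obvious outcome Raul says a simulator surfaces.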

Austin Madson (19:13)
Yeah, it's really interesting, the whole setup process. It's quite the undertaking, especially for these larger major airport hubs where you have thousands of scanners in place. Another question, in line with the previous one: do you guys employ edge computing, or are you sending all of this data to a central processor on site? I mean, as you alluded to, Raul, if you're using disparate scanners, they're all coming in at different rates with different binary data types, and you have to parse them differently. It seems like it's just such a mess. So how do you deal with all of that?

Raul Bravo (19:48)
Yeah, just take into account that a standard lidar's data throughput would be equivalent to, I would say, more or less a hundred people watching a Netflix movie simultaneously. So, one lidar. That means tremendous amounts of data. It is not realistic to send all this data over the internet to the cloud; it would not be very smart. Why? Because remember that we are interested in how things are moving, but when you look at a certain scene, most of the points of the lidar are hitting static elements.

So it doesn't make any sense to send all these points to the cloud just to remove them. So the architecture, which is becoming a bit of a standard in the industry, is to process on the edge most of the, I would say, heavy lifting and low-level algorithms, and then send to the cloud only metadata, which is what constitutes the analytics the customer is going to be interested in.

Because the customer is not interested in the position of every single person; that is not actionable for your business. What is interesting is the analytics out of that: how many people are waiting, for how long, what are the KPIs, the density, the different KPIs. But having it in the cloud is what allows this to be easy to share, easy to collaborate on as a team, to access the dashboards, the 3D view, et cetera. So that's a hybrid architecture between the edge and the cloud.

Austin Madson (21:48)
Yes, it’s really clever to just filter out the static points and not even deal with them. It’s a smart way to go about it. I want to leave some space here for a quick word from our sponsor, LAStools.

The LIDAR Magazine Podcast is brought to you by rapidlasso. Our LAStools software suite offers the fastest and most memory efficient solution for batch-scripted multi-core lidar processing. Watch as we turn billions of lidar points into useful products at blazing speeds with impossibly low memory requirements. For seamless processing of the largest datasets, we also offer our BLAST extension. Visit rapidlasso.de for details.

So, Raul, we've been having a great conversation so far about your company's systems, setting them up, and all the complexities that come with that. I want to talk a bit about margins of error. You had mentioned earlier about tracking people and objects. When I was coming up with some questions to ask you with respect to accuracies and things, I thought about edge cases like a dad carrying a baby in a baby carrier or a mom pushing a stroller. Do you monitor an object like a stroller and give it a different type of ID and attribute? I just want to open the discussion on this.

Raul Bravo (23:12)
Yeah, it's a very good question, because you might be surprised how many weird things happen in an airport. I mean, we are not in innovation exploration anymore. We are really in production in many different airports. And that means, as you mentioned, dealing with edge cases that we were just unable to anticipate.

And that's one of the things, by the way, that makes a difference between companies like Outsight that have deployed in real production at scale and conceptual or demo-level tracking, et cetera. Because tracking people moving with a lidar, that's not complicated. What is complicated is all these cases, as you mentioned: people close to each other, who is who, you know. Because it's not only knowing that there are two people, but also, when they separate, you really want to know who was whom, because there is a lot of value in having the continuous trajectory.

So you don't really want to lose the track. And as you mentioned, imagine an airport on holidays, in the peak season. People have luggage, they have trolleys. There are more and more wheelchairs and other kinds of, I would say, vehicles; it's not only pedestrians moving by walking. So there are many cases like that. People sleeping on the ground or moving, et cetera. There are many, many weird things.

And you don't only have the ambition of tracking the people, but also of understanding their behavior, because this is where the value is. You need to be even more accurate. For example, when we say that someone is waiting, that means a lot of things. That means that we have understood that this person is in a queue. And that's not so easy, because what defines a queue?

How do you make the difference between someone who is actually waiting and someone who is just close by but is not waiting? So you need to go much further, I would say, than just the position. You need to be able to understand behavior and the difference between different behaviors and interactions. So yes, you're right: the devil is in these details. And that makes the whole difference, I wouldn't even say between a solution that works relatively well and a solution that works really well; if you don't solve these edge cases, you don't have a solution.

Customers don't want that. Because even if we might consider 85 or 90 percent performance a good one, it is not considered good performance by an end user. For them, that means they are missing thousands of passengers or behaviors or pieces of information.
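Raul's question, "what defines a queue?", can be made concrete with a toy heuristic: a person is probably waiting if their recent average speed is low and they are near the queue's head position. A real system needs far more (neighbors, queue geometry, interactions); all thresholds and names here are invented for illustration:

```python
import math

def avg_speed(track):
    """Mean speed in m/s over a track of (t, x, y) samples sorted by time."""
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]))
    return dist / (track[-1][0] - track[0][0])

def is_waiting(track, queue_anchor, max_speed=0.3, max_dist=3.0):
    """Toy queue-membership test: slow movement near the queue's head."""
    _, x, y = track[-1]
    ax, ay = queue_anchor
    return (avg_speed(track) < max_speed
            and math.hypot(x - ax, y - ay) <= max_dist)
```

The hard edge cases Raul describes are exactly where a heuristic like this fails, e.g. someone standing still nearby while reading a phone, which is why behavior models go well beyond position and speed.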

Austin Madson (26:28)
Right, yeah. And your example of someone taking a nap on the floor is really interesting, right? Because at that point, those 3D returns become static. And if you're not paying attention to the fact that the object was moving before, then you would no longer monitor that space, because those points are now static.

Raul Bravo (26:45)
Yeah, there are many, many cases like that. That’s why if you need to deploy a professional solution that is important for your business, you need to make sure that you’re working with the right company making professional solutions.

Austin Madson (27:03)
Yeah, I wonder if any of the reflectivity information can be useful at all.

Raul Bravo (27:10)
I mean, there is information in that. A reflector is not just a higher intensity; in many cases, it can be really easily discriminated from everything else. So there are many situations in which you can take advantage of this fact. So yeah, indeed, it is not only range that is important; it's also the reflectivity value.

Austin Madson (27:35)
Well, let's switch gears and talk about an interesting thing that happened, it must have been a month or two ago, at the Louvre. There was a famous heist that made its way around the internet, and there's been some talk about surveillance methods and how they underperformed. I'm curious to hear your thoughts on how your 3D spatial AI analytics could have changed that, and what are your general thoughts on high-level security for this kind of technology?

Raul Bravo (28:06)
You may need to be patient a little bit, but in some weeks we are going to announce that a very well-known, highly touristic place in Paris has deployed a lidar solution with us, for different reasons. So yes, your question is very relevant. There are many reasons why you may leverage lidar, or 3D data in general, instead of cameras or in addition to cameras.

Because you may still need cameras to understand who these persons are, to classify; exactly what I said before, you need cameras to do what they are very good at. But understanding how people are moving around the place or in the place, et cetera, this is not the right use of cameras. In these examples, lidar brings a lot of advantages. The first, as I said for the case of airports and crowd monitoring, is exactly the same thing.

It's that if you really understand where people are and how they are interacting with the physical world, really, not approximately, not with projections as you may do with cameras, but with native 3D data, that changes everything. And there is also another factor which is important. If we take the example of the Louvre, when you are using cameras, you are highly limited, at least in Europe, in many countries, in what you can record on the street, because of privacy for everyone in the street.

So you are kind of limiting yourself to some geographical or geometric constraints because of privacy. That's not the case with lidar. You're not capturing images, so you don't know who these persons are, but you know their position, their behavior, their trajectory, their interactions, et cetera.

Is this normal or abnormal? Then you have the real data to determine that. Which is not possible if you are just looking at the camera at the last minute. Because, you know, if you take this example of the Louvre, the whole process of stealing the object took just some minutes. So you cannot detect that at the last minute, because then it is too late.

So that's one aspect in which lidar can be very helpful, because you can really understand much better and sooner, I would say. And it is not only sooner in the sense of looking farther away from the area you want to protect; the other way around is also important.

Once there is an intruder, you also want to know exactly what this person has been doing, and where, in the whole place. There is an example from one of our airport customers. In airports you have many areas that you can only access if you have the right authorization, the right access. So suppose someone enters an area they are not authorized to enter.

The problem is not only to intercept this person; that is part of the problem. The second part is, once I have intercepted the person, what has he done in this time, where has he been going? And we can tell the user immediately that he has, for example, gone close to a trash bin.

So if you want to check whether he has left something, don't look at a thousand trash bins; look at this one, only this one, because we know what he has been doing. That's just an example. But the key point is always the same: once you know what people are doing in the real world, in 3D, with high precision and accuracy, that changes everything, including in security applications.

Austin Madson (32:36)
Yeah, some really interesting examples there, and we'll be on the lookout in the next couple of weeks for that announcement. So let's switch gears a bit. I have a few more things to talk about as we wind down here, kind of bigger-picture things. I'm curious to get your thoughts, Raul, on what are some of the areas for growth and the barriers for this kind of 3D spatial intelligence sub-industry that you're leading.

Raul Bravo (33:05)
Well, in some moments you're happy to be leading a space, et cetera, but in reality, the market is so big, the opportunities are so big, that it is better if there is a critical mass of players proposing solutions to the industry. Some customers are going to prefer some solutions over others; some solutions are going to be better adapted to some contexts, et cetera.

But we are still lacking this critical mass of professional solutions, I would say. There are many demo-level solutions. So we still need some time for these solutions to mature and to have many segments covered, because there are so many things to do. One thing preventing this market from growing quicker is clearly the software. As you saw, yes, lidar is the enabling sensor; you cannot capture this precise 3D data without lidar. But that's just the beginning.

And the value can only be delivered if you are really building a software stack on top of that. So that's what is holding the market back. This is going to change, of course, in the years to come, and we are going to see more and more lidar in so many use cases and applications.

But the key point here, compared to traditional applications of lidar, I would say, is real time. That's the key word. We are interested only in what's happening now and how movement is evolving, which is very different from other use cases such as mapping, which are also going to grow, but it's a totally different world.

Austin Madson (34:57)
Right, yeah. Thanks for shedding some light on that. I think after our conversation today, our listeners understand some of those complexities with real-time processing and tracking. Raul, what advice would you give someone who is thinking of starting a company in the 3D AI space, or even working for a company in this space? What kind of general advice would you give to someone like that?

Raul Bravo (35:24)
We have been talking about lidar because it is the subject of your podcast, et cetera. But you know, customers don't care whether we are using lidar or any particular hardware. They only care about the problems we are solving for them. So as in any company, the beginning is just the problems you want to solve, making sure you are not in love, I would say, with a technology or with a solution, because it is the problem that is important.

So that's why we don't define ourselves as a lidar company or as a software-for-lidar company. We use lidar, and we do think that there is no equivalent technology for now. It may happen that in some years, imagine, another technology gets to precise 3D real-time data, et cetera.

It may happen. We don't care. What we care about is the problems we want to solve in the real world. This real world is 3D, so you need 3D lidar. So my advice would be to focus on the business problems you want to solve and not so much on the technology.

Austin Madson (36:47)
Yeah, that's great advice. It's been an interesting chat these last 30 or 40 minutes. I know I learned a lot, and I hope everyone listening was able to learn a bit about this space. That's all we have for this episode. I really want to step back and thank Raul again for chatting with us today. So thanks again, Raul, for joining us, and thanks to everyone for tuning in.

{Music}

Announcer: Thanks for tuning in. Be sure to visit lidarmag.com to arrange automated notification of new podcast episodes, subscribe to newsletters or the print publication, and more. If you have a suggestion for a future episode, drop us a line. Thanks again for listening.

This edition of the LIDAR Magazine podcast is brought to you by rapidlasso. Our flagship product, the LAStools software suite, is a collection of highly efficient, multi-core command line tools to classify, tile, convert, filter, restore, triangulate, contour, clip and polygonize lidar data. Visit rapidlasso.de for details.

{Music}

THE END