> a stuck mirror
This is one of the advantages of using an array of low-power lasers rather than steering a single high-power laser: the array physically doesn't have a failure mode where all the power gets concentrated in one direction. In any case, you would hope that Class 1 lidars are eye-safe even at point-blank range, meaning that even if the beam gets stuck pointing into your eye, it would still be more or less safe.
> 20 cars at an intersection = 20 overlapping scanners, meaning even if each meets single-device Class 1, linear addition could give your retina a 20x dose, enough to push into Class 3B territory.
In the article, I point out a small nuance: if there are many lidars around, the beam from each 905 nm lidar will be focused to a different spot on your retina, so you are no worse off than with a single lidar. But if there are many 1550 nm lidars around, their beams have a cumulative heating effect on your cornea, potentially exceeding the safety threshold.
Also, if a lidar is eye-safe at point-blank range, then with multiple cars tens of meters away, beam divergence has already reduced the intensity, not to mention that when the lidars are scanning properly, the odds of all of them pointing at the same spot at the same time are vanishingly small.
By the way, the Waymo Laser Bear Honeycomb is the bumper lidar (940 nm iirc) and not the big 1550 nm unit that was on the Chrysler Pacificas. The newer Jaguar I-Pace cars don't have the 1550 nm lidar at all but have a much bigger and higher performance spinning lidar.
Ok, but now the software can cause the failure. Not sure if that's much of a relief.
https://en.wikipedia.org/wiki/Beamforming
It is possible for the array to produce a concentrated beam in one direction. The software determines when that happens and in what direction.
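To put a number on that: a minimal numpy sketch of a uniform linear array, with illustrative parameters. N coherent emitters steered by software put roughly N^2 times a single emitter's intensity into the main lobe, which is why "many low-power lasers" isn't automatically safe if the phases line up.

```python
import numpy as np

N = 64                      # emitters (illustrative)
d = 0.5                     # element spacing in wavelengths
theta = np.radians(np.linspace(-90, 90, 2001))
steer = np.radians(0.0)     # software-chosen steering angle

# Phase of each emitter at each far-field angle, relative to the steer angle.
phase = 2 * np.pi * d * np.arange(N)[:, None] * (np.sin(theta) - np.sin(steer))
intensity = np.abs(np.exp(1j * phase).sum(axis=0)) ** 2

# The peak lands at the steer angle with N^2 the intensity of one emitter.
print(f"main-lobe intensity vs one emitter: {intensity.max():.0f}x (N^2 = {N**2})")
```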
Detect the mirror being stuck and shut the beam off. Easy.
Hint: how bad would it be if the MCU in your gas heating boiler latched up and wouldn't shut the burner off? How is this mitigated?
> Additionally, no LIDAR manufacturer publishes beam-failure shutoff latency. Most are >50ms, which can be long enough for permanent injury
So yes, a mirror trip reset is probably a good start. But would I trust someone's vision to this alone?
Nope, nothing as complicated as that. You're close with the watchdog timer.
The solenoid is driven by a charge pump, which is capacitively coupled to the output of the controller. The controller toggles the gas-grant output on and off a couple of times a second, and it doesn't matter if it sticks high or low - if there are no pulses, the charge pump will "go flat" after about a second and drop the solenoid out.
Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Same way we used to do for electron beam scanning.
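A minimal sketch of that missing-pulse idea in software, assuming a hypothetical edge-of-scan photodiode that fires once per sweep (names and timings are illustrative, not any vendor's design): the beam enable decays on its own, and only fresh edge pulses keep it alive, so a stuck mirror - or stuck firmware - fails safe just like the flat charge pump.

```python
import time

SCAN_PERIOD_S = 0.01    # assumed 100 Hz sweep (illustrative)
GRACE_SWEEPS = 2        # kill the beam after two missed edge pulses

class BeamInterlock:
    """Dead-man interlock: the enable 'goes flat' unless refreshed."""

    def __init__(self):
        self.deadline = 0.0                 # beam off until the first pulse

    def edge_pulse(self):
        # Called from the edge-sensor interrupt once per completed sweep.
        self.deadline = time.monotonic() + GRACE_SWEEPS * SCAN_PERIOD_S

    def beam_allowed(self):
        # Polled by the laser driver; stuck high, stuck low, or silent
        # all look the same: no refresh, so the beam drops out.
        return time.monotonic() < self.deadline
```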
>> Do the same thing. If a sensor at the edge of the LIDAR's scan misses a scan, kill the beam.
Sounds like a great plan, but I question the "about a second" timing; the GP post calculates that "about a second" is between 4X and 10X the time required to cause damage. So, how fast do these things scan/cycle across their field of view? Could this be solved by speeding up the cycle, or would that overly compromise the image? Change the scan pattern, or insert more check-points in the pattern?
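Rough numbers on that, assuming a typical 10-20 Hz spinning unit (an assumption on my part, not a published spec), measured against the ~250 ms retinal figure from the GP: one checkpoint per revolution already bounds the worst-case stuck time to about two revolutions, and extra checkpoints tighten it further.

```python
injury_ms = 250.0                 # GP's claimed retinal-injury time
for rev_hz in (10, 20):           # assumed rotation rates
    rev_ms = 1000.0 / rev_hz
    for checkpoints in (1, 4):    # edge sensors per revolution
        worst_ms = 2 * rev_ms / checkpoints   # miss one check, catch the next
        print(f"{rev_hz} Hz, {checkpoints} checkpoint(s): "
              f"worst case ~{worst_ms:.0f} ms "
              f"({worst_ms / injury_ms:.1f}x the injury time)")
```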
One would hope there would be more regulation around this.
Just look at the comments on the article you posted, with sock-puppet accounts being actively hostile towards anyone asking questions.
I also know how the tech industry makes decisions about safety and responsibility (hahaha...). And I have seen some of the recent changes that automakers have somehow slipped past safety regulators. So it seems foolish to trust any of them on this safety issue.
Do we all have to move to rural areas, if we want to be able to go outside without wearing laser safety goggles?
[0] https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=76...
One could go further, and have an integrated system where the headset shows video feed from cameras around the car. You could almost get a 3rd person view of your own car like in video games.
To date most Class 1 lasers have also been hidden/enclosed, I think (and Class 1M is only safe if you don't view it through magnifying optics), so I'm not convinced that the limits for long-term daily exposure have been properly studied.
Until I see 3rd-party studies otherwise, I plan to treat vehicle lidar no differently than laser pointers and avoid looking directly at them. If/when such cars become common enough that this is too hard to do, maybe I'll purchase NIR-blocking glasses (though most of the ones I found have an ugly green tint; I wonder if it's possible to make the frequency cutoff sharp enough that it doesn't filter out visible reds).
I realize it's not easily possible to prove the negative, but when you're exposing the public, the burden must be on the company to be transparent and rigorous. And from what I see it's difficult to even find certification documents for the lidars used in commercial self-driving vehicles, possibly because everything is proprietary and trade secret.
Thomas Midgley even organised an event for reporters where he poured pure tetraethyl lead on his hands and inhaled its fumes for around a minute to show how safe it was. "I could do this every day without getting any health problems," he claimed. Once the reporters left, he needed a lie-down to recover.
Ouster uses (or at least used to use, not sure if they still do) 840 nm. Much higher quantum efficiency for standard silicon receivers, without having to play games with stressed silicon and stuff; but also much better focusing by the retina, so lower power permitted.
Social media is full of little clips of lidar systems burning out camera pixels, and I'm sure big proponents of the tech have paid people off over eye injuries at this point. There've probably been a ton of injuries that just got written off as random environmental hazards, "must have looked at the sun" etc.
It's nuts that this stuff gets deployed.
> the hurdle to full autonomous driving was basically jumped by Tesla this year.
Tesla doesn't have driverless operations anywhere, and their Austin fleet consists of <30 vehicles with full-time safety drivers and a far worse safety record than Waymo's. It's not nothing, but it's a long way from being a complete system (let alone the obviously superior one).
> Tesla are on their final stretch now and can basically manufacture the equivalent of the entire Waymo fleet in robotaxis in a week.
Waymo is operating at a much larger scale across a huge range of conditions with hardware that's generations behind their latest and still performing better.
I wrote a whole paragraph, then realised that "relative speed" is the sum of opposing speeds, i.e. two cars each going 50 km/h in opposite directions have a relative speed of 100 km/h.
Why can't you place them further away from each other using an additional optical system (i.e. a mirror) and adjusting for the additional distance in software?
Edit: There are basically three approaches to this problem that I'm aware of. Number one is to push the cross-talk below the noise floor -- your suggestion helps with this. Number two is to do noise cancellation: measure your cross-talk and subtract it from the signal. Number three is to make the cross-talk signal distinct from a real reflection (e.g. by modulating the pulses so that there's low correlation between an in-flight pulse and a being-fired pulse). In practice, all three work nicely together: getting the cross-talk below saturation allows cancellation to leave the signal in place, and reduced correlation means the imperfections of the cancellation still get cleaned up later in the pipeline.
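A toy sketch of that third approach, with illustrative numbers: give each unit its own pseudo-random code, and a receiver correlating against its own code sees a sharp peak at the true delay, while a foreign unit's pulses stay near the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
own = rng.choice([-1.0, 1.0], size=256)       # this unit's pseudo-random code
other = rng.choice([-1.0, 1.0], size=256)     # an interfering unit's code

delay = 37
echo = np.concatenate([np.zeros(delay), own])    # our own pulse train, delayed
jam = np.concatenate([np.zeros(delay), other])   # the foreign pulse train

# Correlating against our own code: sharp peak at the true delay.
auto = np.correlate(echo, own, mode="valid")
# Correlating the foreign train against our code: near the noise floor.
cross = np.correlate(jam, own, mode="valid")

print(f"own-echo peak:   {np.abs(auto).max():.0f}  (= code length)")
print(f"cross-talk peak: {np.abs(cross).max():.0f}  (~ sqrt of code length)")
```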
So many great lines:
- "We tried to find the smoothest thing in the frame but the smoothest thing turned out to be the sky"
- "We had it adapt to rough terrain by having me drive the car and it learned from my driving. Granted, it drives like a German now."
- "Nobody tells you their sensor error rate so we had to drive the car around and have the car learn the error probabilities"
- "Nobody needs to tell you this but Stanford students are amazing"
- "A lot of the people who entered are what I would call: 'car nuts' "
Baraja's selling point was, AFAIK, that they used an integrated swept laser source (those typically have lower coherence, but you can work around that in DSP).
Recently got a Waymo for the first time to take my kids and me from one hotel to another in Phoenix.
- Car pulls up
- I walk up to the trunk as I have a suitcase
- Out of habit, I go to open the trunk by pressing the button under the "handle" (didn't realize you have to unlock the car via the app first)
- My hand passes the spinning rear lidar, which "whacks" my hand.
Not a big deal but seems like an interesting design choice to place a motorized spinning device right next to where people are going to be reaching to open the trunk.
Likewise with the big spinning lidar on top, which was covered in the older Chrysler Pacificas but externally spinning in the newer Jaguar I-Paces.
[1] https://commons.wikimedia.org/wiki/File:Waymo_self-driving_c...
But humans have no lidar technology. We rely almost solely on sight for driving (and a tiny bit on sound, I guess). Hence in principle it should be possible for cars to do so too. My question is this: at what point, if at all, will self-driving get good enough to make automotive lidar redundant? Or will lidar always make self-driving that last 1% better than cameras alone?
And proprioception. If I'm driving in snowy conditions, I'm definitely paying attention to whether the wheels are slipping, the car is sliding, the steering wheel suddenly feels slack, etc. combined with memorized knowledge of the road.
However, that's ... not great. It requires a lot of active engagement from the driver and gets tiring fast.
Self-driving can be way better than this.
GPS with dead reckoning tells the car exactly where it is relative to a memorized map of the road--it won't miss a curve in whiteout conditions because it doesn't need to see the curve--that's a really big deal and gets you huge improvements over humans. Radar/lidar will detect a stopped car in front of you long before your sight will. And a computer system won't get tired after driving in stressful conditions for half an hour. etc.
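A minimal dead-reckoning sketch of that idea, with hypothetical wheel-speed and yaw-rate inputs (illustrative, not any vendor's stack): between GPS fixes the car keeps updating its pose against the stored map, even when vision drops out.

```python
import math

x, y, heading = 0.0, 0.0, 0.0     # last trusted GPS fix; heading in radians
dt = 0.1                          # assumed 10 Hz odometry

def step(speed_mps, yaw_rate_rps):
    # Integrate wheel speed and yaw rate to propagate the pose.
    global x, y, heading
    heading += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading) * dt
    y += speed_mps * math.sin(heading) * dt

for _ in range(50):               # 5 s with no usable vision or GPS
    step(speed_mps=13.9, yaw_rate_rps=0.05)   # ~50 km/h, gentle curve
print(f"estimated offset from last fix: x={x:.1f} m, y={y:.1f} m")
```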
The MKBHD YouTube video where he shows that his phone camera has burned-out pixels from reviewing lidar-equipped cars is revealing (if I recall correctly, he shows it live). I don't want that pointed at my eye.
I love lidar from an engineering / capability perspective. But I grew up with the "don't look in a laser!" warnings everywhere even on super low power units... and it's weird that those have somehow gone away. :P
Over the last 2 days I drove from Greenville, SC to Raleigh, NC (4-5 hours) and back with self-driving the entire way: traffic, Charlotte, navigating parking lots to pull into a Supercharger. The only place I took over was the conference center parking lot for the Secure Carolina's Conference.
It drives at least as well as me, often better, in almost all cases...and I'm a pretty confident driver.
I say all that to say this...I can't imagine lidar improving on what I'm already seeing that much. Diminishing returns would be the biggest concern from a standpoint of cost justification. The fact that this type of technology exists in a vehicle as affordable as the Model 3 is mind blowing.
To wit: plenty of other Tesla owners in a similar position to yours probably praised the system just as highly, until it slammed them into a wall, a car, or another obstacle, killing them.
Autopilot kills loads of people, but my understanding is that Autopilot is the dumb driver assist while FSD is the one that tries to solve general-purpose driving.
Has FSD really only killed 2 people? FSD has driven 6 billion miles and the human driver death rate is ~10 per billion miles, so it has killed 2 where "as good as a human" would mean about 60. That seems really good tbh.
EDIT: and it looks like "deactivate before collision" doesn't work as a cheat, NHTSA requires reporting if it was active at any time within 30 seconds of the crash: https://www.nhtsa.gov/laws-regulations/standing-general-orde...
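Spelling out that arithmetic with the figures as claimed (none of them independently verified):

```python
fsd_miles = 6e9            # claimed FSD fleet miles
human_rate = 10 / 1e9      # claimed human death rate: 10 per billion miles
print(f"expected deaths at the human rate: {fsd_miles * human_rate:.0f}")  # 60
print(f"claimed actual: 2 -> {2 / (fsd_miles * human_rate):.0%} of the human rate")
```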
My biggest gripe with FSD is typically that it's too safe in a few situations where I would have gone a little sooner at an intersection.
EDIT: https://www.tesla.com/fsd/safety
Miles Driven Before Major Collision
699,000 - US Avg
972,000 - Teslas Driven Manually (no active safety features)
2,300,000 - Teslas Driven Manually (active safety features)
5,100,000 - Teslas Driven with FSD (Supervised)

Miles Driven Before Minor Collision
229,000 - US Avg
308,000 - Teslas Driven Manually (no active safety features)
741,000 - Teslas Driven Manually (active safety features)
1,500,000 - Teslas Driven with FSD (Supervised)
I don't know the answer to any of these but it seems like the camera based approach has some advantages to it as well. Doesn't seem that cut and dry.
I guess we will see soon.
There are tons of people suing Tesla over FSD killing people, and every Robotaxi needs a "safety driver" ready to take over at all times; even with this they drive much worse than Waymo. You need more accurate data.
By 2018, if you listen to certain circa-2015 full self-driving technologists.
As far as Tesla, time will tell. I ride their robotaxis daily and see them performing better than Waymo, but it's obviously meaningless until we see accident stats after they remove safety monitors.
I've seen this claimed a lot but never have gotten a definitive answer.
Is this like "overall better but hard to pinpoint" or "this maneuver is smoother than Waymo" or something in between?
Would love to hear experiences with them since they're so limited currently.
Crowd: https://www.youtube.com/watch?v=3DWz1TD-VZg
Negotiation: https://www.youtube.com/shorts/NxloAweI6nU
And it is certain that in India they use sound for echolocation.
Agreed, but there are still really good human drivers, who still operate on sight alone. It's more about the upper bound, not the human average, that can be achieved with only sight.
The second and third place companies in terms of the number of deployed robotaxis are both subsidiaries of large Chinese Internet platforms, and both of them are also leaders in providing geospatial data and navigation in China. Neither operates camera-only vehicles.
I think a future where cameras are more eye-like would be a big leap forward, especially in bad weather: give them proper eyelids, tears for cleaning, the ability to rotate, actual lenses that refocus at different distances, etc.
https://tech.yahoo.com/transportation/articles/volvo-ends-re...
Interference between LIDARs can be a problem, mostly with the continuous-wave emitters. Pulsed emitters are unlikely to collide in time, especially if you put some random jitter in the pulse timing to prevent it. The radar people figured this out decades ago.
For pulsed emitters, adding random jitter to the timing does indeed avoid the problem of multiple lidars syncing up and firing at the same time. For some SPAD sensors, it's common to emit a train of multiple pulses per measurement, and adding random jitter between them is a known, useful trick to mitigate interference. In fact it isn't quite accurate to say that interference is a problem for continuous-wave emitters either: coherent FMCW lidars are typically quite robust against interference by, say, using randomized chirp patterns.
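A toy model of why the jitter works, with illustrative numbers: a foreign pulsed unit that happens to sync into your receive gate corrupts every shot, while randomized timing turns that into rare, non-persistent flickers.

```python
import numpy as np

rng = np.random.default_rng(1)
period_us, gate_us = 10.0, 0.05     # assumed pulse period and receive-gate width
shots = 100_000

# Worst case without jitter: the foreign unit drifts into sync with our gate
# and then corrupts every single shot.
synced = np.zeros(shots)
print("synced, no jitter:", np.mean(synced % period_us < gate_us))   # 1.0

# With per-shot randomized timing the overlap is rare and never persistent.
jittered = rng.uniform(0, period_us, shots)
print("randomized timing:", np.mean(jittered % period_us < gate_us)) # ~0.005
```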
This seems like it will be a growing problem with increased autonomy on the roads
I'm not aware of the inner workings of automotive lidar, but I can't imagine building one that didn't work that way.
Lidar is flawed at the foundational level. There's a reason no living creature on earth evolved it.
[1] https://lidarmag.com/2011/05/21/velodyne-donates-lidar-and-r...
There are two wavelengths of interest used: 905 nm, which silicon detectors can see but which the eye focuses onto the retina, and 1550 nm, which is absorbed by the cornea and lens before reaching the retina and so is allowed to run at much higher power.

The failure mode of these LIDARs can be akin to a weapon. A stuck mirror or frozen phased array turns into a continuous-wave pencil beam. A 1550 nm LIDAR leaking 1 W continuous will raise corneal temperature by >5 C in 100 ms; the threshold for cataract formation is only a 4 C rise. A 905 nm Class 1 system stuck on one pixel puts 10 mW continuous on the retina, capable of creating a lesion in 250 ms or less.

20 cars at an intersection = 20 overlapping scanners, meaning even if each meets single-device Class 1, linear addition could give your retina a 20x dose, enough to push into Class 3B territory. The current regs (IEC 60825-1:2014) assume single-source exposure. There is no standard for multi-source, multi-axis, moving-platform overlay.
Additionally, no LIDAR manufacturer publishes beam-failure shutoff latency. Most are >50ms, which can be long enough for permanent injury.
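A back-of-envelope sketch tying the stacking claim above to the divergence point made elsewhere in the thread, with assumed beam and pupil numbers (illustrative, not from any datasheet): linear addition is worst at point-blank range, and divergence discounts it quickly with distance.

```python
n_cars = 20
p_exit_mw = 10.0        # per-unit power into a point-blank pupil (figure above)
divergence_mrad = 2.0   # assumed full-angle beam divergence
pupil_mm = 7.0          # dark-adapted pupil diameter

for dist_m in (0.1, 10, 30):
    spot_mm = max(pupil_mm, divergence_mrad * dist_m)  # beam width at the eye
    frac = (pupil_mm / spot_mm) ** 2                   # fraction entering pupil
    print(f"{n_cars} units at {dist_m:5.1f} m: "
          f"~{n_cars * p_exit_mw * frac:6.2f} mW into one pupil")
```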