Since lidar provides distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only. Cars that use lidar are going to replace at least the ones that don't make use of this obvious answer to obstacle detection challenges.
As I understand it, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, talented people, AI training time and dataset breadth, power consumption) that you could split between two systems (vision and lidar), and where one of those systems would contradict the other in exactly the dangerous driving conditions, it's smarter to just max out vision and ignore lidar altogether.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
> lidars don't work well in rain/snow/fog.
Neither do cameras, or eyeballs.
When it's not safe to drive, it's not safe to drive.
I've been in zero-road-speed whiteout conditions several times. The only move to make is to the side of the road without getting stuck, and turning on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
Limited resources? Billions per year are being thrown at the base technology. We have the capital deployed to exhaust every path ten times over.
The Swiss cheese model would like to disagree.
This is silly. Cameras are cheap. Have both. Sensors that behave differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine learning filters do an even better job.
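The Kalman-filter point is easy to make concrete. Below is a minimal sketch (hypothetical numbers, not any real sensor spec) of inverse-variance weighting, the single-step core of a Kalman update, fusing a camera range estimate with a lidar one:

```python
def fuse(z1, var1, z2, var2):
    """Combine two independent measurements of the same distance,
    weighting each by the inverse of its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Camera says 50 m (noisy), lidar says 48 m (precise):
est, var = fuse(50.0, 4.0, 48.0, 0.04)  # est lands near 48 m
# In fog you don't throw the lidar away; you just inflate its variance
# and the estimate automatically leans back on the camera.
```

The point: degraded sensors don't "contradict" each other in a fused system; they just contribute less.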
The reasoning is cynical but sound. If the system uses only the sensing modes people have, it will make the mistakes people do. If a jury thinks "well, I could have made that mistake too!" you win. It doesn't matter if your system has fewer accidents overall if some of the failure modes are different from human ones, because the jury will think "how could it not figure that out?"
Until a lawyer points out that other cars see that. My car already has various sensors and, in manual driving, sounds alarms if there is a danger I seem not to have noticed. (There are false alarms, but most of the time I did notice and probably should have left more safety margin even though I wouldn't have hit it.)
Also, regulators gather statistics, and if cars with a given sensor do better they will mandate it.
I don't think that's the reasoning.
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have solved it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can be less reliable.
> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras, in particular their high dynamic range in low light with fast motion. Every photographer knows this.
There certainly is a pretty ongoing miscalculation regarding human intelligence, and consequently, empathy.
IMHO not using lidars sounds like a premature optimisation and a complication, with a level of hubris.
This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.
Eh, I think ‘miscalculation’ might be giving too much credit about good intentions.
He wanted (needed?) to get on the self-driving hype train to pump up the stock price, knew that at the time there was zero chance they could sell it at the price point lidar required (or even other effective sensors, like radar), and sold it anyway at a price point people would buy at, even though it was never plausibly going to work at the level being promised.
There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.
Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.
I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!
[dead]
This is a new and flawed rationale that I haven't heard before. Tesla cameras are worse than human eyes (lower resolution, sensitivity, and dynamic range) and don't have "ears" (microphones).
Very recent issue with Waymo: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-.... This is 17 years after they bet the farm on LIDAR, with no signs it's ever going to be cost-effective, or better than multiple cameras with millisecond reaction times and 360-degree coverage that never get tired, drunk, or distracted, backed by other cheaper sensors and a NN trained on billions of real-world data points.
There is also a report from the same flooding in LA of a Waymo driving into a flooded road and getting stuck.
They might have flipped a switch after that, causing this.
Tesla does not handle rain well either. This is not a LIDAR problem, it is a problem with self driving cars in general.
[deleted]
That's an example of it failing safe. I'd rather it did that than drive me into a sinkhole because it thought it was a puddle.
Ok so Waymo is useless in the rain then, kind of limiting. But at least that 0.000000000001% times it actually is a sinkhole you won't damage the bumper.
I'd rather a Waymo be useless in the rain rather than a Tesla be actively dangerous and likely to kill me.
Tesla ""autopilot"" fatalities: 65
Waymo fatalities: 0
Autopilot isn't Full Self-Driving (FSD); most cars these days ship with smart cruise control, which is basically what Autopilot is. Do you have fatality statistics for FSD?
If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.
Dude, that's not a 'puddle' as the article claims; that's a body of water where it's not even visually obvious whether it's safe to drive through. Maybe I'm a bad driver, but I'd hesitate to drive through that in a small car myself.
>A vehicle got stuck trying to figure out an obstacle so sensors with less information are better than sensors with more information.
Pretty hard to do if your whole selling point is ‘better and safer than human’ however?
Yea, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there weren't problems with the camera-only approach, but there are still examples of silly failures of it.
And if I remember correctly, they also removed their other superhuman sensor, radar, in newer models: the one which in certain conditions could sense multiple cars ahead by bouncing the signal underneath other cars.
Considering cameras can produce reliable-enough distance measurements AND handle all the color perception needed to drive roads legally, it was always a ridiculous idea by a certain set of people that lidar is necessary.
No, cameras cannot create reliable distance measurements in real-world conditions. Parallax is not a great way to measure distance for fast, unpredictably moving objects (such as cars on the road). And dirt or misalignment can significantly reduce accuracy compared to lab conditions.
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world, based on our internal world models, that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean and maintain their alignment to a very high degree of precision.
Stereo cameras are useless against repeating patterns. They easily match neighboring copies. And there are lots of repeating or repeating-like patterns that computers aren't smart enough to handle.
You can solve this by adding an emitter next to the camera that does something useful, be it just beaconing lights or noise patterns or phase-synced laser pulses. And those "active cameras" are what everyone calls LIDARs.
There is plenty of evidence showing that cameras alone are not safe enough, and even Tesla has realized that removing lidar to save cost was a mistake.
Just say Tesla, why censor yourself.
I have a suspicion here on HN: when criticizing big tech, especially Google and FB, at a certain time of the day a specific cohort comes online and downvotes. Suspiciously, it is a time when one could conclude that people in the US are starting work or coming online. Either fanboys, employees, or an organized group of users trying to silence big-tech criticism.
I have no proof of course, and it might be coincidence, or just a difference of mindset between US and European citizens. It has happened a few times already and to me it looks sus.
But if they actually read and don't just ctrl+F <company name>, then not writing the company name but hinting at it in an obvious way doesn't help either.
I have seen this happen multiple times, sometimes to fairly reasonable comments with just a tiny negative tone.
There is also flagging abuse which effectively kills the comment /post.
It's been my experience that HN and Reddit have a very high overlap in audience these days. The jerrybreakseverything crowd. Anything anti-Tesla, anti-Grok, is applauded.
> Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only
Human eyes do not have distance information either, but derive it well enough to drive cars from spatial parallax (by 'comparing' inputs from two eyes) or temporal parallax (by 'comparing' inputs from one eye at different points in time).
One can also argue that detecting absolute distance isn't necessary to drive a car. Time-to-contact may be more useful. Even only detecting "change in bearing" can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
Human/animal vision uses way more than parallax to judge distances and bearings - it uses a world model that evolved over millions of years to model the environment. That's why we can get excellent 3D images from a 2D screen, and also why our depth perception can be easily tricked with objects of unexpected size. Put a human or animal in an abstract environment with no shadows and no familiar objects, and you'll see that depth perception based solely on parallax is actually very bad.
Human eyes are much better than cameras at dealing with dynamic range. They’re also attached to a super-computer which has been continuously trained for many years to determine distances and classify objects.
I don't like the comparison between machines and humans. Humans don't travel around at 100 mph in packs of other humans. Why not use every sensor type at our disposal if it gives us more info to make decisions? Yes, I understand it's more complicated, but we figure stuff out.
Let me know when you have a camera package with human eye equivalency.
I'll preface by saying lidar should be used with autonomous vehicles.
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't set up to do in real time.
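The calibrated-stereo math is just triangulation. A hedged sketch with made-up rig parameters (the focal length, baseline, and disparity values below are illustrative, not from any real camera system):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a calibrated stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("no disparity: object at infinity or a bad match")
    return focal_px * baseline_m / disparity_px

depth = stereo_depth(focal_px=1000, baseline_m=0.3, disparity_px=6)  # 50.0 m
# The catch: at 50 m the disparity is only 6 px, so a one-pixel matching
# error (dirt, misalignment, repeated texture) moves the estimate to
# 60 m, a 20% swing. This is the reliability problem mentioned above.
```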
It was cost wasn't it?
If this lowers lidar costs, and Tesla has spent all this time refining the camera technology, they can now have both.
Use both.
The mind salivates at the idea of sub-$100 and soon after sub-$10 lidar. We could build spatial awareness into damn near everything. It'll be a Cambrian explosion of autonomous robots.
RIP to every single camera in existence if that happens. Lidar is notorious for damaging camera lenses.
I had to look this up, because I had never heard of it. How could a lens be damaged by infrared lasers?
It turns out it’s the sensors that are easily damaged by high powered lidar lasers.
There are complaints that some Volvo cars have damaged iPhone cameras. It's not even clear if Apple covers those under warranty. We've seen car-review YouTubers get their iPhone camera sensors damaged on film (captured by a second camera) while reviewing.
Here's one such review, where Marques shows how it happened to his phone.
One highlight from the video: he says most cameras are fine, it's just iPhones that don't have a very good IR filter. Which sounds correct; in my experience most cameras have pretty substantial IR filters that have to be removed if you want to photograph IR.
I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.
Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
If this is true, eyes are no better off. Especially since the beam can't be seen, who will look away? And at night, with pupils wide open?
There was someone who had his eyes damaged by sitting next to a heater.
> The biggest concern is not photographic cameras but rather the video cameras mounted on autonomous cars to gather crucial information the cars need to drive themselves.
So they don't care if that breaks my phone camera? Wtf?
The Epstein class's argument is: if you're not my property, why should we care?
Is there any deeper study on long term effects regarding retinal damage?
I would imagine, even with safe dosages, there would be some form of cumulative effect in terms of retinal phototoxicity.
More so if we consider the scenario in which this becomes a standard COTS feature in cars and we are walking around a city centre among a fleet of hundreds of thousands of these laser sources.
Some lidar units simply use a wavelength that the human eye is opaque to.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
> human eye is opaque to
Absorbing the laser isn't necessarily any good. Very hypothetically it could lead to cataracts.
The Sun emits much stronger IR, near-IR, and UV.
Absolutely, and it is a major cause of cataracts. Somewhere near 100% of people with natural lenses in their eyes will get cataracts eventually if they are ever exposed to unfiltered sunlight.
And staring directly at the sun is not recommended.
I suspect we can't quantify human eye-damage enough to easily rule-out chronic effects... until it's too late for the patient.
iPhones have had lidar for years, have cameras been affected?
Other cameras. When the lidar laser points at the camera sensor.
There are already very good sub-$100 lidars, especially for 2D since they were made en masse for vacuum cleaners. E.g. the LD19 or STL-19P as they're calling it now for some reason. You need to pair them with serious compute to run AMCL with them, plus actuation (though ST3215s are cheap and easy to integrate now too) and control for that actuation which also wants its own compute, plus a battery, etc. the costs quickly add up. Robotics is expensive regardless of how cheap components get.
RIP to humans under authoritarian regimes?
And, I guess, even more advanced surveillance.
I think we’re well past the point where mass surveillance was a technical challenge. Mass oppression through autonomous violence however…
Even back when Snowden was current news, we'd reached the point where laser microphones could cover every window in London for a bill of materials* less than the annual budget of London's police force.
* I have no way to estimate installation costs, but smartphones show that manufacturing at this scale doesn't need to increase total cost 10x more than the B.o.M.
LIDAR would be preferrable to cameras when it comes to privacy actually
People saying LIDARs can't recognize colors or LIDARs can't take pictures don't know what they are talking about.
They're just fancy cameras with synced flashes, not Star Trek matter-to-information transporters. Sometimes they rotate, sometimes not. Often monochrome, but that's where Bayer color filters come in. There's nothing fundamentally privacy-preserving about LIDARs.
I don't think it makes a difference. Dense lidar gives you more information than 2D colour imagery.
There are SLAM cameras that only select "interesting" points, which are privacy preserving. They are also very low power.
I’d definitely feel much better if most cameras in the world were replaced by LIDAR. I feel like it would be much tougher to have a flawless facial recognition program with LIDAR alone
Who needs facial recognition if you can identify people based on gait?
Gait recognition is almost entirely hype. Sure it works to tell the difference between n = 10 people but so what, you can tell the difference between a group of 10 people by what kind of shoes they are wearing.
Judicial systems where a 6% error rate is deemed way too high to lead to a conviction.
Then you combine it with some other technique, e.g. tracking individuals' daily routes, to lower the error rate. You only need a handful of bits to distinguish all inhabitants of the average city. But IMHO that error rate would likely be low enough for some judge to authorize more invasive surveillance of suspects thus identified.
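The "handful of bits" claim checks out with simple information-theory arithmetic (the population figures below are illustrative):

```python
import math

def bits_to_identify(population):
    """Bits of distinguishing information needed to single out
    one individual among `population` people."""
    return math.ceil(math.log2(population))

print(bits_to_identify(100_000))    # 17
print(bits_to_identify(1_000_000))  # 20
print(bits_to_identify(8_000_000))  # 23
```

Gait, daily route, build, and shoe type each plausibly contribute a few bits, so even weak identifiers combine to single people out quickly.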
The minute internet became widespread it was game over.
Pros and cons. :/
It'll never happen, but we need a bill of rights for privacy. The laypeople aren't well-versed or pained enough to ask for this, and big interest donors oppose it.
Maybe the EU and states like California will pioneer something here, though?
Edit: in general, I'm far more excited by cheap lidar tech than I am afraid of the downsides. We just need to be vigilant.
The EU already has. GDPR and the AI Act puts a lot of limits on what you can do in the open space, although it doesn't always go far enough.
I'd say the numbers listed here prove the GP's point about poor enforcement. The largest fine is roughly 0.97% of Meta's 2023 revenue, the equivalent of a $600 fine for somebody making 60k/year. It's a tiny-tiny cost of doing business at best, definitely not a deterrent, given Meta's blatant disregard for GDPR since then.
> the equivalent of a $600 fine for somebody making 60k / year
I don't know about you, but on that income I would certainly not brush off such a fine as a "cost of doing business". Would it cause me financial trouble, or would it force me to sacrifice other expenses? Absolutely not. But would I feel frustrated at having to pay it, feel stupid for my mistake, and do my best to avoid it in the future? Absolutely yes.
My bad, a better analogy would be a dealer making 60k/year selling drugs who gets caught by police and is fined $600. I wouldn't expect them to change much.
Fair enough. In that sense I do see value in the analogy.
Would you still do your best to avoid it if that involved taking a pay cut of more than $600/year?
1% of Meta's global revenue is a tiny-tiny cost of doing business? At that point, I think I can stop even trying to argue here. It's a massive fine any way you put it. Especially when you consider the ceiling hasn't been reached and non compliance is more and more costly by design.
Their net profit was $60 billion in 2024. This is peanuts: it can fluctuate by multiples of this fine in a month, depending on whether they've had a bad or good month, never mind year. This pretty much is just a cost of doing business.
It's not even 1% of their annual revenue, let alone the entire multi year period they've been in breach before and since. It's nothing to them.
The interesting part is that it keeps going up. You seem to believe we have somehow reached a cap where Meta can just expense it as a cost of doing business. That's not how European law works. The fine maximum is far higher, and repeated non-compliance keeps making the fines higher and higher. It's a ladder, not a sizing precedent.
The maximum GDPR fine is 4% of global revenue in the previous year. If a company has a 30% profit margin then they can, in theory, treat it as a cost of doing business, indefinitely.
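The arithmetic behind that claim, with an assumed 30% margin (not any company's actual financials):

```python
revenue = 100.0            # arbitrary units
profit = 0.30 * revenue    # assumed 30% net margin
max_fine = 0.04 * revenue  # GDPR ceiling: 4% of global annual turnover
share = max_fine / profit  # the fine consumes about 13% of profit
```

Whether losing ~13% of profit every year is "a cost of doing business" or a real deterrent is exactly what's being argued here.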
Pretty pathetic, but people keep insisting you can regulate capital.
Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.
Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?
The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.
And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.
And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1,000,000 residents by 1 CE. Again, without becoming some Hobbesian nightmare.
None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"
In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.
Humans have always done mass surveillance on each other. You don't need technology for that.
At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.
Scale matters.
That's an incredibly bullshit argument to defend the indefensible.
Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.
I don't think a random internet comment proves anything about society at large.
People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.
Mass surveillance does not cause everyone to magically get along.
History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.
Neither police forces nor "neighborhood monitoring" are equivalent to mass surveillance, though.
Anyway, I'm curious why, despite having less anonymity than at any point in history (at least from the perspective of law enforcement), we still see high rates of crime, from fraud to murder?
This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.
Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.
"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.
While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state (or state-deputized) enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or something that would come back to bite you only when something happened, because society did not have the surplus to pay nearly as many people to go around looking for deviance that could be leveraged to extract money. Those people had far more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.
> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management
And they almost exclusively deal in things that historical societies didn't even bother to regulate.
You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state, or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims are far more limited.
Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules, an HOA is a paper tiger for the most part (not to say that they don't suck).
Are we sure these things aren't damaging our eyes? It's lasers shooting all over the place, right?
When designed, built, installed and calibrated correctly, the power and wavelengths used are not considered harmful to humans.
Interestingly, there have been people in the LIDAR industry predicting costs like this for many years. I heard numbers like $250 per vehicle back in 2012 [1]
Of course, ambitious pricing like this is all about economies of scale - sensors that are used in production vehicles are ordered by the million, and that lowers the costs massively. When the huge orders didn't materialise, the economies of scale and low prices didn't materialise either.
Also, 'Luminar Technologies, a prominent U.S. lidar manufacturer, filed for Chapter 11 bankruptcy in December 2025.' LIDAR is useful in a small set of scenarios (calibration and validation), but do not bet the farm on it or make it the centrepiece of your sensor suite.
Also, MicroVision, the company in OP's article bought the IP from Luminar. This feels like a circular venture capital scam. Luminar originally went public via SPAC and made a bunch of people very wealthy before ultimately failing.
This is very wrong.
LIDAR scanners have revolutionized surveying by enabling rapid, high-precision 3D mapping of terrain and infrastructure, capturing millions of data points per second. LIDAR can penetrate dense vegetation, allowing accurate ground-level mapping in forested or obstructed areas. Drone-mounted LIDAR has become very popular. Tripod-mounted LIDAR scanners are very commonly used on construction sites. Handheld LIDAR scanners can map the inside of buildings with incredible accuracy; this is very commonly used to create digital twins of factories.
And none of this is on the order of magnitude that consumer automotive would have.
The EU requires every new car to have Autonomous Emergency Braking. If LiDAR becomes cheaper than radar, this is a potential market of millions.
Lidar is critical for any autonomous vehicle. It turns out a very accurate 3D point cloud of the environment is very useful for self driving. Crazy, I know.
Useful but not at all required.
Camera + radar is sufficient for driving, and camera + USS (ultrasonic sensors) is fine for parking.
Radar is just cheaper than the equivalent number of cameras and compute; it's also not really a strict requirement.
Look at how the current cars fuck up: it's mostly navigation, context understanding, and tight manoeuvres. Lidar gives you very little in these areas.
All of the actually WORKING self driving systems use LIDAR. This is not a coincidence.
I work on programs approaching L3+ from L2, with the requirement that the system works on 99% of roads (not Tesla, before people start fixating on that).
We find that the cases where lidar really helps are gathering training data, parking, and, if focused enough, some long-distance precision.
None of these has been instrumental in a final product; personally, I suspect that many of the cars that include lidar use it for data collection and edge cases more than as part of the driving perception model.
Accidents are not normal driving situations but edge cases.
Waymo is the best current autonomous driving system and Waymo uses LIDAR. This is because LIDAR is an incredibly effective sensor for accurate range data. Vision and Radar range data is much less accurate and reliable.
Waymo uses LIDAR in the real-time control loop. It combines LiDAR, camera, and radar data in real time to build a 3D representation of the environment, which is constantly updated.
I fundamentally don't trust any level 4 system that doesn't use LIDAR
Like Waymo? (https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...) 17 years after betting the farm on LIDAR, the solution fails to navigate a puddle. Sorry, but they bet on the wrong technology; Tesla has overtaken them with a multi-camera and NN solution.
> Tesla has overtaken them with multi camera and NN solution.
Let me guess, you heard this from Elon?
Your conclusion from a single incident is a bad inference. One vehicle getting confused by a puddle (likely a sensor fusion edge case or mapping artifact, not a fundamental LIDAR failure) doesn't indict the technology. Tesla's cameras have produced vastly more failures.
Waymo has driven tens of millions of autonomous miles with a serious injury/fatality rate dramatically lower than human drivers. The actual data shows the technology works. Tesla FSD still requires active driver supervision and is not legally or technically a robotaxi system. Comparing them as if they're at parity is wrong.
LIDAR gives direct metric depth with no inference required. Camera-only systems must infer depth from 2D images using neural networks, which introduces failure modes LIDAR doesn't have. Radar is very valuable when LIDAR and cameras give ambiguous data.
On what metrics has Tesla overtaken Waymo? Deployed robotaxi revenue miles? No. Disengagement rates? No published comparable data. Safety per mile in driverless operation? No.
A Tesla wouldn't stop for a puddle. It's also not locked to a small geofenced area (people have driven coast to coast on FSD without a single intervention, parking spot to parking spot). When I can buy a Waymo vehicle that does this, then Waymo will have caught up with Tesla.
Wow, so it can cope with driving on the highway. That's the easy part.
Your puddle example is utterly irrelevant. Teslas are notorious for phantom braking. Robotaxis are very much locked to tiny geofenced areas. Some even shaped like a penis because Musk is such a child.
"people have driven coast to coast without a single intervention on FSD including parking spot to parking spot"
I find this claim very dubious. Prove it. Teslas never drive empty for a very good reason.
Err, they have lots of Model Ys in Austin as Robotaxis right now with no drivers. I guess this is also 'dubious'. Look, it's clear you have a huge bias. I would urge you to read up on https://grokipedia.com/page/List_of_fallacies otherwise your emotional responses will blind you to reality.
'MicroVision says its sensor could one day break the $100 barrier'. When an article says "one day", read: not in the next decade.
Around a decade ago the nascent LIDAR industry boomed and dozens of startups emerged out of nowhere all racing to make cheap automotive grade LIDAR, and here we are.
Of course MicroVision is only claiming their LIDAR to be suitable for advanced driver assist, but ADAS encompasses a wide array of capabilities: basically everything between cruise control and robotaxis. So there's no definition of how much LIDAR you need to do the job, just however much you feel like. Tesla feels like none at all.
MicroVision has been saying that for half a decade. Products? Nowhere to be found.
I wonder if this could be adapted to the vtuber market. Saw a vtuber body tracker being marketed at $11k recently.
> laser pulses
> phased-array
I'm not well versed in RF physics. I had the feeling that light-wave coherency in lasers had to be created at a single source (or amplified as the light passes through). This is the first time I've heard of phased-array lasers.
Can someone knowledgeable chime in on this?
The beam is split and re-emitted in multiple points. By controlling the optical length (refractive index, or just the length of the waveguide by using optical junctions) of the path that leads to each emitter, the phase can be adjusted.
In practice, this can be done with phase change materials (heat/cool materials to change their index), or micro ring resonators (to divert light from one wave guide to another).
The beam then self-interferes, and the resulting interference pattern (constructive/destructive depending on the direction) is used to steer the beam.
You are right that a single source is needed, though I imagine that you can also use a laser source and shine it at another "pumped" material to have it emit more coherent light.
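The steering described above can be sketched numerically: sum the complex field contributions of N emitters carrying a linear phase ramp and the interference peak lands at the steering angle. This is only a toy far-field model; the element count, spacing, and angle are made up for illustration, not taken from any real device.

```python
import numpy as np

# Far-field intensity of an N-element phased array (toy model).
N = 16                         # number of emitters (assumed)
d = 0.5                        # element spacing in wavelengths (assumed)
theta_steer = np.deg2rad(20)   # desired beam direction

# Phase applied to element n to steer the beam; this is the "optical length"
# tuning the parent comment describes (heaters, ring resonators, etc.).
n = np.arange(N)
phase = -2 * np.pi * d * n * np.sin(theta_steer)

# Sum the complex field contributions over observation angles.
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
field = np.exp(1j * (2 * np.pi * d * np.outer(np.sin(theta), n) + phase)).sum(axis=1)
intensity = np.abs(field) ** 2 / N ** 2   # normalized: 1.0 at perfect alignment

# The peak of the interference pattern lands at the steering angle.
peak_deg = np.rad2deg(theta[np.argmax(intensity)])
print(f"main lobe at {peak_deg:.1f} degrees")   # ~20.0
```

Sweeping `theta_steer` over time is the whole trick: the beam scans with no moving parts.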
I've been thinking about possible use-cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (a battlefield with jammers). This would make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)
I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).
NB: just my layman's understanding
In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder at light frequencies than at radar frequencies. I'm not even sure we can measure the phase of a light beam directly, and if we could, the Nyquist frequency is incredibly high: 2x the carrier frequency takes us into PHz territory.
There might be something cute you can do with interference patterns but no idea about that. We do sort of similar things with astronomic observations.
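The PHz claim checks out with simple arithmetic; the wavelengths below are just common examples (905 nm and 1550 nm being typical lidar bands):

```python
# Direct sampling of an optical carrier would need an ADC running at twice
# the optical frequency, which is why radar-style phase measurement
# doesn't transfer straightforwardly to light.
c = 299_792_458  # speed of light, m/s
for wavelength_nm in (500, 905, 1550):
    f = c / (wavelength_nm * 1e-9)
    print(f"{wavelength_nm} nm -> carrier {f / 1e12:.0f} THz, Nyquist {2 * f / 1e15:.2f} PHz")
```

Visible light indeed needs over a PHz; even 1550 nm still needs hundreds of THz.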
A phased array is an antenna composed of multiple smaller antennas within the same plane that can constructively/destructively aim its radio beam within any direction it is facing. I'm no radio engineer but I think it works via an interference pattern being strongest in the direction you want the beam aimed. This is mostly used in radar arrays though I suppose it could work with light too since it is also a wave.
Not an expert, but the main challenges with laser coherency arise when shaping the output using multiple transmitters.
For lidar you transmit a pulse from a single source and receive its reflection at multiple points. Mentioning phased array with lidar almost always means receiving.
Interesting to see the cost curve drop ... this always changes the market.
I have been watching the sensor space for a while. Cheap LIDAR units could open up weird DIY uses, not just cars. Regulatory and mapping integration will also matter; I tried to work with public datasets and it's messy. The hardware is only one part. Still, it's exciting to see multiple vendors in the space: competition might push them to refine the software stack as well as the hardware. I'm keeping an eye on how these systems handle edge cases in bad weather, though. I don't think we have seen enough data yet.
> Cheap LIDAR units could open up weird DIY uses and not just cars.
Interestingly, there are already some comparatively cheap LIDAR units on the market.
In the automotive market, ideally you need a 200m+ range (or whatever the stopping distance of your vehicle is) and you need to operate in bright direct sunlight (good luck making an eye-safe laser that doesn't get washed out by the sun) and you need more than one scanning plane (for when the car goes over bumps).
On the other hand, for indoor robotics where a 10m range is enough and there's much less direct sunlight? Your local robotics stockist probably already has something <$400
Neato, from San Diego, developed a $30 (indoor, parallax-based) LIDAR about 20 years ago for their vacuum cleaners [1].
Later, improved units based on the same principle became ubiquitous in Chinese robot vacuums [2]. Such LIDARs, and similar-looking but more conventional time-of-flight units, sell for anywhere between $20 and $200, depending on the details of the design.
Sounds like the quality isn't all that great, but LD06 sensors look like they're about $20. Someone who works on libraries in this area suggested the STL27L, which seems to be about $160; here's an outdoor scan from it: https://sketchfab.com/3d-models/pidar-scan-240901-0647-7997b...
Not sure if the LD06 is a scanner like this or if it's just a line (like you'd use for a cheaper robot vac).
[dead]
@dang .... do these comments seem organic to you? old accounts with almost zero karma going out of their way to use the same verbiage to compliment waymo 18 minutes after an article gets posted? .... dead internet at work.
Please don't post like this. If you suspect something, please email us (hn@ycombinator.com) with links to specific comments. The guidelines are clear about this:
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Anytime a Tesla- or Elon-related article is posted, it gets a barrage of negative comments, usually FUD-like, and any neutral or positive comment gets downvoted heavily. A bit suspicious, to say the least; it's a very clear pattern. They are not doing it very well, though; it should be a bit more nuanced.
There is no evidence of any such organised campaign. The critical comments we see against that company and person are generally from known, established HN users, and align with frequently-expressed sentiments among the general public. And the complaint is just as often made that "anything remotely critical" about that company and person is flagged. If posts about the topic are being downvoted and flagged, it's mostly because that person and company are in the news so frequently that most commentary about them is repetitive, sensationalist and uninteresting, and thus off topic for HN.
What a great website. Thanks for the data! And good work
Or everyone is just tired of tesla and their stubborn camera only tech that will fail in higher autonomy cases?
No no it's the cabal...
Could be lurkers triggered
There are laser measurers sold for a few bucks on Temu. Robot vacuums sold for a few hundred dollars have lidars that map out the room in seconds.
Is there any actual technical reason why automotive lidar must be expensive? Just combine visual processing with a single-point sampler that feeds it points of interest, and an accurate model of the surroundings can be built.
Most spinning robovac LIDARs are 2D. Most solid-state robovac LIDARs are like an 8x8 array of laser pointers.
Automotive LIDARs are more like 128x64 px for production models, or 1920x1080 px for experimental models with industrial-grade equivalents of GbE and/or HDMI outputs. Totally different technologies.
Probably one factor is range. The article talks about a 200-300m range; a robot vacuum needs maybe 10m best case?
Is the 1 cm spec 1σ (or less) or worst-case? It’s a safety-critical application.
I know that automotive parts are required by standard to withstand 80°C (or 120°C for military use). A robot vacuum working in a living room can probably be made cheaper because it does not have to face as harsh an environment.
Also, range is probably a factor. In a living room, you probably need something like 20m max. Your car should "see" farther.
Sure, these are the assumptions, but silicon is silicon, copper is copper, and solder is solder. They don't use low-melting-point electronics in vacuums and hardened stuff in cars; the tech is about the same unless it is supposed to work in a highly radioactive environment. The plastics are different, but car interiors are full of plastics, so it's unlikely that the temperature-resistant plastics needed for this cost more than a cupholder.
As for the range, again, pretty powerful lasers sell for sub-$10 prices at retail. I am sure calibration and precision requirements rise with distance, but are they really orders of magnitude higher? A 120-meter laser measurer with 1 cm accuracy is 15 euros on Temu, and that thing is a handheld device with an LCD screen and a battery. How much distance do you actually need?
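For scale, the timing side of those range/accuracy numbers is plain speed-of-light arithmetic (a time-of-flight lidar measures the round trip, range = c * t / 2):

```python
# Round-trip timing for a time-of-flight lidar.
c = 299_792_458  # speed of light, m/s

def round_trip_ns(range_m: float) -> float:
    """Round-trip time in nanoseconds for a target at range_m meters."""
    return 2 * range_m / c * 1e9

print(f"200 m target: {round_trip_ns(200):.0f} ns round trip")   # ~1334 ns
# Centimeter-level range resolution implies picosecond-level timing:
dt_ps = 2 * 0.01 / c * 1e12
print(f"1 cm of range = {dt_ps:.0f} ps of timing")                # ~67 ps
```

So the range itself is cheap time-wise; holding ~67 ps timing jitter over a weak return from a distant, dim target in sunlight is where the engineering cost tends to live.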
Not only that but vibrations play a big part as well, especially on ICE vehicles.
Vibrations are surely an issue with electromechanical systems but hardly with electronics. There are plenty of cheap electronic accessories for cars and you can observe that those keep functioning for years.
Please keep politics out of it.
ICE = internal combustion engine
To add to the rest of the comments: a reliability standard also adds cost. The scale is different, but compare a car bolt vs. a manned spacecraft's bolt.
Radar is extremely expensive, and lidar is just below that.
Glad to see someone lowering the cost of this technology, and hope to see lots of engineers using this tech as a result.
We might even see a boom in LIDAR tech as a result
What makes you say radar is extremely expensive? Virtually every car from the last decade has at least one, many have two or more. They’re barely more than a PCB and a radar ASIC.
If you want to compete with LIDAR, you need high resolution 4D (range, velocity, azimuth, and height) RADAR. Those are usually phased arrays with expensive phase sensitive electronics, and behind that a chip that can do a lot of Fourier transforms very quickly.
The cheap RADAR devices you're talking about usually only output range and velocity, sometimes for a handful of rather large azimuth slices. That doesn't compete with LIDAR at all.
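To illustrate the "lots of Fourier transforms" point above, here is a toy FMCW range measurement for a single target. The chirp parameters are invented for the example; a real 4D imaging radar repeats this per chirp and per antenna, then stacks Doppler and angle FFTs on top.

```python
import numpy as np

# Hypothetical chirp parameters (illustrative, not any specific radar ASIC):
c = 3e8
B = 300e6            # chirp bandwidth, Hz -> range resolution c/(2B) = 0.5 m
T = 50e-6            # chirp duration, s
S = B / T            # chirp slope, Hz/s
fs = 10e6            # ADC sample rate, Hz
N = int(fs * T)      # samples per chirp (500)

# A target at 40 m produces a beat tone at f_b = S * 2R / c after mixing.
R_true = 40.0
f_beat = S * 2 * R_true / c
t = np.arange(N) / fs
beat = np.cos(2 * np.pi * f_beat * t)

# Range processing is "just" an FFT of the beat signal per chirp; doing this
# across many chirps and antennas in real time is where the compute goes.
spectrum = np.abs(np.fft.rfft(beat * np.hanning(N)))
f_axis = np.fft.rfftfreq(N, 1 / fs)
R_est = f_axis[np.argmax(spectrum)] * c / (2 * S)
print(f"estimated range: {R_est:.1f} m")
```

The Fourier transform turns each target's beat tone into a peak at the corresponding range bin; azimuth and height then require repeating this across a phased-array aperture, which is what drives the cost.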
Below is one of the comments posted to the original article; reading it makes me think that much of the article has been regurgitated by some AI:
>"This misleading article contains numerous factual errors regarding automotive lidar. Here are the most glaring:
There are multiple manufacturers, including Hesai, that use mechanical means for at least one scan axis and are already sold for a fraction of the "$10k - $20k" price noted by the author. Luminar itself built this class of scanners before going bankrupt.
Per Microvision's own website, the Movia-S does not use a phased array and also does not have a range anywhere near 200m.
Velodyne and Luminar do not even exist as companies anymore. Both have gone bankrupt and been acquired by competitors."
Is this human-safe at these volumes? There was a time you could get your feet sized by putting them into an X-ray box at the shoe store. Those were removed from stores once the harm was known.
Well, the energy levels used in these devices should be minuscule, and the wavelengths used are well studied. The problem with X-rays was a lack of studies on health effects, and of regulations based on those effects. Since that time, I think we've studied radiation (be it light, RF, or other parts of the spectrum) much more.
There is indeed a possibility that we're overlooking some bio-electromagnetic interaction effects; for instance, there is now some evidence that LED lights might not be harmless. But again, it's not that they affect biological structures directly; rather, the lack of certain spectral components has some effects. It is an interesting topic to research. But the lidar "should" be safe.
What is this author even doing with these numbers?
can I buy it on digikey yet?
How could I buy one?
It might, but comma.ai proves that lidar is a red herring, which is further supported by the fact that Waymo is able to drive vision-only if necessary.
> comma.ai proves that lidar is a red herring
I mean, it doesn't. If you actually look at it, comma.ai proves that level 2 doesn't require lidar. That's not the same as full-speed safe autonomy.
While it is possible to drive vision-only (assuming the right array of cameras, i.e. not the way Tesla has done it), lidar gives you a low-latency source of depth that can correct vision mistakes. It's also much less energy-intensive to work out whether an object is dangerous and on a collision course.
To do that with vision alone, you need to work out what the object is (i.e. is it a shadow?) and then triangulate it. That requires continuous camera calibration, and it isn't all that easy. If you have a depth "prior" (yes it's real, yes it's large, and yes it's going to collide), it's much simpler to use vision to work out what to do.
It's fair to point out that comma.ai is an SAE level 2 system; however, it's not geofenced at all, which is an SAE level 5 requirement. But really that brings up the fact that SAE's levels aren't the right ones, merely the ones they chose to define since they're the standards body. A better set of levels are the seven I go into more detail about on my blog.
As far as distinguishing shadows on the road, that's what radar is for. Shadows on the road as seen by the vision system don't show up on radar as something the vehicle will run into.
Your autonomy scale is pretty arbitrary and encodes assumptions about the underlying technology and environments the vehicle is supposed to implement and operate in.
The SAE autonomy scale is about dividing responsibility between the driver and the assistance system. The lowest level represents full responsibility on the driver, and the highest level represents full responsibility on the system.
If there is a geofenced transportation system like the Vegas loop and the cars can drive without a human driver, then that is a level 5 system. By the way, geofencing is not an "SAE level 5" requirement. Geofencing is a tool to make it easier to reach requirements by reducing the scope of what full autonomy represents.
I saw a Waymo in Seattle, today. If Waymo can get Seattle right, that gives me a lot of confidence that their stack is very capable of difficult road conditions.
Note: I have not had the pleasure of riding in one yet, but from what my friend in SJ says, it’s very convenient and confidence-inspiring.
I have had the pleasure of riding a few times in San Francisco.
The drive was delightful and felt really safe. It handled the SF terrain, traffic and mixed traffic like trams very well.
I wouldn't trust a self-driving Tesla (or any camera-only system) though!
I took the Waymo from San Jose airport to home on the peninsula. It took the 101 highway back for the most part, driving very conservatively at 55-65 mph, and in the rightmost lane. It still has a few quirks, though. When there aren't any cars around it will speed up to 65 mph, but at on-ramps it will slow down to 55 and then speed up once past. It will get stuck behind slow drivers in the rightmost lane and patiently follow a few car lengths behind them. On the plus side, the lidar stack's field of view, as shown on the internal display, seems to see pretty far down the highway.
Tesla doesnt have Lidar?
No. They don't even have radar, camera is all you need, as per Elon.
Even more fury-inducing, they disabled the ultrasonic parking sensors even on cars that physically have them, moving to a vision-only stack that is nowhere near as accurate or as good, and which categorically cannot tell that ground truth has changed in its blind spot. But hey, all _people_ need are two cameras, right?
Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo) without touching the wheel, parking spot to parking spot, every day. Have you tried it?
Maybe because of the multiple investigations Tesla has currently due to crashes, deaths, injuries, etc. all caused by "whoops our cameras were fooled by some glare/fog and accelerated into a truck/pole"
Those are mainly Autopilot incidents, which people conflate with FSD, and a high percentage are human-caused accidents (Autopilot requires full attention and the driver is liable).
Why does Tesla ship a feature called "autopilot" which kills you if you use it instead of "FSD"?
Autopilot is Tesla’s brand name for adaptive cruise control with lane centering. This is a common feature available on a wide range of vehicles from nearly every major manufacturer, though marketed under different names (e.g., ProPilot, BlueCruise).
Drivers can and do misuse adaptive cruise control systems, sometimes with fatal consequences. Memes aside, there is no strong evidence that fatal misuse occurs more frequently by owners of Tesla cars than with comparable systems from other brands.
This perception reflects the Baader–Meinhof phenomenon, more commonly known as the frequency illusion. Nobody is collecting statistics for other brands, so it’s assumed the phenomenon doesn’t occur.
A similar pattern occurred with media coverage of EV fires. Except in this case, good statistics exist which prove the opposite: ICE vehicles catch fire more often than EVs.
> Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo)
I own a Tesla and paid about $10K for the full self driving capability a few years ago. Yeah, I would not trust a Tesla to drive me from airport to my house. There is a reason Tesla is still stuck at level 2 autonomy certification and not 3, 4 or 5.
I would agree for most Teslas on the road. However, the very latest (HW4) cars are significantly better at FSD where I would nearly trust it now. Most of those older (pre-2025?) cars will not have their hardware upgraded so they'll still have FSD that drives like an idiot!
Because it is not real autonomous driving? Being liable for software that you can neither verify nor trust is THE dealbreaker. Once Tesla says "We are liable for all accidents with FSD" with higher level autonomous driving this game changes. But Waymo is just way more reliable.
Will Musk backtrack on the whole "CV only, that's how humans do it" stance if the price becomes this low?
To be fair, Musk was only parroting what Karpathy was telling him so you should ask him how self driving cars are supposed to work with CV only.
[dead]
[dead]
Oh hell yeah, we can finally stop the braindead attempts to make a safe self-driving car with just cameras.
They might not use them for autopilot, but maybe for some emergency braking stuff, for when everything else fails.
Is there anyone using only cameras except Tesla?
Xpeng, Wayve, and aiMotive, to name three. Probably many others who claim to use LIDAR but don't actually rely on it. Because LIDAR is perceived as a prerequisite for autonomous safety, admitting to not needing it is a bad PR move, for now.
There is a massive technical difference between vision-first with LiDAR redundancy and no LiDAR at all, which is Tesla's approach. Those are not the same architecture, so claiming XPeng, Waymo, or aiMotive validate Tesla is technically misleading.
XPeng's system is sensor fusion; it is not camera-only. Waymo is even clearer: for them, LiDAR is not optional. aiMotive has now started to market camera-only, but it's experimental, with no production deployments.
Nope...
Yes, silly using just cameras. I mean, humans have lidar sensors, that's why they can drive; why didn't we just copy that... oh wait.
In all seriousness though, Tesla is producing Cybercabs now which are a tenth the price of Waymo's vehicles and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
Humans also don't have wheels, but we build objects with wheels. It is as if we can build objects that don't resemble humans for specific purposes. Crazy...
> Tesla is producing Cybercabs now which are a tenth the price of Waymo's vehicles and can drive autonomously anywhere in the world.
My understanding is that cyber cabs still need safety drivers to operate, is that not the case?
Yes, but they are useless: they can't steer, hence why they have more accidents than humans per mile driven.
They have no steering wheel or pedals so no
> Tesla is producing Cybercabs now which are a tenth the price of Waymo's vehicles and can drive autonomously anywhere in the world.
Wait, what? When did they actually enter mass production?
> I mean humans have Lidar sensors
Real-time SLAM is actually pretty good; the hard part is reliable object detection using just vision. Tesla's forward-facing cameras are effectively monocular, which means it's much, much harder to get depth (it's not impossible, but moving objects are much more difficult to observe if you only have cameras aligned on the same plane with no real parallax).
Ultimately Musk is right: you probably don't need lidar to drive safely. But it's far simpler and easier to do if you have lidar. It's also safer. Musk said "lidars are a crutch" not because he is some sort of genius; it's been obvious since the mid-'00s (if not earlier) that SLAM-only driving is the way forward. The reason he said it is that he thought he could save money by not having lidar. The problem for him is that he didn't do the research to see how far away proper machine perception is from the last 1% in accuracy needed to make vision-only safe and reliable.
This is a weirdly tired counterpoint that Elon and Elonstans like to bandy about as if it's an apples to apples comparison. Humans have a weirdly ultra-high-dynamic-range binocular vision system mounted on an advanced ptz/swivel gimbal that allows for a great degree of freedom of movement, parallax effects, and a complex heuristic system for analyzing vision data.
The Tesla FSD system has... well, sure, a few more cameras, but they're low resolution, and in inconveniently fixed locations.
My alley has an occlusion at the corner where it connects to the main road: a very tall, very ample bush that basically makes it impossible to authoritatively check oncoming traffic to my left. I, a human, can determine that if I see the light flicker even slightly as it filters through the bushes, that the path is not clear: a car is likely causing that very slight change in light. My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar, in a fixed location that means that without nosing my car _into_ the travel lane, there is literally no way for it to be sure the path is clear.
This edge case is navigated near-perfectly by Waymo, since its roof-mounted lidar can see above and beyond the bush and determine that the path is clear. And to hit back on the "Tesla is making cheaper cars that can drive autonomously anywhere in the world" claim: I mean, they still aren't? Not authoritatively. Not authoritatively enough that they aren't seeing all sorts of interventions in the few "driverless" trials they're doing in Austin. Not authoritatively enough in my own experience with Tesla FSD. It works well enough on the fat part of the bell curve, but those edges will get you, and a vision-only system is extremely brittle in certain conditions and with certain failure modes where a lidar/radar backup helps.
Moreover, Waymo has brought lidar development in-house, they're working to dramatically reduce their vehicle platform cost by reducing some redundant sensors, and they can now simulate a ground truth model of an absurd number of edge cases and odd scenarios, as well as simulate different conditions for real-world locations in parallel with their new world modeling systems.
None of which reads to me as "not going well for Waymo." Waymo completes over 450,000 fully autonomous rides per week right now. They're dramatically lowering their own barriers to new cities/geographies/conditions, and they're pushing down the cost per unit substantially. Yeah, it won't get to be as cheap as Tesla owning the entire means of production, but I'm still extremely bullish on Waymo being the frontrunner for autonomous driving for the foreseeable future.
Waymos are still making lots of errors that a human wouldn't (stopping in the middle of a road due to a puddle was a recent one: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...) 17 years after betting on LIDAR. I think Tesla is ahead now in most respects. I could be wrong though; we will probably know by the end of this year.
> I think Tesla is ahead now in most respects
Do you actually own a Tesla? I do. With FSD. And let me assure you, you are very wrong.
Humans cannot drive safely. Human drivers kill someone every 26 seconds. Waymos have never killed a person.
Part of that is that humans are distractible, and their performance can be degraded in many ways, and that silicon thinks faster than meat.
But part of it is the sensor suite. Look at Waymo vs Tesla robotaxi accident rates.
> Yes, silly using just cameras. I mean, humans have lidar sensors, that's why they can drive; why didn't we just copy that... oh wait.
Humans don't have wheels and cannot go 70MPH. Humans also don't have rear view cameras and cannot process video feeds from 8 cameras simultaneously. The point of these machines is to be better than humans for transportation. If adding LIDAR means that these vehicles can see better than humans and avoid accidents that humans do get into, then I for one want them in my vehicle.
I don't understand what you're saying.
Stereo-based depth mapping is kind of bad, especially if it is not IR-assisted. The quality you get from lidar out of the box is crazy good in comparison.
What you can do is train a model on both the camera and lidar data to produce a good disparity and depth map, but that just means you're using more lidar, not less.
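The quality gap above can be made quantitative with the standard stereo relation Z = f * B / d: for a fixed sub-pixel matching error, the depth error grows roughly with the square of the distance. The rig numbers below are assumptions for illustration, not any real camera.

```python
# Why stereo depth degrades with range (sketch with made-up rig numbers).
f_px = 1000.0        # focal length in pixels (assumed)
baseline_m = 0.3     # distance between the two cameras (assumed)
disp_err_px = 0.25   # a quarter-pixel matching error (assumed)

def depth_m(disparity_px: float) -> float:
    """Depth from disparity: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

for Z in (5, 20, 80):
    d = f_px * baseline_m / Z          # true disparity for a target at Z meters
    err = abs(depth_m(d - disp_err_px) - Z)
    print(f"{Z:>3} m target -> disparity {d:.2f} px, depth error ~{err:.2f} m")
```

A quarter-pixel error that costs centimeters at 5 m costs meters at 80 m, which is why direct metric range from lidar is so much better behaved at driving distances.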
>In all seriousness though, Tesla is producing Cybercabs now which are a tenth the price of Waymo's vehicles and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
This feels like a highly misleading claim that might technically be true in the sense that there are fewer restrictions, but a reduction in restrictions doesn't imply an increase in capability.
The comment about Waymo seems to be particularly myopic. Waymo has self driving technology and is operating as a financially successful business. There is no conceivable situation where the mere existence of competition with almost the same capabilities would shake that up. Why isn't it companies like Uber, who have significantly fallen behind, that are in trouble?
>Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only. Lidar-equipped cars are going to replace at least the ones that don't make use of this obvious answer to obstacle-detection challenges.
As I understand it, lidars don't work well in rain/snow/fog. So in the real world, where you have limited resources (research and production investment, talent, AI training time and dataset breadth, power consumption) to distribute between two systems (vision and lidar), and one of the systems would contradict the other in dangerous driving conditions, it's smarter to just max out vision and ignore lidar altogether.
Why does this matter? You have to slow down in rain/snow/fog anyway, so only having cameras available doesn't hurt you all that much. But then in clear weather lidar can only help.
> lidars don't work well in rain/snow/fog.
Neither do cameras, or eyeballs.
When it's not safe to drive, it's not safe to drive.
I've been in zero-road-speed whiteout conditions several times. The only move to make is to pull to the side of the road without getting stuck and turn on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
Limited resources? Billions per year are being thrown at the base technology. We have the capital deployed to exhaust every path ten times over.
The Swiss cheese model would like to disagree.
This is silly. Cameras are cheap. Have both. Sensors that behave differently in different conditions are not an exotic new problem. The Kalman filter has existed for about a billion years, and machine-learning filters do an even better job.
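A minimal sketch of what "have both" looks like in practice: inverse-variance (Kalman-style) fusion of two range measurements. A sensor degraded by the current conditions just reports a larger variance and gets down-weighted; it never has to "contradict" the other. All the numbers are made up for illustration.

```python
def fuse(z1: float, var1: float, z2: float, var2: float):
    """Inverse-variance fusion of two measurements of the same quantity.

    Returns the fused estimate and its variance. The fused variance is
    always <= the better sensor's variance, so adding a sensor can only help.
    """
    w1, w2 = 1 / var1, 1 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    return z, 1 / (w1 + w2)

# Clear weather: lidar is precise, camera depth is coarse.
z, var = fuse(z1=50.2, var1=0.01, z2=48.0, var2=4.0)
print(f"clear: {z:.2f} m (var {var:.4f})")   # hugs the lidar reading

# Heavy rain: lidar returns are noisy, so its variance is inflated.
z, var = fuse(z1=55.0, var1=25.0, z2=48.0, var2=4.0)
print(f"rain:  {z:.2f} m (var {var:.4f})")   # leans on the camera
```

This is the core of the "one sensor would contradict the other" objection dissolving: a properly weighted filter never averages a trusted sensor against a blinded one on equal terms.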
The reasoning is cynical but sound. If the system uses only the sensing modes people have, it will make the mistakes people do. If a jury thinks "well, I could have made that mistake too!", you win. It doesn't matter if your system has fewer accidents if some of the failure modes are different from human ones, because the jury will think "how could it not figure that out?"
Until a lawyer points out that other cars see that. My car already has various sensors and, in manual driving, sounds alarms if there is a danger I seem not to have noticed. (There are false alarms, but most of the time I did notice and probably should have left more safety margin even though I wouldn't have hit it.)
Also, regulators gather statistics, and if cars with a given sensor do better, they will mandate it.
I don't think that's the reasoning.
The reasoning was simply that LIDAR was (and incorrectly predicted to always be) significantly more expensive than cameras, and hypothetically that should be fine because, well, humans drive with only two eyes.
Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.
Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have solved it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can be less reliable.
> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.
And, less excusably, ignorant of how incredible human eyes are compared to small-sensor cameras, in particular their high dynamic range in low light with fast motion. Every photographer knows this.
There certainly is an ongoing miscalculation regarding human intelligence and, consequently, empathy.
IMHO not using lidars sounds like a premature optimisation and a complication, with a level of hubris.
This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.
Eh, I think ‘miscalculation’ might be giving too much credit about good intentions.
He wanted (needed?) to get on the self-driving hype train to pump up the stock price, knew that at the time there was zero chance they could sell it at the price point lidar, or even other effective sensors (like radar), required, and sold it anyway at the price point people would buy, even though it was never plausibly going to work at the level being promised.
There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.
Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.
I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!
[dead]
This is a new and flawed rationale that I haven't heard before. Tesla cameras are worse (lower resolution, sensitivity, and dynamic range) than human eyes and don't have "ears" (microphones).
Very recent issue with Waymo: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-.... This is 17 years after they bet the farm on LIDAR, with no signs it's ever going to be cost-effective or better than multiple cameras with millisecond reactions and 360-degree coverage that never get tired, drunk, or distracted, backed by other cheaper sensors and a NN trained on billions of real-world data points.
There is also a report from the same flooding in LA of a Waymo driving into a flooded road and getting stuck.
They might have flipped a switch after that, causing this.
Tesla does not handle rain well either. This is not a LIDAR problem, it is a problem with self driving cars in general.
That's an example of it failing safe. I'd rather it did that than drive me into a sinkhole because it thought it was a puddle.
Ok, so Waymo is useless in the rain then; kind of limiting. But at least that 0.000000000001% of the time it actually is a sinkhole, you won't damage the bumper.
I'd rather a Waymo be useless in the rain rather than a Tesla be actively dangerous and likely to kill me.
Tesla ""autopilot"" fatalities: 65
Waymo fatalities: 0
Autopilot isn't Full Self-Driving (FSD); most cars these days ship with smart cruise control (which is basically what Autopilot is). Do you have fatality statistics for FSD?
If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.
Dude, that's not a 'puddle' as the article claims; that's a body of water where it's not even visually obvious whether it's safe to drive through. Maybe I'm a bad driver, but I'd hesitate to drive through that in a small car myself.
>A vehicle got stuck trying to figure out an obstacle so sensors with less information are better than sensors with more information.
Pretty hard to do if your whole selling point is ‘better and safer than human’ however?
Yea, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there weren't problems with the camera-only approach, but there are still examples of it failing in silly ways. And if I remember correctly, they also removed their other superhuman sensor, the radar, in newer models; the one that in certain conditions could sense multiple cars ahead by bouncing the signal underneath other cars.
Considering cameras can create reliable enough distance measurements AND also handle all the color reception needed for legally driving on roads, it was always a ridiculous idea by a certain set of people that lidar is necessary.
No, cameras cannot create reliable distance measurements in real-world conditions. Parallax is not a great way to measure distance for fast, unpredictably moving objects (such as cars on the road). And dirt or misalignment can significantly reduce accuracy compared to lab conditions.
Note that humans do not rely strictly on our eyes as cameras to measure distances. There is a huge amount of inference about the world, based on our internal world models, that goes into vision. For example, if you put us in a false-perspective or otherwise highly artificial environment, our visual acuity goes down significantly; conversely, people with a single eye (so no parallax-based measurement ability) still have quite decent depth perception compared to what you'd naively expect. Not to mention, our eyes are kept very clean and maintain their alignment to a very high degree of precision.
Stereo cameras are useless against repeating patterns. They easily match neighboring copies. And there are lots of repeating or repeating-like patterns that computers aren't smart enough to handle.
You can solve this by adding an emitter next to the camera that does something useful, be it just beaconing lights or noise patterns or phase synced laser pulses. And those "active cameras" are what everyone call LIDARs.
There is plenty of evidence showing that cameras alone are not safe enough, and even Tesla has realized that removing radar to save cost was a mistake.
Just say Tesla, why censor yourself.
I have a suspicion here on HN. When criticizing big tech, especially Google and FB, at a certain time of the day a specific cohort comes online and downvotes. Suspiciously, that is a time when one could conclude, that now people in the US start working or come online. Either fanboys, employees or an organized group of users trying to silence big tech criticism.
I have no proof of course and it might be coincidence, or just difference of mindset between US citizens and Europe citizens. It happened a few times already and to me looks sus.
But if they actually read and not just ctrl+f <company name>, then of course not writing the company name, but hinting at it in an obvious way is no more helpful either.
I have seen this happening multiple times, some to fairly reasonable comments with a just tiny negative tone.
There is also flagging abuse which effectively kills the comment /post.
It's been my experience that hn and reddit have a very high overlap in audience these days. The jerrybreakseverything crowd. Anything anti-tesla, anti-grok, is applauded.
> Since lidar has distance information and cameras do not, it was always a ridiculous idea by a certain company to use cameras only
Human eyes do not have distance information, either, but derive it well enough from spatial (by ‘comparing’ inputs from 2 eyes) or temporal parallax (by ‘comparing’ inputs from one eye at different points in time) to drive cars.
One can also argue that detecting absolute distance isn't necessary to drive a car. Time-to-contact may be more useful. Even detecting only a "change in bearing" can be sufficient to avoid collision (https://eoceanic.com/sailing/tips/27/179/how_to_tell_if_you_...)
Having said that, LiDAR works better than vision in mild fog, and if it’s possible to add a decent absolute distance sensor for little extra cost, why wouldn’t you?
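The time-to-contact idea in the comment above can be made concrete: for a directly approaching object, the ratio of its angular size to that size's rate of change equals range over closing speed, with no absolute distance ever measured. The numbers below are illustrative.

```python
# Sketch of the time-to-contact (TTC) trick: angular size theta and its
# rate of change theta_dot give Z/v directly, without knowing Z or v.

def time_to_contact(theta, theta_dot):
    """TTC = theta / d(theta)/dt for a directly approaching object.

    For an object of width W at range Z closing at speed v:
    theta ~= W/Z and theta_dot = W*v/Z**2, so the ratio is Z/v.
    """
    return theta / theta_dot

# Car 2 m wide, 50 m away, closing at 10 m/s:
W, Z, v = 2.0, 50.0, 10.0
theta = W / Z                 # 0.04 rad (small-angle approximation)
theta_dot = W * v / Z**2      # 0.008 rad/s
print(time_to_contact(theta, theta_dot))  # 5.0 seconds, i.e. Z/v
```

Note that the object's true width W cancels out, which is exactly why a monocular sensor can estimate TTC without any calibration to absolute scale.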
Human/animal vision uses way more than parallax to judge distances and bearings - it uses a world model that evolved over millions of years to model the environment. That's why we can get excellent 3D images from a 2D screen, and also why our depth perception can be easily tricked with objects of unexpected size. Put a human or animal in an abstract environment with no shadows and no familiar objects, and you'll see that depth perception based solely on parallax is actually very bad.
Human eyes are much better than cameras at dealing with dynamic range. They’re also attached to a super-computer which has been continuously trained for many years to determine distances and classify objects.
I don't like the comparison between machines and humans. Humans don't travel around at 100mph in packs of other humans. Why not use every sensor type at our disposal if it gives us more info to make decisions? Yes, I understand it's more complicated, but we figure stuff out.
Let me know when you have a camera package with human eye equivalency.
I'll preface by saying lidar should be used with autonomous vehicles.
Individual cameras don't have distance information, but you can easily calibrate a system of cameras to give you distance information. Your eyes do this already, albeit not quantitatively. The quantitative part comes from math our brains aren't set up to do in real time.
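That quantitative part is a one-line formula for a calibrated, rectified stereo pair: depth is focal length times baseline over disparity. The rig parameters below are invented for illustration.

```python
# Stereo triangulation sketch: Z = f * B / d for a rectified camera pair,
# where f is focal length in pixels, B the baseline in meters, and d the
# disparity (horizontal pixel shift of the same point between images).

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth in meters from disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("zero/negative disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# 700-pixel focal length, 12 cm baseline, 8-pixel disparity:
print(stereo_depth(700, 0.12, 8))  # 10.5 m

# Error grows with range: at 1-pixel disparity the same rig reads 84 m,
# so a half-pixel matching error there roughly doubles the estimate.
```

The closing comment hints at why lidar advocates are unimpressed: disparity shrinks with range, so stereo depth error grows roughly quadratically with distance, whereas lidar range error stays nearly constant.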
It was cost, wasn't it?
If this lowers lidar costs, and Tesla has spent all this time refining the camera technology, now have both.
Use both.
The mind salivates at the idea of sub-$100 and soon after sub-$10 Lidar. We could build spatial awareness into damn near everything. It'll be a cambrian explosion of autonomous robots.
RIP to every single camera in existence if that happens. Lidar is notorious for damaging camera lenses.
I had to look this up, because I had never heard of it. How could a lens be damaged by infrared lasers?
It turns out it’s the sensors that are easily damaged by high powered lidar lasers.
https://spectrum.ieee.org/amp/keeping-lidars-from-zapping-ca...
There are complaints that some Volvo cars damaged iPhone cameras. It's not even clear if Apple covers those under warranty. We've seen car-review YouTubers get their iPhone camera sensors damaged on camera (captured by a second camera) while reviewing.
One such review where Marques shows how it happened to his phone
https://youtube.com/shorts/oeHtfMFdzIY?si=cANJDT5BLfdd9ZUT
One highlight from the video: he says most cameras are fine, it's just iPhones that don't have a very good IR filter. Which sounds correct; in my experience most cameras have pretty substantial IR filters that have to be removed if you want to photograph IR.
I also wonder if the smaller sensor size on phones contributes, since the energy is being focused onto a smaller spot.
Either way, for that to happen he was filming the LIDAR while active, for a decent amount of time, from right next to the car. I assume under normal conditions it wouldn't be running constantly while the vehicle is stationary?
If this is true, the eyes are no better. Especially as it can't be seen, who will look away? And at night, with open irises?
There was someone who had his eyes damaged by sitting next to a heater.
> The biggest concern is not photographic cameras but rather the video cameras mounted on autonomous cars to gather crucial information the cars need to drive themselves.
So they don't care if that breaks my phone camera? Wtf?
The Epstein class's argument is: if you're not my property, why should we care?
Is there any deeper study on long term effects regarding retinal damage?
I would imagine, even with safe dosages, there would be some form of cumulative effect in terms of retinal phototoxicity.
More so if we consider the scenario that this becomes a standard COTS feature in cars and we are walking around a city centre with a fleet of hundreds of thousands of these laser sources.
Some lidar units simply use the wavelength that the human eye is opaque to.
The grandparent comment is about camera lenses with little to no near infrared cutoff filter. Some older iPhones were like that and that was the original breaking story.
> human eye is opaque to
Absorbing the laser isn't necessarily any good. Very hypothetically it could lead to cataracts.
The Sun emits much stronger IR, near-IR, and UV.
Absolutely, and it is a major cause of cataracts. Nearly 100% of people with lenses in their eyes will get cataracts eventually if they are ever exposed to unfiltered sunlight.
And staring directly at the sun is not recommended.
I suspect we can't quantify human eye-damage enough to easily rule-out chronic effects... until it's too late for the patient.
iPhones have had lidar for years, have cameras been affected?
Other cameras. When the lidar laser points at the camera sensor.
Could be a gain for privacy ;-)
TIL!
Thanks! What a headache
What? Please explain!
Sensor damage
https://youtube.com/shorts/oeHtfMFdzIY?si=hpLBgqom_kHVPuhL
There are already very good sub-$100 lidars, especially for 2D since they were made en masse for vacuum cleaners. E.g. the LD19 or STL-19P as they're calling it now for some reason. You need to pair them with serious compute to run AMCL with them, plus actuation (though ST3215s are cheap and easy to integrate now too) and control for that actuation which also wants its own compute, plus a battery, etc. the costs quickly add up. Robotics is expensive regardless of how cheap components get.
RIP to humans under authoritarian regimes?
And, I guess, even more advanced surveillance.
I think we’re well past the point where mass surveillance was a technical challenge. Mass oppression through autonomous violence however…
Even back when Snowden was current news, we'd reached the point where laser microphones could cover every window in London for a bill of materials* less than the annual budget of London's police force.
* I have no way to estimate installation costs, but smartphones show that manufacturing at this scale doesn't need to increase total cost 10x more than the B.o.M.
https://en.wikipedia.org/wiki/Optimus_(robot)
LIDAR would be preferrable to cameras when it comes to privacy actually
People saying LIDARs can't recognize colors or LIDARs can't take pictures don't know what they are talking about.
They're just fancy cameras with synced flashes. Not Star Trek material-informational converting transporters. Sometimes they rotate, sometimes not. Often monochrome, but that's where Bayer color filters come in. There's nothing fundamentally privacy preserving or anything about LIDARs.
I don't think it makes a difference. Dense lidar gives you more information than 2D colour imagery.
There are SLAM cameras that only select "interesting" points, which are privacy preserving. They are also very low power.
I’d definitely feel much better if most cameras in the world were replaced by LIDAR. I feel like it would be much tougher to have a flawless facial recognition program with LIDAR alone
Who needs facial recognition if you can identify people based on gait?
Gait recognition is almost entirely hype. Sure it works to tell the difference between n = 10 people but so what, you can tell the difference between a group of 10 people by what kind of shoes they are wearing.
Judicial systems where a 6% error rate is deemed way too high to lead to a conviction.
Then you combine it with some other technique, eg tracking daily routes of individuals, to lower the error rate. You only need a handful of bits to distinguish all inhabitants of the average city. But imho that error rate would likely be low enough for some judge to authorize more invasive surveillance of suspects thus identified.
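The "handful of bits" estimate is easy to check: distinguishing N individuals requires about log2(N) bits of identifying signal, however those bits are gathered (gait, routes, shoes).

```python
# How many bits of identifying signal does it take to single out one
# person from a population of size n? ceil(log2(n)).
import math

for n in (10_000, 1_000_000, 8_000_000_000):
    print(n, math.ceil(math.log2(n)))
# 10_000 people      -> 14 bits
# 1_000_000 people   -> 20 bits (a large city)
# 8_000_000_000      -> 33 bits (the whole planet)
```

So even a noisy technique contributing a few bits, combined with a few others, is enough to narrow a city down to one person.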
The minute internet became widespread it was game over.
Pros and cons. :/
It'll never happen, but we need a bill of rights for privacy. The laypeople aren't well-versed or pained enough to ask for this, and big interest donors oppose it.
Maybe the EU and states like California will pioneer something here, though?
Edit: in general, I'm far more excited by cheap lidar tech than I am afraid of the downsides. We just need to be vigilant.
The EU already has. GDPR and the AI Act puts a lot of limits on what you can do in the open space, although it doesn't always go far enough.
And barely gets enforced
https://en.wikipedia.org/wiki/GDPR_fines_and_notices
Top 5 fines:
1 - Meta - Ireland - €1.2 billion
2 - Amazon Europe - Luxembourg - €746 million
3 - WhatsApp - Ireland - €225 million
4 - British Airways - UK - £183 million
5 - Google - France - €60 million
I wish every law barely got enforced this way.
I'd say the numbers listed here prove the GPs point of poor enforcement. The largest fine is roughly 0.97% of Meta's 2023 revenue, the equivalent of a $600 fine for somebody making 60k / year. It's a tiny-tiny cost of doing business at best, definitely not a deterrent, given Meta's blatant disregard for GDPR since then.
> the equivalent of a $600 fine for somebody making 60k / year
I don't know about you, but on that income I would certainly not brush off such a fine as a "cost of doing business". Would it cause me financial trouble, or would it force me to sacrifice other expenses? Absolutely not. But would I feel frustrated at having to pay it, feel stupid for my mistake, and do my best to avoid it in the future? Absolutely yes.
My bad, a better analogy would be a dealer making 60k / year selling drugs, gets caught by police and is fined $600. I wouldn’t expect them to change much.
Fair enough. In that sense I do see value in the analogy.
Would you still do your best to avoid it if that involved taking a pay cut of more than $600/year?
1% of Meta's global revenue is a tiny-tiny cost of doing business? At that point, I think I can stop even trying to argue here. It's a massive fine any way you put it. Especially when you consider the ceiling hasn't been reached and non compliance is more and more costly by design.
Their net profit was $60billion in 2024. This is peanuts. It can fluctuate by multiples of this fine in a month, depending on whether or not they've had a bad or good month, nevermind year. This pretty much is just a cost of doing business.
It's not even 1% of their annual revenue, let alone the entire multi year period they've been in breach before and since. It's nothing to them.
The interesting part is that it keeps going up. You seem to believe we have somehow reached a cap where Meta can just expense it as a cost of doing business. That's not how European law works. The fine maximum is far higher and repeated non compliance keeps making the fines higher and higher. It's a ladder not a sizing precedent.
The maximum GDPR fine is 4% of global revenue in the previous year. If a company has a 30% profit margin, then it can, in theory, treat it as a cost of doing business, indefinitely.
pretty pathetic, but people keep insisting you can regulate capital
Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.
Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?
The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.
And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.
And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1 000 000 residents by 1CE. Again, without becoming some Hobbesian nightmare.
None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"
In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.
Humans have always done mass surveillance on eachother. You don't need technology for that.
At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.
Scale matters.
That's an incredibly bullshit argument to defend the indefensible.
Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.
I don't think a random internet comment proves anything about society at large.
People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.
Mass surveillance does not cause everyone to magically get along.
History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.
Police forces nor "neighborhood monitoring" are equivalent to mass surveillance though.
Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?
This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.
Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.
"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.
While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state or state-deputized enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or would simply come back to bite you later, because society did not have the surplus to justify paying nearly as many people to go around looking for deviance that could be leveraged to extract money. These people had way more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.
> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management
And they almost exclusively deal in things that historical societies didn't even bother to regulate.
You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state, or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are way more limited.
Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules a HOA is a paper tiger for the most part (not to say that they don't suck).
Are we sure these things aren’t damaging our eyes? It’s lasers shooting all over the place right?
When designed, built, installed and calibrated correctly, the power and wavelengths used are not considered harmful to humans.
Interestingly, there have been people in the LIDAR industry predicting costs like this for many years. I heard numbers like $250 per vehicle back in 2012 [1]
Of course, ambitious pricing like this is all about economies of scale - sensors that are used in production vehicles are ordered by the million, and that lowers the costs massively. When the huge orders didn't materialise, the economies of scale and low prices didn't materialise either.
[1] https://web.archive.org/web/20161013165833/http://content.us...
Also, 'Luminar Technologies, a prominent U.S. lidar manufacturer, filed for Chapter 11 bankruptcy in December 2025'. LIDAR is useful in a small set of scenarios (calibration and validation), but do not bet the farm on it or make it the centerpiece of your sensor suite.
Also, MicroVision, the company in OP's article bought the IP from Luminar. This feels like a circular venture capital scam. Luminar originally went public via SPAC and made a bunch of people very wealthy before ultimately failing.
The same Luminar from the Mark Rober video?
https://www.forbes.com/sites/bradtempleton/2025/03/17/youtub...
This is very wrong. LIDAR scanners have revolutionized surveying by enabling rapid, high-precision 3D mapping of terrain and infrastructure, capturing millions of data points per second. LIDAR can penetrate dense vegetation, allowing accurate ground-level mapping in forested or obstructed areas. Drone-mounted LIDAR has become very popular. Tripod-mounted LIDAR scanners are very commonly used on construction sites. Handheld LIDAR scanners can map the inside of buildings with incredible accuracy. This is very commonly used to create digital twins of factories.
And none of this is on the order of magnitude that consumer automotive would have.
The EU requires every new car to have Autonomous Emergency Braking. If LiDAR becomes cheaper than radar, this is a potential market of millions.
Lidar is critical for any autonomous vehicle. It turns out a very accurate 3D point cloud of the environment is very useful for self driving. Crazy, I know.
Useful but not at all required. Camera + radar is sufficient for driving, and camera + ultrasonic sensors is fine for parking.
Radar is just cheaper than the equivalent number of cameras and compute; it's also not really a strict requirement.
Look at how the current cars fuck up, it's mostly navigation, context understanding, and tight manoeuvres. Lidar gives you very little in these areas
All of the actually WORKING self driving systems use LIDAR. This is not a coincidence.
I work with programs approaching L3+ from L2, with the requirement that the system works on 99% of roads (not Tesla, before people start fixating on that).
We find that the cases where lidar really helps are in gathering training data, parking, and if focused enough some long distance precision.
None of these have been instrumental in a final product; personally I suspect that many of the cars including lidar use it for data collection and edge cases more than as part of the driving perception model.
Accidents are not normal driving situations but edge cases.
Waymo is the best current autonomous driving system and Waymo uses LIDAR. This is because LIDAR is an incredibly effective sensor for accurate range data. Vision and Radar range data is much less accurate and reliable.
Waymo uses LIDAR in the real-time control loop. It combines LiDAR, camera, and radar data in real time to build a 3D representation of the environment, which is constantly updated.
I fundamentally don't trust any level 4 system that doesn't use LIDAR
Like Waymo? (https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...) 17 years after betting the farm on LIDAR, the solution fails to navigate a puddle. Sorry, but they bet on the wrong technology; Tesla has overtaken them with a multi-camera and NN solution.
> Tesla has overtaken them with multi camera and NN solution.
Let me guess, you heard this from Elon?
Your conclusion from a single incident is a bad inference. One vehicle getting confused by a puddle (likely a sensor-fusion edge case or mapping artifact, not a fundamental LIDAR failure) doesn't indict the technology. Tesla's cameras have produced vastly more failures.
Waymo has driven tens of millions of autonomous miles with a serious injury/fatality rate dramatically lower than human drivers. The actual data shows the technology works. Tesla FSD still requires active driver supervision and is not legally or technically a robotaxi system. Comparing them as if they're at parity is wrong.
LIDAR gives direct metric depth with no inference required. Camera-only systems must infer depth from 2D images using neural networks, which introduces failure modes LIDAR doesn't have. Radar is very valuable when LIDAR and cameras give ambiguous data.
On what metrics has Tesla overtaken Waymo? Deployed robotaxi revenue miles? No. Disengagement rates? No published comparable data. Safety per mile in driverless operation? No.
A Tesla wouldn't stop for a puddle. Also, it's not locked to a small geofenced area (people have driven coast to coast without a single intervention on FSD, including parking spot to parking spot). When I can buy a Waymo vehicle that does this, then Waymo will have caught up with Tesla.
Wow, so it can cope with driving on the highway. That's the easy part.
Your puddle example is utterly irrelevant. Teslas are notorious for phantom braking. Robotaxis are very much locked to tiny geofenced areas. Some even shaped like a penis because Musk is such a child.
"people have driven coast to coast without a single intervention on FSD including parking spot to parking spot"
I find this claim very dubious. Prove it. Teslas never drive empty for a very good reason.
Err, they have lots of Model Ys in Austin as robotaxis right now with no drivers. I guess this is also 'dubious'. Look, it's clear you have a huge bias. I would urge you to read up on https://grokipedia.com/page/List_of_fallacies, otherwise your emotional responses will blind you to reality.
'MicroVision says its sensor could one day break the $100 barrier'. When an article says 'one day', read: not in the next decade.
Around a decade ago the nascent LIDAR industry boomed and dozens of startups emerged out of nowhere all racing to make cheap automotive grade LIDAR, and here we are.
Of course MicroVision is only claiming their LIDAR to be suitable for advanced driver assist, but ADAS encompasses a wide array of capabilities: basically everything between cruise control and robotaxis. So there's no definition of how much LIDAR you need to do the job, just however much you feel like. Tesla feels like none at all.
MicroVision has been saying that for half a decade. Products? Nowhere to be found.
I wonder if this could be adapted to the vtuber market. Saw a vtuber body tracker being marketed at $11k recently.
> laser pulses
> phased-array
I'm not well versed in RF physics. I had the feeling that light-wave coherency in lasers had to be created at a single source (or amplified as it passes by). This is the first time I've heard about phased-array lasers.
Can someone knowledgeable chime in on this?
The beam is split and re-emitted in multiple points. By controlling the optical length (refractive index, or just the length of the waveguide by using optical junctions) of the path that leads to each emitter, the phase can be adjusted.
In practice, this can be done with phase change materials (heat/cool materials to change their index), or micro ring resonators (to divert light from one wave guide to another).
The beam then self-interferes, and the resulting interference pattern (constructive/destructive depending on the direction) is used to steer the beam.
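To make the self-interference idea concrete, here's a toy numpy sketch of a uniform linear array (emitter count, spacing, and angles are all illustrative, not from any real device): applying a linear phase ramp across the emitters moves the constructive-interference peak to the chosen angle.

```python
import numpy as np

# Toy model: n coherent emitters in a line, spaced d wavelengths apart.
# A linear phase ramp across the emitters steers the constructive lobe.
def array_factor(theta_deg, n, spacing_wl, steer_deg):
    k = np.arange(n)
    # Phase programmed into each emitter to aim at steer_deg
    steer = -2 * np.pi * spacing_wl * k * np.sin(np.radians(steer_deg))
    # Geometric phase each emitter contributes toward direction theta_deg
    geom = 2 * np.pi * spacing_wl * k * np.sin(np.radians(theta_deg))
    return abs(np.exp(1j * (geom + steer)).sum()) / n

angles = np.linspace(-90, 90, 1801)
af = [array_factor(a, n=16, spacing_wl=0.5, steer_deg=20) for a in angles]
peak = angles[int(np.argmax(af))]
print(f"beam peaks near {peak:.1f} degrees")  # close to the 20 degree setting
```

Change `steer_deg` and the peak follows it, no moving parts involved, which is the whole appeal of solid-state scanning.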
You are right that a single source is needed, though I imagine that you can also use a laser source and shine it at another "pumped" material to have it emit more coherent light.
I've been thinking about possible use-cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite communication, or drone-to-drone in high-EMI settings (battlefield with jammers). This would make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)
I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).
NB: just my layman's understanding
In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder for light frequencies than for radar. I'm not even sure we can measure the phase etc. of a light beam, and even if we could, the Nyquist frequency is incredibly high: 2x the carrier frequency takes us into PHz territory.
There might be something cute you can do with interference patterns, but no idea about that. We do sort of similar things with astronomical observations.
A phased array is an antenna composed of multiple smaller antennas within the same plane that can constructively/destructively aim its radio beam within any direction it is facing. I'm no radio engineer but I think it works via an interference pattern being strongest in the direction you want the beam aimed. This is mostly used in radar arrays though I suppose it could work with light too since it is also a wave.
Not an expert, but the main challenges with laser coherency arise when shaping the output using multiple transmitters.
For lidar you transmit a pulse from a single source and receive its reflection at multiple points. Mentioning phased array with lidar almost always means receiving.
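The pulse ranging itself is just round-trip timing. A minimal sketch (the echo time is an illustrative number, not from any particular sensor):

```python
# Round-trip timing: a pulse goes out, bounces back, and
# range = speed_of_light * round_trip_time / 2.
C = 299_792_458.0  # m/s

def tof_range_m(round_trip_s):
    return C * round_trip_s / 2

# An echo arriving ~1.33 microseconds after the pulse left
# corresponds to a target roughly 200 m away.
print(tof_range_m(1.334e-6))
```

All the engineering difficulty lives in detecting that faint echo and timing it precisely, not in this arithmetic.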
Interesting to see the cost curve drop ... this always changes the market.
I have been watching the sensor space for a while. Cheap LIDAR units could open up weird DIY uses, not just cars. Regulatory and mapping integration will also matter; I tried to work with public datasets and it's messy. The hardware is only one part! But it's exciting to see multiple vendors in the space. Competition might push vendors to refine the software stack as well as the hardware. I'm keeping an eye on how these systems handle edge cases in bad weather, though. I don't think we have seen enough data yet...
> Cheap LIDAR units could open up weird DIY uses and not just cars.
Interestingly, there are already some comparatively cheap LIDAR units on the market.
In the automotive market, ideally you need a 200m+ range (or whatever the stopping distance of your vehicle is) and you need to operate in bright direct sunlight (good luck making an eye-safe laser that doesn't get washed out by the sun) and you need more than one scanning plane (for when the car goes over bumps).
On the other hand, for indoor robotics where a 10m range is enough and there's much less direct sunlight? Your local robotics stockist probably already has something <$400
Neato from San Diego developed a $30 (indoor, parallax-based) LIDAR about 20 years ago for their vacuum cleaners [1].
Later, improved units based on the same principle became ubiquitous in Chinese robot vacuums [2]. Such LIDARs, and similar-looking but more conventional time-of-flight units, are sold for anywhere between $20-$200, depending on the details of the design.
[1] https://scholar.google.com/scholar?q=%22A+Low-Cost+Laser+Dis... [2] https://github.com/kaiaai/awesome-2d-lidars/blob/main/README...
Sounds like the quality isn't all that great, but LD06 sensors look like they're about $20, and someone who works on libraries in this space suggested the STL27L, which seems to be about $160. Here's an outdoor scan from it: https://sketchfab.com/3d-models/pidar-scan-240901-0647-7997b...
Not sure if the LD06 is a scanner like this or just a single line (like you'd use for a cheaper robot vac).
[dead]
@dang .... do these comments seem organic to you? old accounts with almost zero karma going out of their way to use the same verbiage to compliment waymo 18 minutes after an article gets posted? .... dead internet at work.
Please don't post like this. If you suspect something, please email us (hn@ycombinator.com) with links to specific comments. The guidelines are clear about this:
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Anytime a Tesla or Elon related article is posted, it gets a barrage of negative comments, usually FUD-like. Any neutral or positive comment gets downvoted heavily. A bit suspicious to say the least; the pattern is very clear. They are not doing it very well though, they should be a bit more nuanced.
There is no evidence of any such organised campaign. The critical comments we see against that company and person are generally from known, established HN users, and align with frequently-expressed sentiments among the general public. And the complaint is just as often made that "anything remotely critical" about that company and person is flagged. If posts about the topic are being downvoted and flagged, it's mostly because that person and company are in the news so frequently that most commentary about them is repetitive, sensationalist and uninteresting, and thus off topic for HN.
What a great website. Thanks for the data! And good work
Or everyone is just tired of tesla and their stubborn camera only tech that will fail in higher autonomy cases?
No no it's the cabal...
Could be lurkers getting triggered.
There are laser measurers sold for a few bucks on Temu. Robot vacuums sold for a few hundred dollars have LIDARs that map out the room in seconds.
Is there any actual technical reason why automobile LIDAR should be expensive? Just combine visual processing with a single-point sampler that feeds it points of interest, and an accurate model of the surroundings will be built.
Most spinning robovac LIDARs are 2D. Most solid state robovac LIDARs are like 8x8 array of laser pointers.
Automotive LIDARs are more like 128x64 px for production models, or 1920x1080 px for experimental models, with industrial equivalents of GbE and/or HDMI outputs. Totally different technologies.
Probably one factor is range. The article talks about 200-300m range, a robot vacuum has maybe 10m best case?
For example, this one has 120m range with 1cm accuracy and it's 15 euros: https://www.temu.com/bg-en/-digital-laser-distance-meter-50m...
Is the 1 cm spec 1σ (or less) or worst-case? It’s a safety-critical application.
I know that automotive parts have a standard requirement to withstand 80°C (or 120°C for military use). A robot vacuum working in a living room can probably be made cheaper because it doesn't have to face as harsh an environment?
Also, range is probably a factor. In a living room, you probably need something like 20m max. Your car should "see" farther.
Sure, these are the assumptions, but silicon is silicon, copper is copper and solder is solder. They don't use easy-melting electronics in vacuums and hardened stuff in cars; the tech is about the same unless it is supposed to work in highly radioactive environments. The plastics are different, but car interiors are full of plastics, so it's unlikely that the cost of the temperature-resistant plastics needed for this is more than a cupholder's.
As for the range, again, pretty powerful lasers are sold for sub-$10 prices at retail. I am sure there must be higher calibration and precision requirements as the distance increases, but are they really orders of magnitude higher? A 120-meter laser measurer with 1cm accuracy is 15 euros on Temu, and that thing is a handheld device with an LCD screen and a battery. How much distance do you actually need?
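One thing the back-of-envelope misses is the timing side. A rough sketch of the timing resolution a time-of-flight design needs (pure arithmetic, not tied to any particular product):

```python
C = 299_792_458.0  # m/s

# Range resolution is set by echo timing: delta_range = C * delta_t / 2,
# so resolving 1 cm means timing the return to roughly 67 picoseconds.
def timing_needed_ps(range_resolution_m):
    return 2 * range_resolution_m / C * 1e12

print(f"{timing_needed_ps(0.01):.1f} ps for 1 cm")
print(f"{timing_needed_ps(0.10):.1f} ps for 10 cm")
```

Picosecond-class timing at long range, in sunlight, at eye-safe power, across automotive temperatures is where the cost tends to hide rather than in the laser diode itself.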
Not only that but vibrations play a big part as well, especially on ICE vehicles.
Vibrations are surely an issue for electromechanical systems, but hardly for electronics. There are plenty of cheap electronic accessories for cars, and you can observe that those keep functioning for years.
Please keep politics out of it.
ICE = internal combustion engine
To add to the rest of the comments, reliability standards also add cost. The scale is different, but compare a car bolt vs. a manned spacecraft's bolt.
Radar is extremely expensive, and lidar is just below that.
Glad to see someone lowering the cost of this technology, and hope to see lots of engineers using this tech as a result.
We might even see a boom in LIDAR tech as a result
What makes you say radar is extremely expensive? Virtually every car from the last decade has at least one, many have two or more. They’re barely more than a PCB and a radar ASIC.
If you want to compete with LIDAR, you need high resolution 4D (range, velocity, azimuth, and height) RADAR. Those are usually phased arrays with expensive phase sensitive electronics, and behind that a chip that can do a lot of Fourier transforms very quickly.
The cheap RADAR devices you're talking about usually only output range and velocity, sometimes for a handful of rather large azimuth slices. That doesn't compete with LIDAR at all.
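A minimal numpy sketch of that FFT pipeline, with a synthetic single-target signal (the cube size and bin positions are made up for illustration); a real 4D imaging radar repeats this per antenna pair and adds angle FFTs on top:

```python
import numpy as np

# FMCW processing: within one chirp the beat frequency encodes range
# (fast-time FFT); phase progression across chirps encodes velocity
# (slow-time FFT). Synthetic single target, made-up bin positions.
n_samples, n_chirps = 256, 64
fast = np.arange(n_samples)[None, :]
slow = np.arange(n_chirps)[:, None]

range_bin, doppler_bin = 40, 10  # illustrative target location
cube = np.exp(2j * np.pi * (range_bin * fast / n_samples
                            + doppler_bin * slow / n_chirps))

# Two FFTs give the range-Doppler map; a 4D imaging radar repeats this
# per antenna pair and adds azimuth/elevation FFTs across the array.
rd_map = np.abs(np.fft.fft2(cube))
d, r = np.unravel_index(np.argmax(rd_map), rd_map.shape)
print(d, r)  # recovers the target's (Doppler, range) bins
```

The FFT math is cheap; what drives the cost is doing it across many phase-coherent receive channels fast enough for a moving car.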
Below is one of the comments posted to the original article; reading it makes me think that most of the article has been regurgitated by some AI:
>"This misleading article contains numerous factual errors regarding automotive lidar. Here are the most glaring:
There are multiple manufacturers, including Hesai, that use mechanical means for at least one scan axis and are already sold for a fraction of the "$10k - $20k" price noted by the author. Luminar itself built this class of scanners before going bankrupt.
Per Microvision's own website, the Movia-S does not use a phased array and also does not have a range anywhere near 200m.
Velodyne and Luminar do not even exist as companies anymore. Both have gone bankrupt and been acquired by competitors."
Is this human-safe at these volumes? There was a time you could get your feet sized by putting them into an X-ray box at the shoe store. Those were removed from stores once the harm was known.
Well, the energy levels used in those devices should be minuscule, and the wavelengths used are well studied. The problem with X-rays was the lack of studies on health effects, and of regulations based on those effects. I think, since that time, we've studied radiation (be it light, RF or other parts of the spectrum) much more. There is indeed a possibility that we're overlooking some bio-electromagnetic interaction effects; for instance, there is now some evidence that LED lights might not be harmless. But again, it's not that they affect biological structures somehow; it's the lack of spectral components that has some effects. It is an interesting topic to research. But the lidar "should" be safe.
What is this author even doing with these numbers?
can I buy it on digikey yet?
How could I buy one?
It might, but comma.ai proves that lidar is a red herring, which is further supported by the fact that Waymo is able to drive vision-only if necessary.
> comma.ai proves that lidar is red herring
I mean, it doesn't. If you actually look at it, comma.ai proves that level 2 doesn't require lidar. That's not the same as full-speed safe autonomy.
Whilst it is possible to drive vision-only (assuming the right array of cameras, i.e. not the way Tesla has done it), lidar gives you a low-latency source of depth that can correct vision mistakes. It's also much less energy-intensive for working out whether an object is dangerous and on a collision course.
To do that in vision, you need to work out what the object is (i.e. is it a shadow?), then you have to triangulate it. That requires continuous camera calibration, and isn't all that easy. If you have a depth "prior" (i.e. yes it's real, yes it's large, and yes it's going to collide), it's much, much simpler to use vision to work out what to do.
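For a sense of why the triangulation route is fragile, here's the basic stereo relation with illustrative numbers (not any particular camera rig):

```python
# Stereo triangulation: depth = focal_length * baseline / disparity.
# At long range the disparity is tiny, so a single pixel of matching
# error swings the depth estimate by meters; a lidar prior sidesteps that.
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f_px, base_m = 1000.0, 0.3  # illustrative rig: 1000 px focal, 30 cm baseline
print(stereo_depth_m(f_px, base_m, 10.0))  # 30.0 m
print(stereo_depth_m(f_px, base_m, 9.0))   # ~33.3 m from one pixel of error
```

The error grows with the square of distance for a fixed pixel error, which is exactly the regime (highway stopping distances) where you most need accurate depth.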
It's fair to point out that comma.ai is an SAE level 2 system; however, it's not geofenced at all, and operating without a geofence is an SAE level 5 requirement. But really that brings up the fact that SAE's levels aren't the right ones, merely the ones they chose to define since they're the standards body. A better set of levels is the seven I go into in more detail on my blog.
As far as distinguishing shadows on the road, that's what radar is for. Shadows on the road as seen by the vision system don't show up on radar as something the vehicle will run into.
Your autonomy scale is pretty arbitrary and encodes assumptions about the underlying technology and environments the vehicle is supposed to implement and operate in.
The SAE autonomy scale is about dividing responsibility between the driver and the assistance system. The lowest level represents full responsibility on the driver and the highest level represents full responsibility on the system.
If there is a geofenced transportation system like the Vegas loop and the cars can drive without a human driver, then that is a level 5 system. By the way, geofencing is not an "SAE level 5" requirement. Geofencing is a tool to make it easier to reach requirements by reducing the scope of what full autonomy represents.
I saw a Waymo in Seattle, today. If Waymo can get Seattle right, that gives me a lot of confidence that their stack is very capable of difficult road conditions.
Note: I have not had the pleasure of riding in one yet, but from what my friend in SJ says, it’s very convenient and confidence-inspiring.
I have had the pleasure of riding a few times in San Francisco.
The drive was delightful and felt really safe. It handled the SF terrain, traffic, and mixed traffic like trams very well.
I wouldn't trust a self-driving Tesla (or any camera-only system) though!
I took the Waymo from San Jose airport to home on the peninsula. It took the 101 highway back for the most part, driving very conservatively at 55-65 mph in the rightmost lane. It still has a few quirks though. When there aren't any cars around it will speed up to 65 mph, but at on-ramps it will slow down to 55 and then speed back up once past. It will get stuck behind slow drivers in the rightmost lane, patiently following a few car lengths behind them. On the plus side, the lidar stack's field of view, as shown on the internal display, seems to see pretty far down the highway.
Tesla doesn't have Lidar?
No. They don't even have radar, camera is all you need, as per Elon.
Even more fury-inducing, they don't even use the ultrasonic parking sensors on cars that physically have them. They disabled them to move to a vision-only stack that is nowhere near as accurate or as good, and which categorically cannot tell that the ground truth has changed in its blind spot. But hey, all _people_ need are two cameras, right?
hooboy, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
That's wild!
Why wouldn't you trust a Tesla? Millions of people let their Tesla drive them all over the USA (not geofenced like Waymo) without touching the wheel, parking spot to parking spot, every day. Have you tried it?
Maybe because of the multiple investigations Tesla has currently due to crashes, deaths, injuries, etc. all caused by "whoops our cameras were fooled by some glare/fog and accelerated into a truck/pole"
Those are mainly Autopilot incidents, which people conflate with FSD, and a high percentage are human-caused accidents (Autopilot requires full attention and the driver is liable).
Why does Tesla ship a feature called "autopilot" which kills you if you use it instead of "FSD"?
Autopilot is Tesla’s brand name for adaptive cruise control with lane centering. This is a common feature available on a wide range of vehicles from nearly every major manufacturer, though marketed under different names (e.g., ProPilot, BlueCruise).
Drivers can and do misuse adaptive cruise control systems, sometimes with fatal consequences. Memes aside, there is no strong evidence that fatal misuse occurs more frequently by owners of Tesla cars than with comparable systems from other brands.
This perception reflects the Baader–Meinhof phenomenon, more commonly known as the frequency illusion. Nobody is collecting statistics for other brands, so it’s assumed the phenomenon doesn’t occur.
A similar pattern occurred with media coverage of EV fires. Except in this case, good statistics exist which prove the opposite: ICE vehicles catch fire more often than EVs.
> Why wouldn't you trust a Telsa, millions of people let there Tesla drive them all over USA (not geofences like Waymo)
I own a Tesla and paid about $10K for the full self driving capability a few years ago. Yeah, I would not trust a Tesla to drive me from airport to my house. There is a reason Tesla is still stuck at level 2 autonomy certification and not 3, 4 or 5.
I would agree for most Teslas on the road. However, the very latest (HW4) cars are significantly better at FSD where I would nearly trust it now. Most of those older (pre-2025?) cars will not have their hardware upgraded so they'll still have FSD that drives like an idiot!
Because it is not real autonomous driving. Being liable for software that you can neither verify nor trust is THE dealbreaker. Once Tesla says "We are liable for all accidents with FSD" with higher-level autonomous driving, this game changes. But Waymo is just way more reliable.
Will Musk backtrack on the whole "CV is enough, that's how humans do it" stance, now that the price has dropped this low?
To be fair, Musk was only parroting what Karpathy was telling him, so you should ask him how self-driving cars are supposed to work with CV only.
[dead]
[dead]
Oh hell yeah, we can finally stop the braindead attempts to make a safe self-driving car with just cameras.
Tesla actually re-introduced radar sensors in HW4. https://www.teslarati.com/tesla-hardware-4-hd-radar-first-lo...
They might not use them for autopilot, but maybe for some emergency braking stuff, when everything else failed.
Is there anyone using only cameras except Tesla?
XPeng, Wayve, and aiMotive, to name three. Probably many others who claim to use LIDAR but don't actually rely on it. Because LIDAR is perceived as a prerequisite for autonomous safety, admitting to not needing it is a bad PR move, for now.
There is a massive technical difference between vision-first with LiDAR redundancy and no LiDAR at all, which is Tesla's approach. Those are not the same architecture, so claiming XPeng, Waymo, or aiMotive validate Tesla is technically misleading.
XPeng's system is sensor fusion; it is not camera-only. Waymo is even clearer: for them, LiDAR is not optional. aiMotive has now started to market camera-only, but it's experimental, with no production deployments.
Nope...
Yes, silly using just cameras. I mean, humans have Lidar sensors, that's why they can drive; why didn't we just copy that... oh wait.
In all seriousness though, Tesla are producing Cybercabs now which are a tenth the price of Waymo's and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
Humans also don't have wheels, but we build objects with wheels. It is as if we can build objects that don't resemble humans for specific purposes. Crazy...
> Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world.
My understanding is that cyber cabs still need safety drivers to operate, is that not the case?
Yes, but they are useless; they can't steer, hence why they have more accidents than humans per driven mile.
They have no steering wheel or pedals so no
> Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world.
Wait what? when did they actually enter mass production?
> I mean humans have Lidar sensors
Real-time SLAM is actually pretty good; the hard part is reliable object detection using just vision. Tesla's forward-facing cameras are effectively monocular, which means that it's much, much harder to get depth (it's not impossible, but moving objects are much more difficult to observe if you only have cameras aligned on the same plane with no real parallax).
Ultimately Musk is right: you probably don't need lidar to drive safely. But it's far simpler and easier to do if you have lidar. It's also safer. Musk said "lidars are a crutch" not because he is some sort of genius; it's been obvious since the mid-'00s (if not earlier) that SLAM-only driving is the way forward. The reason he said it is that he thought he could save money by not having lidar. The problem for him is that he didn't do the research to see how far away proper machine perception is from delivering the last 1% in accuracy needed to make vision-only safe and reliable.
This is a weirdly tired counterpoint that Elon and Elonstans like to bandy about as if it's an apples-to-apples comparison. Humans have an ultra-high-dynamic-range binocular vision system mounted on an advanced pan/tilt/swivel gimbal that allows for a great degree of freedom of movement, parallax effects, and a complex heuristic system for analyzing vision data.
The Tesla FSD system has... well, sure, a few more cameras, but they're low resolution, and in inconveniently fixed locations.
My alley has an occlusion at the corner where it connects to the main road: a very tall, very ample bush that basically makes it impossible to authoritatively check oncoming traffic to my left. I, a human, can determine that if I see the light flicker even slightly as it filters through the bushes, that the path is not clear: a car is likely causing that very slight change in light. My Tesla has no clue at all that that's happening. And worse, the perpendicular camera responsible for checking cross-traffic is mounted _behind my head_ on the b-pillar, in a fixed location that means that without nosing my car _into_ the travel lane, there is literally no way for it to be sure the path is clear.
This edge case is navigated near-perfectly by Waymo, since its roof-mounted lidar can see above and beyond the bush and determine that the path is clear. And to hit back on the "Tesla is making cheaper cars that can drive autonomously anywhere in the world": I mean, they still aren't? Not authoritatively. Not authoritatively enough that they aren't seeing all sorts of interventions in the few "driverless" trials they're doing in Austin. Not authoritatively enough in my own experience with Tesla FSD. It works well enough on the fat part of the bell curve, but those edges will get you, and a vision-only system is extremely brittle in certain conditions and with certain failure modes that a lidar/radar backup helps _mitigate_.
Moreover, Waymo has brought lidar development in-house, they're working to dramatically reduce their vehicle platform cost by reducing some redundant sensors, and they can now simulate a ground truth model of an absurd number of edge cases and odd scenarios, as well as simulate different conditions for real-world locations in parallel with their new world modeling systems.
None of which reads to me as "not going well for Waymo." Waymo completes over 450,000 fully autonomous rides per week right now. They're dramatically lowering their own barriers to new cities/geographies/conditions, and they're pushing down the cost per unit substantially. Yeah, it won't get to be as cheap as Tesla owning the entire means of production, but I'm still extremely bullish on Waymo being the frontrunner for autonomous driving for the foreseeable future.
Waymos are still making lots of errors that a human wouldn't (stopping in the middle of a road due to a puddle was a recent one: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-...). 17 years after betting on LIDAR, I think Tesla is ahead now in most respects. I could be wrong though; we will probably know by the end of this year.
> I think Tesla is ahead now in most respects
Do you actually own a Tesla? I do. With FSD. And let me assure you, you are very wrong.
Humans cannot drive safely. Human drivers kill someone every 26 seconds. Waymos have never killed a person.
Part of that is that humans are distractible, and their performance can be degraded in many ways, and that silicon thinks faster than meat.
But part of it is the sensor suite. Look at Waymo vs Tesla robotaxi accident rates.
> Yes, silly using just cameras, I mean humans have Lidar sensors, that why they can drive, why didn't new just copy that....oh wait.
Humans don't have wheels and cannot go 70MPH. Humans also don't have rear view cameras and cannot process video feeds from 8 cameras simultaneously. The point of these machines is to be better than humans for transportation. If adding LIDAR means that these vehicles can see better than humans and avoid accidents that humans do get into, then I for one want them in my vehicle.
I don't understand what you're saying.
Stereo-based depth mapping is kind of bad, especially if it is not IR-assisted. The quality you get from lidar out of the box is crazy good in comparison.
What you can do is train a model using both the camera and lidar data to produce a good disparity and depth map, but this just means you're using more lidar, not less.
>It all seriousness though, Tesla are producing cyber cabs now which are 10th the price of Waymo's and can drive autonomously anywhere in the world. I think we can see where this is going. (Hint: not well for Waymo)
This feels like a highly misleading claim that might technically be true in the sense that there are less restrictions, but a reduction in restrictions doesn't imply an increase in capability.
The comment about Waymo seems to be particularly myopic. Waymo has self driving technology and is operating as a financially successful business. There is no conceivable situation where the mere existence of competition with almost the same capabilities would shake that up. Why isn't it companies like Uber, who have significantly fallen behind, that are in trouble?
>Also the article is speculative 'MicroVision says its sensor could one day break the $100 barrier'. One day...
And so is the comment about Tesla cyber cabs.