Hacker News

Given that a good proportion of his success has rested on simplifying or commodifying existing expensive technology (e.g. rockets, and much of the technology needed to make them; EV batteries), it's surprising that Musk's response to lidar being (at the time) very expensive was to avoid it, despite the additional challenges this brought, rather than to carve a moat by innovating and creating cheaper and better lidar.

> So it’s not a surprise to see the low end models with lidar.

They could be going for a Tesla-esque approach, in that by equipping every car in the fleet with lidar, they maximise the data captured to help train their models.



It's the same with his humanoid robot. Instead of building yet another useless hype machine, why not simply do vertical integration and build your own robot arms? You have a guaranteed customer (yourself) and once you have figured out the design, you can start selling to external customers.


Because making boring industrial machinery doesn't sustain a PE ratio of about 300. Only promising the world does that.


> why not simply do vertical integration and build your own robot arms?

Robot arms are neither a low-volume, unique/high-cost market (SpaceX), nor a high-volume/high-margin business (Tesla). On top of that, it's already quite a crowded space.


The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was. He seemed to have drunk his own Kool-Aid back then.

And if he still doesn’t realize and admit he is wrong then he is just plain dumb.

Pride is standing in the way of first principles.


I think there’s room for both points of view here. Going all in on visual processing means you can use it anywhere a person can go, in any other technology; Optimus robots are just one example.

And he’s not wrong that roads and driving laws are all built around human visual processing.

The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self-driving continued operating normally, makes a good case for the approach.


Didn't Waymo stop operating simply because they aren't as cavalier as Tesla, and because they have much more to lose, since they are actually self-driving instead of just driver assistance? Was the lidar/vision difference actually significant?


The reports I’ve read said that some continued to attempt to navigate with the street lights out, but that the vehicles all have a remote-confirmation step where they call home to confirm what to do. That ended up self-DDoSing Waymo, causing vehicles to stop in the middle of the road and at intersections with their hazards on.

So to clarify, it wasn’t entirely a lidar problem; it was a need to call home to navigate.


> Going all in on visual processing means you can use it anywhere a person can go in any other technology, Optimus robots are just one example.

Sure, and using lidar means you can use it anywhere a person can go in any other technology too.


> roads and driving laws are all built around human visual processing.

And people die all the time.

> The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self-driving continued operating normally, makes a good case for the approach.

Huh? Waymo is liable for injuries, so all their cars called home at the same time and DoSed themselves rather than risk killing someone.

Tesla accepts no responsibility and does nothing.

I can’t see the logic that connects vision-only to the lights being out. At all.


> And people die all the time.

Yes... but people can only focus on one thing at a time. We don't have 360° vision. We have blind spots! We don't even know the exact speed of our car without looking away from the road momentarily! Vision-based cars obviously don't have these issues. Just because a car is 100% vision doesn't mean it has to share all of the faults we have when driving.

That's not me in favour of one vs the other. I'm ambivalent and don't actually care. They can clearly both work.


> And people die all the time.

They do, but the rate is extremely low compared to the volume of drivers.

In 2024 in the US there were about 240 million licensed drivers and an estimated 39,345 fatalities, which is 0.016% of licensed drivers. Every single fatality is awful but the inverse of that number means that 99.984% of drivers were relatively safe in 2024.
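As a back-of-the-envelope sketch (using the commenter's figures above, not official statistics), the percentages work out like this:

```python
# Rough check of the fatality-rate figure quoted above.
licensed_drivers = 240_000_000  # approximate US licensed drivers, 2024
fatalities = 39_345             # estimated US traffic fatalities, 2024

rate = fatalities / licensed_drivers
print(f"fatality rate: {rate:.3%}")          # about 0.016% of licensed drivers
print(f"inverse:       {1 - rate:.3%}")      # about 99.984%
```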

Tesla provided statistics on the improvements from their safety features compared to the active population (https://www.tesla.com/fsd/safety) and the numbers are pretty dramatic.

Miles driven before a major collision

699,000 - US Average

972,000 - Tesla average (no safety features enabled)

2.3 million - Tesla (active safety features, manually driven)

5.1 million - Tesla FSD (supervised)

It's taking something that's already relatively safe and making it approximately 5-7 times safer using visual processing alone.
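Taking Tesla's published figures at face value, the "5-7 times" range falls out of simple ratios of the FSD number against the two baselines; a rough sketch, with numbers copied from the list above:

```python
# Ratios behind the "approximately 5-7 times safer" claim,
# using the miles-per-major-collision figures listed above.
us_average = 699_000
tesla_no_features = 972_000
tesla_fsd = 5_100_000

print(f"FSD vs US average:          {tesla_fsd / us_average:.1f}x")        # ~7.3x
print(f"FSD vs Tesla, no features:  {tesla_fsd / tesla_no_features:.1f}x") # ~5.2x
```

The "5" end of the range compares FSD against Teslas with no safety features; the "7" end compares it against the overall US average.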

Maybe lidar can make it even better, but there's every reason to tout the success of what's in place so far.


No, you're making the mistake of taking Tesla's stats as comparable, which they are not.

Comparing "the subset of driving only on roads where FSD is available, active, and did not turn itself off because of weather, road, traffic, or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?

Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries? Including accidents sufficiently serious that the airbags could not deploy?


The stats on the site break it into major and minor collisions; see the link above.

I have no doubt that there are ways to take issue with the stats. I'm sure we could look at accidents from 11pm - 6am compared to the volume of drivers on the road as well.

In aggregate, the stats are the stats though.


> And people die all the time.

Most of them cannot drive a car. People have crashes for so many reasons.


Which Tesla self-driving is that? The one with human drivers? I don't believe they have gotten their permits for self-driving cars yet.


I wonder how much of their trouble comes from other failures in their plan (avoiding the use of pre-made maps and single city taxi services in favor of a system intended to drive in unseen cities) vs how much comes from vision. There are concerning failure modes from vision alone but it’s not clear that’s actually the reason for the failure. Waymo built an expensive safe system that is a taxi first and can only operate on certain areas, and then they ran reps on those areas for a decade.

Tesla specifically decided not to use the taxi-first approach, which does make sense since they want to sell cars. One of the first major failures of their approach was to start selling pre-orders for self driving. If they hadn’t, they would not have needed to promise it would work everywhere, and could have pivoted to single city taxi services like the other companies, or added lidar.

But certainly it all came from Musk’s hubris: first setting out to solve self-driving in all conditions using only vision, and then starting to sell it before it was done, making it difficult to change paths once so much had been promised.


> And if he still doesn’t realize and admit he is wrong then he is just plain dumb.

The absolute genius made sure that he can't back out without making it bleedingly obvious that old cars can never be upgraded to a LIDAR-based stack. Right now he's avoiding a company-killing class-action suit by stalling, hoping people will get rid of HW3 cars (and soon HW4 cars too) and pretending that those cars will be updated; but if you also need LIDAR sensors, you're massively screwed.


> The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was.

History is replete with smart people making bad decisions. Someone can be exceptionally smart (in some domains) and have made a bad decision.

> He seemed to have drunk his own Kool-Aid back then.

Indeed; but he was on a run of success, built on repeatedly and deliberately succeeding against established expertise, so I imagine that Kool-Aid was pretty compelling.


> The ways in which Musk dug himself in when experts predicted

This has happened a load of times with him. It seemed to ramp up around the paedo-sub incident, and I wonder what went on with him at that time.


Behaviour that would be consistent with stimulant abuse.


To be frank, no one had a crystal ball back then, and things could have gone either way, with uncertainty in both hardware and software capabilities. Sure, lidars were better even back then, but the bet was on cameras catching up to them.

I hate Elon's personality and political activity as much as anyone, but from a technical PoV it is clear he did logical things. Actually, the fact that he was mistaken and still managed not to bankrupt Tesla says something about his skills.


Musk has for a long time now been convinced that all problems in this space are solvable via vision.

Same deal with his comments about how all anti-air military capability will be dominated by optical sensors.


Will there be a major difference in ride experience when you take a Waymo vs a Robotaxi?


Considering that one requires a human babysitter and one doesn’t, on top of the difference in accident rates between them, it should be an easy yes.


Fair. So in a sense, the lidar-vs-camera argument can ultimately be publicly assessed/proven through the human-babysitter requirement (regulatory permits) and accident rates. Or maybe user adoption.



