This is yet another instance of the "what is the acceptable risk for introducing a new technology" problem.
New technologies tend to be rather unsafe when introduced: steam engines, cars, trains, airplanes, electricity, gas heating. All of those tended to kill early adopters, and some bystanders as well. Then they improve, and the next generations get used to them and to their inherent risks, which diminish over time but rarely drop to zero.
We are much more risk-aware than our ancestors, though, because we no longer live in a world of ubiquitous premature death. With average life expectancy exceeding 80 in many countries, we tend to be much more careful than our ancestors were when life expectancy was 47 (as it was in the USA in 1900).
I can understand that - I do not want to die under the wheels of a robot gone wild any more than anyone else - but we should still find a balance, unless we want to stagnate indefinitely.
The risks of self driving cars are nothing compared to the risks of biotech, which is just coming of age. But biotech holds a lot of promise, too.
PS.: Frankly, my worst fear about self-driving cars is malware and ransomware, not honest mistakes in the code. Computer security is basically an oxymoron and some people are smart and evil at the same time.
This is less about the introduction of self driving cars as a new technology, and more about a company introducing a level 2 driving assistance feature and calling it “self driving”, when in technical terms, it never takes over driving responsibilities from the human driver.
The danger in this case is entirely manufactured. Level 2 driving assistance features are safe when they are not oversold.
The Tesla FSD Beta program requires drivers to understand this is a level 2 system. The human driver has to be in the driver's seat paying attention 100% of the time. If they move to the back seat they will be banned from the system.
Safety critical systems should never be labelled with conflicting statements regarding safe use. Disclaimers aren't a safe way to mitigate this. When given conflicting statements, people tend to psychologically cling to whichever statement is strongest, most prominent, or most reinforcing of their predisposed views. Also, the driver of a car is not necessarily the owner.
There is a long history of people dying because they were confused by conflicting information about safety systems meant to protect them. Safety engineering starts with consistent and clear messaging.
It's not 2012 anymore, and there are now a number of vehicles with level 2 driving assistance features on the market. Yet, Tesla is the only manufacturer consistently in the headlines when one of their users is peer-pressured into showing off features their car doesn't actually have.
I agree that the media unfairly singles out Tesla. I guess there are a lot of people looking for articles that reinforce their view that Teslas are unsafe.
I'm not sure what you're arguing in terms of acceptable risk. Biotech is incredibly regulated, specifically because the risks are so high, effectively there is very little acceptable risk. In biotech, a patient dying due to your drug is a Big Problem that will at best cause you to put a disclaimer on the package (see Black Box Warning) and at worst immediately end your drug's prospects. We can argue about trade-offs (if you've got terminal cancer, maybe a rare heart event is a worthwhile risk, probably less so if you have a rash), but this is exactly the way it should be.
Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world. You get to read your phone instead of paying attention, and the trade-off is someone might get killed. It's like treating a rash with a drug that could give you a heart attack. That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $" as was the case with all the older examples you listed.
If self driving cars were more like airplanes, I'd have a little more faith. Tesla's marketing BS doesn't inspire much of it.
> Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world. You get to read your phone instead of paying attention, and the trade-off is someone might get killed. It's like treating a rash with a drug that could give you a heart attack. That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $" as was the case with all the older examples you listed.
This gets both the premise and the goal of self-driving cars backwards. A core premise of self driving cars is that they will be far safer than human-driven vehicles. 1.35 million people are killed on roadways every year globally; saving over a million lives a year means a lot. The technology isn't quite there yet, but it likely will be, and the promise is quite real. And Teslas with Autopilot do not appear to be killing people at a significantly higher rate than regular drivers [1].
There are plenty of resources that demonstrate why those statistics can be misleading. Chief among them: not all miles are created equal. It's like claiming autopilot in airplanes is significantly safer than ape-controlled aircraft. It's partly true, because apes handle the hard parts (takeoff and landing) and leave the easier parts to the software.
> It's like claiming autopilot in airplanes is significantly safer than ape-controlled aircraft. It's partly true, because apes handle the hard parts (takeoff and landing) and leave the easier parts to the software.
That's because the term "autopilot" is badly named, both in cars and in aircraft. People just think autopilot in aircraft means "plane flies itself" because they get on a jetliner and never see the pilot actually control it, leaving the impression that some combination of magic and electricity-infused rocks got them there instead of a human with assistive software.
Those statistics are pretty relevant when it comes to fatalities, which are more likely to occur at highway speeds. As for total accidents, yes: self driving cars aren't generally operating in city traffic yet. Those same resources will note that the most common accident for Teslas is being rear-ended by another vehicle, which is also relevant.
Let's not lose sight of the fact that this technology is under active development, with a theoretical target being eliminating the vast majority of car-related deaths. Nobody is arguing that it's already superior, though it may already be close in certain circumstances.
That’s the thing though. Those statistics aren’t showing that level of rigor. They aren’t even saying “miles driven at highway speeds”, just “miles with autopilot engaged”.
They don't control for things like vehicle age. They don't control for safety features, or even for other driver-assistance software. They aren't comparing driving conditions. They aren't comparing driving duration. They aren't comparing driving speed. They aren't even drawn from the same datasets. Etc. etc.
It's not a rigorous study, but it gets used as if it were evidence, when it's only slightly better than anecdote.
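To make the confounding concrete, here is a toy calculation with entirely invented rates (nothing below is real accident data); it shows how the mileage mix alone can manufacture a "2x safer" headline:

    # Invented numbers: why "fatalities per Autopilot mile" vs.
    # "fatalities per mile overall" can mislead.
    CITY_RATE = 2.0     # hypothetical fatalities per 100M city miles
    HIGHWAY_RATE = 0.5  # hypothetical fatalities per 100M highway miles

    def blended_rate(city_share: float, highway_share: float) -> float:
        """Fatalities per 100M miles for a given mix of road types."""
        return city_share * CITY_RATE + highway_share * HIGHWAY_RATE

    human = blended_rate(0.5, 0.5)        # 1.25  (50/50 mix of miles)
    autopilot = blended_rate(0.05, 0.95)  # 0.575 (engaged almost only on highways)

    # Autopilot looks ~2x safer even if it is exactly as good as a human
    # on the same road; the gap comes entirely from the mileage mix.
    print(human, autopilot)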
It'll probably be best to stick to U.S. statistics, because they show the difference between manual driving in a modern car and automated driving in a modern car. "Globally" includes countries whose cars aren't designed with the same safety standards we have, and which will lag at least a decade behind the US in receiving level 3+ ADAS when it becomes available.
> You get to read your phone instead of paying attention, and the trade-off is someone might get killed.
You get to drive, and the trade-off is someone might get killed. Your comment almost makes me think you haven't driven a car before, because you would remember the dull terror of seeing your life flash before your eyes for the 40th time this year because some moron ran a red and slammed the brakes in the middle of the intersection you were about to cross.
Until recently, motor vehicle accidents were a leading cause of death in the US. Saying that self driving would just be a luxury feature is truly a luxury position compared to those who have lost loved ones to drunk driving, speeding, snow, rain, new drivers, old drivers, blind drivers, and any of the myriad other ways to get yourself killed on a road. All of which would disappear with level 5 self driving.
> That's a far cry from, "with this technology something that took days and $$$$ now takes hours and $"
Extrapolate into the future and realize that once self driving is solved for one vehicle it's solved for all of them, and truck/bus/taxi driving as a profession will go bust. Without having to pay human drivers, who also need breaks, pensions, health insurance, etc., all these services can offer lower prices.
>All of which would disappear with level 5 self driving.
I think the post was about managing the risk that occurs before level 5 is reached. Assuming that it’s either on the immediate horizon or a foregone conclusion seems to be dismissive of those nascent risks
I drive a lot, thanks. If you can show me level 5, or even very good level 4, autonomous driving, and prove that a computational driver makes radically fewer fatal mistakes than a human, then I'm with you. In other words, if you can satisfy a good regulatory regime like, say, the ones for airplanes or drugs, then great.
The thing is that we're not going to get there if we disallow everything between level 2 and level 5 just because it can be a danger when used incorrectly. Level 3 by definition[0] is where the driver can look at their phone until the car beeps at or taps them to start driving again, and we know such a system won't be able to tell when it needs to request human intervention perfectly 100% of the time. Yet we need to get through level 3 systems before anything above them.
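To make the catch concrete, here is a minimal sketch of the handover loop level 3 implies, with a hypothetical `system` interface and an invented warning budget (the SAE definition does not fix a number):

    import time

    TAKEOVER_BUDGET_S = 10.0  # assumed lead time for a distracted human

    def level3_loop(system):
        while system.is_engaged():
            if system.can_handle_current_situation():
                # Level 3: the driver may legitimately look at their phone here.
                system.steer_and_accelerate()
            else:
                # The whole design hinges on the predicate above never firing
                # too late -- which is exactly the weak point discussed above.
                system.alert_driver()  # beep / tap / seat vibration
                deadline = time.monotonic() + TAKEOVER_BUDGET_S
                while not system.driver_has_taken_over():
                    if time.monotonic() > deadline:
                        system.minimal_risk_maneuver()  # e.g. stop in lane
                        return
                    time.sleep(0.1)
            time.sleep(0.05)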
Even autopilot in its current form has proven to be such an attractive nuisance for abuse that there is a market for those stupid steering wheel weights. How can you possibly not appreciate this problem?
Not the OP, and well, it's been a year and a half, but I regularly encountered unguided automotive cruise missiles in my morning drive down 237 in the bay area, with Tesla drivers being _particularly_ bad about sitting there playing with their phones. This is not a "life flashing before your eyes" situation, but it is a "this is very concerning" situation.
Perhaps I misunderstood the OP, but I didn't think they were talking about Tesla drivers paying no attention to the road and being absorbed on the phone, just their own driving experience of getting into near-accidents all the time.
"Self driving cars are a nice luxury, especially in city driving, not something that radically improves our world."
With truly autonomous vehicles, you can have radically different logistics for goods, delivery services, etc. You can also have specialized "sleeper cars" that get you to your destination overnight, fresh and ready.
Self driving cars can also park themselves somewhere out of sight and stop clogging inner cities.
I'd agree if it worked as advertised. Level 5 or very close to it is the key. No system has shown that, much less Tesla's. In the meantime, doing a live experiment with 3,000+ pound machines moving at 30+ mph seems like a bad idea.
There seems to be a misunderstanding about what's being referred to. Your "self driving cars are a nice luxury" refers to the current iteration, while the GP is talking about biotech and self driving cars in terms of the potential they hold, perhaps 15-30 years in the future.
We can't go straight from level 2 to level 4 without some real effort, and it's not exactly helpful when new level 2 systems can't even handle curves[0] or will continue to drive for you when you take your seatbelt off[1].
> You can also have specialized "sleeper cars" that get you to your destination overnight, fresh and ready.
Trains have had this capability for decades, and that is a very mature technology, with the upside of also carrying far more people at a time than a car-based system would.
I have taken them several times. Prague-Warsaw, Prague-Frankfurt, Vienna-Venezia, Prague-Tatras, Prague-Krakow.
First of all, trains shake a lot. I could at best take hourly naps; that was better than nothing, but far from optimal. The loud announcements from station loudspeakers whenever you stop somewhere do not help your sleep either.
Second, night trains are a paradise for opportunistic thieves. Yes, it is a solvable problem, but I haven't seen it solved.
I have no problem driving or biking among the LIDAR-based systems from Waymo and others in the SF Bay Area. I’ve seen them do stupid stuff, but it was always on the side of safety. On the other hand I’ve seen hands-free Tesla drivers reading books, using their phone, etc, which is both stupid and dangerous.
If they have that, why then do we see videos and news of people reading newspapers and sleeping behind the wheel of a Tesla?
The only feature I've heard of is that if you don't engage with the AP by applying torque to the steering wheel, it'll disable itself until you come to a full stop and park the car.
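Mechanically that amounts to a timeout that steering torque resets; a sketch with invented names and thresholds, not Tesla's actual implementation:

    TORQUE_THRESHOLD_NM = 0.3  # minimum torque counted as "hands on" (assumed)
    NAG_AFTER_S = 30.0         # visual/audio warning after this long (assumed)
    LOCKOUT_AFTER_S = 60.0     # disable the feature after this long (assumed)

    class EngagementMonitor:
        def __init__(self) -> None:
            self.idle_s = 0.0
            self.locked_out = False  # cleared only after a full stop + park

        def tick(self, dt: float, wheel_torque_nm: float) -> None:
            if abs(wheel_torque_nm) >= TORQUE_THRESHOLD_NM:
                self.idle_s = 0.0  # driver is holding the wheel; reset
                return
            self.idle_s += dt
            if self.idle_s >= LOCKOUT_AFTER_S:
                self.locked_out = True  # AP unavailable until parked
            elif self.idle_s >= NAG_AFTER_S:
                print("warning: apply torque to the wheel")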
>> This is yet another instance of the "what is the acceptable risk for introducing a new technology" problem.
That's a strawman. Just make the company fully liable for any injuries or deaths from accidents involving their cars. The market will figure it out from there. But that's not how it's being handled at all.
Tesla in legal disclaimers: "It's all on you, driver."
Tesla in FSD advertising of current state, not future goal: "The driver is only there for legal reasons. The car is driving itself". To be clear, that's a word for word quote from previous Tesla marketing (because I'm sure someone will chime in to say "Tesla doesn't do any advertising!").
Not sure the math works out here. Even if you got down to 100 deaths/y nationally assuming a wide-scale rollout (tens of millions of cars, billions of trips), which would be an astonishing achievement, that's still on the order of a billion dollars in payouts every year (assuming a ~$10M payout per death). Yeah... not gonna happen.
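Spelling that back-of-the-envelope arithmetic out:

    deaths_per_year = 100            # assumed: wide rollout, astonishingly safe
    payout_per_death = 10_000_000    # the ~$10M figure assumed above
    print(deaths_per_year * payout_per_death)  # 1_000_000_000 -> ~$1B/year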
No, it's not. It's about whether a specific company that has an extensive history of irresponsibility and killing people should be allowed to test. Tesla is far behind the competition and taking extensive risks to catch up.
There are companies who are much further along and take safety seriously.
Exactly. This is a problem with Tesla, not autonomous vehicles. There are lots of other autonomous vehicles being tested in San Francisco. In fact, it is a pretty popular place to test autonomous vehicles.
That 47 is life expectancy at birth, dragged down by child mortality. A white person reaching the age of twenty would have an average life expectancy into their 60s. Which is still lower than today, of course, but I'm not convinced that the difference supports a conclusion about differing attitudes toward risk.
Unless they were killed on the battlefield ... this kind of risk seems to have gone down too in developed states, and that is pretty significant. Europe in the first half of the 20th century, with exception of a few lucky countries, was one big war graveyard.
It just seems like this whole endeavor is solving the wrong problem. Building a self driving car is like making a faster horse. Individual cars on roads is incredibly inefficient and even EVs have a huge negative impact on health and air quality from tire and brake dust.
I think it should be obvious that the FSD thing is and has always been an attempt to boost Tesla's appeal and stock valuation. Right now it really seems like the company is playing chicken with regulators so that they have a plausible target of blame and an excuse for not releasing a feature they claimed was almost ready _years ago_ and sold to people which is quite obviously not even close to ready. It's a pretty common strategy in scams.
But it’s a problem that can be addressed at the individual level, rather than requiring large societal shifts like a move to more public transportation would. As much as I love the idea of easy-to-use public transport, the changes in US society needed to make that happen are unlikely to come in the next 5-10 years.
Self-driving cars can wait. Nothing of importance depends on it. But if it's regulated properly, and liability is where it should be, then introduction can come, in due time. Biotech is indeed a whole different can of worms.
> not honest mistakes in the code
It's not mistakes in the code, but oversight in the design that worries me.
I would argue that the total amount of time spent driving is huge and that as humans, with limited lifespans, we could mostly use that time for better purpose. (Not necessarily for work. Even Netflix would be better.)
I personally live in a country where being carless is feasible, and given that I hate driving, I am indeed carless. But I feel sorry for anyone in my situation who really does not have much choice and must spend X hours weekly behind the wheel.
Yeah, transformations like that literally take decades and require public approval. And there's absolutely no reason to believe the US is moving in that direction.
So sure, if you want to sit around and think "I wish I lived in the Netherlands", that's fine, but people actually have to live where they do.
And even if you started the most massive program of public transit and bike infrastructure, it would still be worthwhile to develop this technology.
Simpler in concept maybe. In practice, I am not so sure.
Just one example: In SV we voted for a tax increase in 2000 for a BART extension to San Jose. If we're (really) lucky, we might get it in 2030, after spending ~$10B. The total distance is ~20 mi, with fewer than 10 stations.
A question for you, and for all self-driving safety proponents: if Tesla's FSD cars are so safe, why is Tesla not taking on liability for them? Why is there only regular car insurance, punted onto the owners? When manufacturers of self-driving cars take on liability for all owners, or run taxi services with a monitoring driver who bears no liability, that is when self-driving will actually have arrived. Until then it is vaporware.
> why is Tesla not taking on liability for them?
Because that's still incredibly costly. We as a society care greatly about a decrease in net risk, but that doesn't mean that net risk is small enough for one party to bear.
If they make driving safer, they could insure more cheaply and make more profit than existing insurance companies. Or they could collaborate with insurers to reduce the insurance cost for Tesla drivers, e.g. by underwriting the costs of deadly accidents and splitting the profit. 35,000 deaths a year: that's a lot of dough. A money-making machine like Tesla must surely have thought of that, if the safety claim is remotely true.
Meanwhile, Tesla's actual pitch amounts to: all our cars' hardware supports some unspecified version of self-driving software that doesn't exist; to use our version of the self-driving software you have to add this $10k package; and while using it, you'll have to be in control of the vehicle the whole time.
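As a rough sizing of the pot mentioned above: the ~$10M per death is a round figure in the ballpark of US value-of-statistical-life estimates, and the prevented fraction is an outright assumption:

    us_road_deaths_per_year = 35_000  # the figure cited above
    cost_per_death = 10_000_000       # assumed ~$10M per death
    fraction_prevented = 0.5          # pure assumption about a safer-than-human system

    avoided = us_road_deaths_per_year * cost_per_death * fraction_prevented
    print(f"~${avoided / 1e9:.0f}B/year of avoided costs to underwrite against")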
So we should do absolutely nothing to improve the safety of cars (both for people inside and outside the vehicles) until magic self-driving cars provide the solution for all our problems?
There's plenty that can be done to make driving safer. Traffic calming measures. Stricter enforcement of traffic laws. Redesigning dangerous intersections. There's no need to wait for self-driving cars, and I suspect that you'll save far more lives by actually doing that stuff instead of hoping self-driving cars might solve the problem for you someday in the future.
Yep, and on another level of importance, but still important within that level: untold hours of human idleness and boredom spent sitting behind the wheel. I think drudgery is not too strong a word.
This is, to a certain extent, a post advocating for regulation. Take the aviation example. It was certainly dangerous for early adopters when it was a nascent technology. But one of the reasons aviation is now relatively safe (it is one of the very few examples of five-sigma quality) is that it is heavily regulated. Everything from maintenance to licensure to duty cycles is regulated.
I think one of the risks of the current paradigm is the inability of the US government to effectively introduce new regulatory legislation. The new method seems to be to put the onus on the industry to regulate itself in lieu of creating actual legislation, which can lead to perverse incentives. In the instances where this isn't the case, we have to deal with regulatory capture or the revolving door between industry and government, which creates its own skewed incentives.
Schedule and cost risk are always going to be present. I don't know how you'd fully mitigate that, given that humans struggle to adequately gauge risk and face asymmetric incentives around low-probability events.
> but we should still find a balance, unless we want to stagnate indefinitely.
I don't think there is any risk of stagnation. Maybe we should work on why we need cars in the first place. We can simultaneously work on public transport, or virtual presence, etc.
"Self driving cars" are just a subset of "autonomous vehicles". Public transport would benefit greatly from driverless buses.
Bus drivers are a scarce resource. Where I live, there is a shortage of people willing to rise at 4 a.m., take responsibility for 40-50 lives at a time, and at the same time have unpleasant interactions with members of the public (meeting a few asshole passengers every day is, unfortunately, part of the job).
You're right. I guess I'm thinking that mass transport systems with dedicated/fixed routes can be designed in ways that allow for much safer operation, versus an open-ended system like FSD. e.g. no risk of pedestrians on train lines.
We could also introduce safety features that use the FSD hardware and software, but only to avoid accidents rather than drive the car. Once you have a system that you know does a better job than humans at avoiding accidents, it is safe to introduce a system that actually drives the car.
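A standard building block for such an intervene-only system is a time-to-collision check; a minimal sketch, where the 1.5 s threshold is an assumption (real systems tune this carefully):

    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact if neither vehicle changes speed."""
        if closing_speed_mps <= 0:
            return float("inf")  # not closing in; no collision course
        return gap_m / closing_speed_mps

    BRAKE_BELOW_TTC_S = 1.5  # assumed intervention threshold

    def should_emergency_brake(gap_m: float, own_mps: float, lead_mps: float) -> bool:
        return time_to_collision(gap_m, own_mps - lead_mps) < BRAKE_BELOW_TTC_S

    # Example: 20 m behind a stopped car at 15 m/s (~34 mph):
    # TTC = 20/15 = 1.33 s -> intervene.
    assert should_emergency_brake(20.0, 15.0, 0.0)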
I wonder about the commercial feasibility of restricting autopilot to specified zones, maybe even specified expressways similar to turnpikes. Would this give self-driving technology the room to develop with a socially acceptable level of risk, or would it condemn it to go the way of the Concorde?