November 1st, 2020 by Paul Fosse
In this article, I’m going to describe how the Tesla community is focusing on the latest beta software yet totally missing the biggest threat to Tesla’s goal of enabling a fleet of robotaxis.
I’m as excited as any other Tesla fan about the tremendous progress shown in the latest beta, and that is the main reason I’m buying a Tesla Model Y instead of the Ford Mustang Mach-E, but that isn’t what this article is about. Is this an article based on Navigant Research’s Leaderboard? The one that shows Waymo, GM Cruise, and Ford autonomous vehicles as leaders and 17 of the 20 companies on the chart far ahead of Tesla? Ye of little faith, I didn’t fall for that garbage report. It must be based on some deeply flawed idea that the companies that build a few expensive cars in a lab and test them in highly mapped and safe environments with teams of highly paid and trained engineers will surely win the race to autonomy, not the upstart that puts a few cheap cameras, ultrasonic sensors, and radar on every car it makes and ties it all together with a computer behind the glovebox. We covered this bad joke of a report much earlier here and here.
As a software engineer with 35 years of real-world experience building software and using it in corporate environments, I’ve found the keys to a successful software project are:
- Having a technical genius with a complete product vision in charge of the project and not falling for the fallacy that project management (while essential) is a substitute for brilliant developers.
- Choosing the correct problem to solve. For example, Microsoft Excel isn’t successful with accountants because Microsoft studied the generally accepted accounting principles (better known as GAAP) and built those standards into the product. It is successful because the company didn’t. Microsoft just built a tool that adds up numbers (and a whole lot more) and doesn’t try to do the accountant’s job. It just makes the accountant more productive by automating repetitious tasks. (That was the right decision for when Excel was created. For the future, that might not be the right decision. Maybe now it should attempt to have more intelligence.)
Clearly, Tesla is in agreement with me on the first point. Tesla hires smart developers and gives them the resources they need to succeed.
On the second point, it is a delicate balance. Sometimes the general product is the right decision. The Apple iPhone is an example of a platform that wasn’t (and isn’t) a great phone but is a great platform for others to add value to. On the other hand, Uber has been very successful at not solving many problems, but at being laser-focused on solving one specific problem well. I am convinced that Tesla’s approach to solving self-driving by teaching a car to drive in the environment it sees (as humans do) will produce a more robust solution than the many companies that hand-code behaviors into their cars but don’t have any way of handling the millions of exceptions, other than hand-coding and testing each edge case.
Full Self-Driving from a Public Policy Perspective
For the purposes of this article, I’m going to assume Tesla’s Full Self-Driving (FSD) beta is rolled out to the whole fleet in the US in late December and has some minor glitches but no major issues. I’m going to assume that it is tested by Tesla’s hundreds of thousands of customers throughout all of 2021 and performs exactly as planned.
I’m assuming it follows the same progression as Navigate on Autopilot last year. At first, the software must be corrected every few minutes, but as more real-world data is gathered and analyzed by the neural network, the system gets to the point of being able to take you to your destination 99.99% of the time without any help.
I’ll assume (now this is a stretch) that all of those hundreds of thousands of customers will be responsible beta testers and will follow Tesla’s instructions of remaining vigilant and ready to take control at any time, throughout 2021, even as the system becomes almost flawless.
At the end of 2021, Tesla will be excited about its success and will undoubtedly roll out its robotaxi software for owners to use, but with the stipulation that a human still has to be in the driver’s seat, monitor (and be responsible for) every action FSD takes, and be ready to take over the car at any time.
I think this robotaxi rollout won’t be that successful, for a simple reason: most Tesla owners are financially successful people with busy jobs and families who don’t have time to give people rides. Without enough participating owners, the service won’t have the scale it needs to serve customers well in most markets. So, commercially, it probably won’t be a big success.
But the alternative purpose is to get millions of people to download the Tesla rider app and try out a ride in a self-driving car. I think it will be successful at that objective and prepare the company for 2022, when some city or state (or maybe a country other than the US) will look at the data that Tesla has compiled and presented on its fleet of self-driving cars and give Tesla approval to use the cars on public roads in their jurisdiction with no human driver.
The data will undoubtedly show that Tesla’s FSD system is at least 10 times as safe as the average human driver. It might be a little better, but I don’t think it will be a lot better. Elon has been clear that if we as a society wait until the self-driving car is perfect or nearly perfect, that will take many years and millions of lives will be lost in auto accidents waiting for the car to reach this high standard. Elon feels strongly that it is morally wrong to let those thousands of people die and thousands of others to be injured when he has the technology to prevent those calamities.
Never one to avoid a tangent, this reminds me of some states’ (cough, California, cough) reaction to the COVID virus. Once the state locked down to “flatten the curve,” the goalposts were completely changed and the standard to reopen the economy may now be impossibly high. I’m talking about the cure being more deadly than the disease. That is what this article is about: the inability of the public to think rationally when they are manipulated by politicians and the media. I’m claiming that the lockdown of some states (California) may cause more deaths than the number of lives saved by the lockdown. Now, I could be right or wrong in that opinion, but that doesn’t matter for this article. What matters is the concept. There is a chance that I’m right, and there are powerful politicians, media, and social media giants (Facebook, Twitter, YouTube, etc.) that are not letting you hear all the evidence on the subject and make your own decision. They are censoring certain voices so that their preferred narrative has a greater chance of being accepted by the public. This article is about how the same (or similar) forces that crush minority views could crush Tesla’s chance to roll out Full Self-Driving successfully.
I pride myself on being a good devil’s advocate, and I could argue many reasons why everything in the previous paragraphs is unlikely to happen, but that is not the subject of this article.
The Seen and the Unseen
The concept of “the seen and the unseen” was first advanced by a famous French economist (not as famous as he should be, but I digress), Frédéric Bastiat. He wrote an essay in July 1850 about a shopkeeper whose window is broken by a careless child. A poor economist thinks this boosts the economy, since he focuses on the money paid to repair the glass and how the multiplier effect takes that money and creates a wave of activity throughout the community that would not have happened if the child had not broken the window. The poor economist stops there. The better economist goes on to ask: what would the shopkeeper have done with his money had the window not been broken?
Here is the same message for Millennials and Zoomers (or anyone with a short attention span; I’m amazed you made it this far in this article):
Self-Driving Cars Approved When 10 Times Safer than a Human Driver
Let’s get some statistics out of the way for some back-of-the-envelope calculations. About 3 trillion miles a year are driven in the US, during which there are unfortunately about 4 million people injured and about 40,000 killed. The average miles a year per car is around 13,500.
For the sake of round numbers, let’s say you will be in a car accident with injuries about once for every million miles you drive, or approximately once in your lifetime (I’ve been in one and hope not to be in another, so the stats make sense to me). A fatal accident happens about once every 100 million miles, so over a lifetime you have roughly a one-in-a-hundred chance of being in an accident in which someone dies. But since you know more than a hundred people, you likely know someone (a friend or family member) who has died in a car accident. Maybe I hang around with a bad crowd or know way more than 100 people, because I know 3 people (2 of them close friends) who have died in car accidents (all alcohol-related) and of course many people who have been injured in car accidents, some of them quite seriously.
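The rough rates above follow directly from the national figures. Here is a minimal sketch of the arithmetic, using the approximate inputs quoted in this article (note the exact quotients are 750,000 and 75 million miles, which the text rounds up to one million and 100 million for convenience):

```python
# Back-of-the-envelope US road-safety rates, using the article's
# approximate national figures (all inputs are rough estimates).
MILES_PER_YEAR = 3e12      # ~3 trillion miles driven in the US annually
INJURIES_PER_YEAR = 4e6    # ~4 million people injured per year
DEATHS_PER_YEAR = 4e4      # ~40,000 people killed per year

miles_per_injury = MILES_PER_YEAR / INJURIES_PER_YEAR  # 750,000 miles
miles_per_death = MILES_PER_YEAR / DEATHS_PER_YEAR     # 75,000,000 miles

print(f"One injury accident roughly every {miles_per_injury:,.0f} miles")
print(f"One fatal accident roughly every {miles_per_death:,.0f} miles")
```

At the article's rounded rate of one injury per million miles, an average driver covering about 13,500 miles a year for several decades does indeed face roughly one injury accident per lifetime.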
The Point of the Article — Yes, I’m Finally Ready to Give You the Punchline
Have you ever heard the saying, “to err is human, but to really foul things up requires a computer?” Like many quotes, its origin is disputed, but it is popular because it preys on the fears of people. And fear is a powerful motivator.
If Tesla is able to get FSD approved in a state or country and it goes well, other states will have a “fear of missing out” (FOMO) and quickly follow. Tesla will quickly allow 100,000 cars to use Full Self-Driving without a human driver in 2022, and those cars will likely drive at least 10,000 miles each. Together, they will travel about one billion miles. Had they been driven by average human drivers, they would have had a thousand accidents with injuries and 10 accidents with deaths. Assuming Tesla’s Full Self-Driving software is 10 times better than a human driver, Teslas will still have accidents, but only 100 with injuries and only 1 with a death. The $64 trillion question is: how will the public react to those injuries and deaths?
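The projection above can be sketched as a few lines of arithmetic. The fleet size, per-car mileage, and 10× safety factor are this article's assumptions, not Tesla data, and the accident rates are the rounded figures from the earlier section:

```python
# Projection under the article's assumptions: 100,000 driverless Teslas,
# ~10,000 miles each in 2022, and FSD assumed 10x safer than the average
# human driver. Rates are the article's rounded per-mile figures.
CARS = 100_000
MILES_PER_CAR = 10_000
MILES_PER_INJURY = 1_000_000      # rounded: one injury accident per million miles
MILES_PER_DEATH = 100_000_000     # rounded: one fatal accident per 100M miles
SAFETY_FACTOR = 10                # assumed FSD improvement over human drivers

total_miles = CARS * MILES_PER_CAR            # one billion miles

human_injuries = total_miles / MILES_PER_INJURY   # 1,000 expected for humans
human_deaths = total_miles / MILES_PER_DEATH      # 10 expected for humans

fsd_injuries = human_injuries / SAFETY_FACTOR     # 100 expected for FSD
fsd_deaths = human_deaths / SAFETY_FACTOR         # 1 expected for FSD
```

The gap between the two scenarios (900 injuries and 9 deaths avoided) is the “unseen” benefit the rest of the article argues the public will overlook.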
If the public thinks like the good economist and focuses on the 900 injuries prevented and the 9 lives saved, the technology will be heralded as a great success. But what if forces (maybe even financed by people with ties to the fossil fuel industry, or foreign governments like Russia or China) focus the media on every one of those 100 injuries, and the single death lands on the front page of every newspaper? Okay, my mistake: there probably won’t even be newspapers in 2022. So, let’s say sophisticated but secret algorithms promote stories written specifically to make Tesla look bad and uncaring. These stories could start beating the drum for the technology to be recalled until it can be proven “safe,” of course totally ignoring the lives and injuries saved by the technology.
Realize that although it is likely that FSD can be made 10 times safer than a human driver, when it does make a rare mistake, it is not likely to be a mistake a sober human would have made in that situation. This will make Tesla easy to vilify. Critics will be able to run the clips over and over and say that any human would have avoided this injury or death, so why is Tesla killing and injuring people? The larger and more successful Tesla is at that point, the more enemies it will have amassed. Millions of people will have been displaced from the traditional auto industry, utility industry, and energy production sectors. Some of them will be looking to get even.
What Can Tesla & the Community Do to Prevent this Bad Outcome?
Well, I’d like to say that Tesla PR will be on top of this like flies on … wait, no, that won’t happen, since Tesla disbanded its corporate PR department. Well, this article is already way too long, and luckily my friend Chanan has already thought hard about this problem and proposed a solution. His solution may or may not be the correct approach, but it sounds reasonable to me. I think some work in “getting ahead of the story” is important, since as the accidents, injuries, and deaths eventually happen, Tesla needs to have a response ready to go.
Even before any of these happen, it is important for writers (like me) to temper our enthusiasm just a bit and warn our readers that no matter how good the technology, accidents, injuries, and deaths will happen, and as long as they are rare, the technology should not be recalled.
This doesn’t mean that each accident shouldn’t be reviewed and the software shouldn’t learn from its mistakes, only that we don’t need to disable the software for the hundreds of thousands of users while that learning takes place. Suppose several accidents happen in a short period of time while cars are making left turns into a setting sun. A reasonable response would be to disable left turns for an hour around sunset when driving into the sun until the cause is determined. What I fear is that the FSD software would instead be disabled on all cars in all conditions (including conditions the software is handling perfectly) until this edge case is solved.
Please share your own ideas in the comments section below.
If you decide to order a Tesla, use a friend’s referral code to get 1,000 miles of free Supercharging on a Tesla Model S, Model X, Model 3, or Model Y (you can’t use it on the Cybertruck yet). Now good for $100 off either solar panels or a solar roof, too! If you don’t have any friends with a Tesla, use mine: https://ts.la/paul92237
Full disclosure: I own stock in Tesla [NASDAQ:TSLA]. However, I do not offer any investment advice of any sort (except “buy low and sell high”), and neither does CleanTechnica.
Have a tip for CleanTechnica, want to advertise, or want to suggest a guest for our CleanTech Talk podcast? Contact us here.