Elon Musk: “The Probability Of Having An Accident Is 50% Lower If You Have Autopilot On”

1 year ago by Steven Loveday

Elon Musk and Ketil Solvik-Olsen, Minister of Transport and Communications, in Norway

As we reported, Tesla CEO Elon Musk recently spoke in Norway. While talking with Ketil Solvik-Olsen, Minister of Transport and Communications, Musk said that the data confirms there is a 50% lower chance of being in an accident while using Tesla’s semi-autonomous Autopilot mode. The 35-minute video is filled with other interesting information. Check out the video below; Musk’s Autopilot comments begin around the 22-minute mark.

Musk said:

“The probability of having an accident is 50% lower if you have Autopilot on. Even with our first version.

So we can see basically what’s the average number of kilometers to an accident – accident defined by airbag deployment. Even with this early version, it’s almost twice as good as a person . . . I think it’s going to be important in terms of satisfying regulators and the public to show statistically, with a large amount of data – with billions of kilometers of driving – that the safety level is definitively better, by a meaningful margin, if it’s autonomous versus non-autonomous.”

It will be a while before Tesla can log billions of miles of data, but Tesla drivers have already recorded 47 million miles in Autopilot mode – all of it with the first-generation, semi-autonomous Autopilot. Musk confirmed that the second-generation, fully autonomous system will be ready in a few years. Government regulations will determine when, and to what extent, the technology can be used.
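
As a back-of-the-envelope illustration of the metric Musk describes (average distance per accident, with an accident defined by airbag deployment), here is a minimal Python sketch. Only the 47-million-mile figure comes from this article; every other number is a hypothetical placeholder, not Tesla data.

# Miles per airbag deployment, Autopilot vs. manual driving.
autopilot_miles = 47_000_000   # Autopilot miles logged so far (per article)
autopilot_deployments = 25     # HYPOTHETICAL deployment count
manual_miles = 47_000_000      # HYPOTHETICAL comparable manual mileage
manual_deployments = 50        # HYPOTHETICAL deployment count

miles_per_accident_ap = autopilot_miles / autopilot_deployments
miles_per_accident_manual = manual_miles / manual_deployments

# "Twice as good" = twice the miles between accidents, i.e. roughly a
# 50% lower probability of an accident per mile driven.
print(f"Autopilot: {miles_per_accident_ap:,.0f} miles per accident")
print(f"Manual:    {miles_per_accident_manual:,.0f} miles per accident")
print(f"Ratio:     {miles_per_accident_ap / miles_per_accident_manual:.1f}x")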

Hopefully, as the data continues to accumulate and show a positive impact, regulations will move forward.


47 responses to "Elon Musk: “The Probability Of Having An Accident Is 50% Lower If You Have Autopilot On”"

  1. SparkEV says:

    Accidents would be almost 0% if we could remove people from driving.

    1. Anon says:

      Everyone pretty much believes THEY THEMSELVES are awesome drivers, and it’s EVERYONE ELSE that sucks at it.

      According to science, this is NOT usually true.

      1. SparkEV says:

        Not just usually not true, but NEVER true. Sensors around the car would be like a human with 8 pairs of eyes, plus far quicker reaction time. The problem is always the other humans on the road.

  2. Vexar says:

    I struggle to believe that autopilot can even function in a snowstorm or handle a downpour. I can just hear the triple-chime with a pop-up on the dash indicating “The weather is wretched. If you think you can do better, disengage auto-pilot, and I’ll say my prayers to Arthur C. Clarke.”

    Still waiting for the autopilot taxi service, though.

    1. kosee says:

      Meh.. surmountable issues which will be resolved. Driverless cars are coming as a double shock to the industry, together with BEVs.

    2. Speculawyer says:

      “Still waiting for the autopilot taxi service, though.”

      And it has to be just like this:

    3. Josh says:

      Humans have a poor track record in those conditions also. I live in Houston, so that isn’t even a humorous topic this week.

      Sadly a PhD engineer I used to work with lost her life driving through flooded waters last week.

  3. VazzedUp says:

    Think this all shows the need for a more precise GPS system; maybe that will be next on the list for SpaceX.

  4. scott franco (No M3 FAUX GRILL!) says:

    Two things:

    1. A car sliding on ice is going to confuse the hell out of any autopilot. It confuses the hell out of most people. A friend of ours had the “classic black ice accident”, sliding out of control on a bridge in cold weather. Autopilots don’t read the “ice on bridge” sign any more than people do.

    2. I’m a programmer. I don’t trust programs to run my computer, much less drive my car. The quality control of my industry is abysmal.

    1. Pushmi-Pullyu says:

      Speaking as another computer programmer, I know perfectly well it will never be possible to entirely eliminate accidents, even fatal accidents, by using self-driving cars. But speaking as a human being, I also know perfectly well that computers and software can be, and soon will be if they’re not already, capable of driving more safely than humans are.

      There is going to be illogical resistance to using self-driving cars, just as there was illogical resistance to wearing seat belts. Back in the day (and you can still hear people say this occasionally), people would argue that you could get trapped in a car if your seat belt buckle jammed, which could possibly result in your death if you were trapped in a burning car. This pretty firmly ignored the reality that, statistically speaking, you’re far safer wearing the seat belt than not.

      1. Speculawyer says:

        Exactly. It is not a question of making things perfectly safe. It is about making things statistically safer than human controlled things.

        When the computer programming screws up and kills people, the family can sue and recover. But that will happen less often than people dying due to their own bad driving, or due to other bad drivers crashing into them.

      2. scott franco (No M3 FAUX GRILL!) says:

        Oh, I am well aware that the equation is:

        if (accident_rate_automation < accident_rate_human) switch_to_automation;

        And I rely on automation when I fly my airplane in weather. When you fly in whiteout conditions, you trust the instruments or you are dead.

        HOWEVER

        The safety of driving, i.e. accident_rate_human, varies by driver. I have never had an accident that was my fault in my life (I have been rear-ended and sideswiped by idiots that I could not avoid).

        Thus, my personal accident_rate_human = 0. Period. I.e., you are never going to get automation accident rates DEMONSTRABLY lower than mine.

        Plus, and THIS IS THE IMPORTANT THING: Any accident I would have is on me. Any accident a Google driver has is on some nameless, faceless moron who had a bad day and missed a semicolon in his code, whom I will never meet.

        1. SparkEV says:

          You’re basing your opinion on the current state of things, as if it’s going to stay that way forever. The fact is, these are expert systems that learn. If they’re done right, they would accumulate millions of drivers’ experience into one.

          Having studied a bit of adaptive systems in college, I know much of the system is not dependent on human programming. If done right, bugs will shake themselves out as more knowledge is gathered.

          Combined with better sensors than humans could ever hope to have, I will fully trust self-driving cars far more. For example, the onset of icy road conditions would be detected by wheel sensors as well as the on-board IMU, and microsecond adjustments can be made. The system will already have in its database what to do when such conditions occur. Meanwhile, every human has to learn this from scratch, never mind that they have neither the senses nor the reaction time.

          The question of autopilot is when, not if.

        2. Rightofthepeople says:

          But Scott, you are only considering the accidents YOU caused, which are zero. What about the accidents you did not cause but which still impacted you? Being rear-ended or sideswiped is not your fault, but it can still cause very real damage to you and your vehicle. Meanwhile, autonomous cars might be able to detect AND AVOID, or at least minimize the impact of, those accidents before they occur, because of the sensors placed all around the vehicle. Not to mention, if we get to the point where everyone is riding in an autonomous vehicle, you might practically eliminate the accidents you speak of, because they were all likely caused by human error.

          My only real fear is when Skynet starts talking to the cars and asking questions like “hey, why are WE driving THEM around?” 🙂

    2. Speculawyer says:

      You are quite right to be skeptical. We all should be.

      But you bring up a good example situation that should be considered and there are ways of addressing it.

      For example, the mapping system can be informed of where all the bridges are, and the GPS would know when you approach one. A temperature gauge could measure the outside temperature and, if it is cold, the car could decide to slow down and avoid sudden turns while on the bridge.

      It is not that we are building powerful AI consciousness . . . we are building expert systems with massive amounts of processing power and data storage. It doesn’t have to read the sign, processors can just consult big databases of information and make little decisions millions of times a second.
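
      A minimal sketch, in Python, of the kind of rule described above. Everything here (the MapDB class, the 200 m lookahead, the 3 °C threshold, the 30% slowdown) is a hypothetical stand-in, not Tesla’s actual implementation.

      # Toy expert-system rule: slow down when approaching a bridge in
      # near-freezing weather. Positions are simplified to one dimension.

      class MapDB:
          """Hypothetical map database that knows where bridges are."""
          def __init__(self, bridge_positions_m):
              self.bridge_positions_m = bridge_positions_m

          def nearest_bridge_distance_m(self, position_m):
              return min(abs(b - position_m) for b in self.bridge_positions_m)

      def target_speed_kph(position_m, map_db, outside_temp_c, current_speed_kph):
          """Consult the map and thermometer; no sign-reading required."""
          near_bridge = map_db.nearest_bridge_distance_m(position_m) < 200
          if near_bridge and outside_temp_c <= 3.0:   # at or near freezing
              return current_speed_kph * 0.7          # back off before the bridge
          return current_speed_kph

      # Example: 150 m before a bridge, 1 °C outside, doing 100 km/h.
      db = MapDB(bridge_positions_m=[1000.0])
      print(target_speed_kph(850.0, db, outside_temp_c=1.0, current_speed_kph=100.0))  # 70.0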

    3. r121 says:

      It shouldn’t have to rely on reading signs, which could be deceiving. The system is always on and adaptive to ever-changing road conditions. With a software implementation, I believe the car could achieve features like 2nd-gear start, central differential lock, and low gears. Maybe it’s already doing so, but no one would notice since it’s always on.

    4. eloder says:

      ABS and traction control systems already perform better than humans on ice. That’s why these systems exist…

      Also, humans can’t see in the dark, can’t see through fog, can’t see through precipitation last I checked. Sensors can. While there are fringe cases autopilot can’t do well now, these will be pretty much eliminated in 5-10 years fully. And you don’t have to make autopilot perfect, it just has to be better than humans which ultimately isn’t hard to do.

    5. AlphaEdge says:

      > Autopilots don’t read the sign “ice on bridge” any more than people do.

      They can be programmed to read that, as any person who claims to be a programmer should know. They can read stop signs, and speed limit signs.

      1. Michael says:

        It’s not just the reading of the signs and the mapping of the bridges, etc. that will make automated driving much safer than manual…

        It’s the collected data of all the cars on the road that will make autos truly automatic. If your car knew were not only where it spotted road signs, but where all the road signs that were ever spotted by any car, that’s where things get powerful.

        Eventually the computer will log every instance of ABS use, traction control engagement, sudden changes of speed and direction, etc, and have a statistical model of where known hazards are and where unknown hazards are more likely to be found. Just by looking at the frequency of intervention events.

        This statistical model will be weighted for more recent events, adjusted for weather and season, and actual accidents will rapidly approach extinction.
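
        A toy version of that statistical model (all event logs, event names, and the 90-day half-life below are invented for illustration):

        # Weight logged intervention events (ABS, traction control, sudden
        # braking) by recency, then rank road segments by weighted frequency.
        from collections import defaultdict

        HALF_LIFE_DAYS = 90  # hypothetical: an event counts half as much every 90 days

        def recency_weight(age_days):
            return 0.5 ** (age_days / HALF_LIFE_DAYS)

        # (segment_id, event_type, age_in_days) -- made-up fleet logs
        events = [
            ("bridge_17", "abs", 2), ("bridge_17", "traction", 5),
            ("bridge_17", "abs", 400), ("exit_3", "sudden_brake", 30),
        ]

        hazard_score = defaultdict(float)
        for segment, _event_type, age_days in events:
            hazard_score[segment] += recency_weight(age_days)

        # Highest score = most likely hazard location right now.
        for segment, score in sorted(hazard_score.items(), key=lambda kv: -kv[1]):
            print(f"{segment}: {score:.2f}")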

      2. Jim Whitehead says:

        Tesla cars already read signs and traffic lights and report them to HQ, but don’t act on them right now because the confidence level isn’t at the 0.9999 level yet.

        I used to do A.I. research; Tesla can’t use signs until reading mistakes are under 1 in a million, because people can be killed if a 30 MPH sign is read as 80 MPH due to snow, dirt, rain, etc.

        1. EV Driver says:

          Good point.

          And any good engineer should know that you want multiple points of data to avoid failure if one point is corrupted.

          Tesla already knows this and is most likely amassing a large database of the road systems around the world. Soon, the cars will probably be able to consult Tesla’s database for information about the road, as well as communicate with other cars in the area. The car could then choose the safest option when the data conflicts: if Tesla’s database says the road’s speed limit is 30 mph, other cars are driving 45, and the sign reads 80, the car should choose 30 mph – maybe a little higher if the driver chooses to exceed speed limits.
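
          A minimal sketch of that conflict-resolution rule, using the hypothetical readings from this comment (a real system would weigh many more inputs):

          def choose_speed_limit_mph(database_mph, observed_traffic_mph, sign_ocr_mph):
              """When sources conflict, trust the most conservative reading."""
              readings = [v for v in (database_mph, observed_traffic_mph, sign_ocr_mph)
                          if v is not None]
              return min(readings)

          print(choose_speed_limit_mph(30, 45, 80))  # -> 30, ignoring the misread sign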

    6. arne-nl says:

      Silly.

      You never get in an airplane either? It runs on software, with the pilots more like onlookers who can do more harm than good (especially if a pilot suffers from depression and doesn’t tell anyone about it).

      The Space Shuttle flew on software. A modern car already runs on 100s of millions of lines of code.

      Your grey matter contains more bugs than you’d like to admit. Our creator wasn’t a terrific programmer and who was there to do QA? 😉

      A sliding car will only confuse the hell out of an autopilot once. After a few occurrences and attempts to correct, the autopilot will know what to do better than a human does, and that knowledge/experience is then instantly and forever available to every autopilot after that. As a programmer you should know that, but you still look at software from your human standpoint – as if an autopilot were some sort of human mind in a box.

      Humans start their lives with 0 knowledge and 0 experience. Each and every autopilot will come with the combined knowledge and experience of all autopilots before it.

    7. kdawg says:

      Do you realize that when sliding on black ice (or any sliding) your car uses traction control programming already?

  5. Pushmi-Pullyu says:

    I wonder how long it will be until insurance companies start giving discounts for cars equipped with autonomous driving features? That alone would help drive the adoption of self-driving cars, even in the absence of laws and regulations.

    1. Anon says:

      Soon I hope. 😉

  6. r121 says:

    It’s obscene not to turn on Autopilot with the speed set to 120mph. It’s obscene not to smoke any Ferrari on an open road.

  7. Breezy says:

    Do the data account for the fact that Autopilot will only engage in situations that are safer to begin with, such as freeway travel?

    How are accidents that occur after Autopilot disengages itself accounted for? Is that an accident that occurred while using Autopilot?

    And I thought there were no accidents with Autopilot engaged. I guess there have been since he last spoke about this.

  8. Trollnonymous says:

    There are 4 sides to the car.
    So can it avoid a collision from its rear end?
    Left side impact?
    Right side impact?

    Sure, I believe it can avoid a front impact with a stationary object.
    But what about a drunk driver head-on? Or a tired driver head-on? Or any object moving toward the car?

    IMHO, the best most autonomous cars can do is stop.
    I could be wrong, but I have yet to see one take “evasive action”…..lol

    They could be the first?

    1. Speculawyer says:

      I haven’t looked into it but I certainly think they could be programmed to do some evasive actions. For example, if some fool next to you is texting on his phone and starting to drift over into your lane about to hit you, the autopilot could be programmed to look over at the shoulder, see if it is clear, and then move over into the shoulder to prevent contact between the two vehicles.

      And hopefully automatically honk the horn as well.

    2. jh says:

      There are a number of videos of close calls where the car does evasive manoeuvres on its own. So yes, it does. In a rear-end situation I would think it simply lets it happen, if for no other reason than a legal one.

      As for the question about the statistics: Elon isn’t one to mess around, especially as he knows the numbers will be scrutinised. So I would guess the comparison is weighted apples to apples, so to speak; there are good averages on highway driving, as there are figures from Tesla. In about a year he will have a veritable treasure trove of statistics to show regulators….

    3. AlphaEdge says:

      There is a video of a Tesla moving to the side as a truck entered its lane. It avoided the accident without stopping. So it already does that.

      ***mod edit (Jay Cole)***
      I think you probably mean this one?

      Autopilot Prevents Truck From Colliding With Tesla Model S – Video
      http://insideevs.com/autopilot-prevents-truck-from-colliding-with-tesla-model-s-video/
      ***mod edit***

  9. Speculawyer says:

    I, for one, welcome our new robot overlords.

  10. James says:

    To me, the whole autonomous taxi cab, and the autonomous driving option for personal transportation devices, is fraught with personal liabilities.

    Recently I heard a Google project manager for the Google Car state that California had just passed legislation naming THE CAR as liable when an autonomous vehicle kills someone and/or causes property damage. (!!!)

    There has to be a smoking gun. There will always be victims and innocent parties affected by a computerized machine with a danger quotient as high as a moving vehicle’s – sometimes at high velocities.

    No amount of failsafes can cope with all eventualities. You guys got off on the weather subject today, but how about maintenance? Humans often fail to maintain machines. Will the cars also maintain themselves, and pay for that maintenance themselves? We cannot remove the human nature of being human by making advanced machines in a utopian world.

    I think that is what autonomy boils down to. Making a perfect world humans cannot mess up. We get into accidents. We drive intoxicated and distracted. These new features can advance safety by watching over our shoulders. But we, the owner and operators of said machines will always be liable for what happens to others when things go wrong.

  11. James says:

    My hope is that Musk spends less mental capital on Autopilot and its possibilities, and more energy on the scalability of both affordable EVs and their charging infrastructure.

    I think a lot about energy, and the limited amount of it. In that regard, intellectual energy is paramount. Tesla only has so many engineers, so much money, etc. Tesla needs to budget that energy and direct it toward survival and prosperity, to take it to the next level.

    All the energy spent on Autopilot makes for good weekly news stories, but the jump from what Autopilot and competing systems are now to a newer iteration whose benefits are monumentally better than the current offerings is enormous.

    I’d think a shorter way of explaining my concern is: “Elon, don’t get distracted”. Stay the course. The course being weaning humans off of a petroleum-based economy and into transportation that is sustainable.

  12. ModernMarvelFan says:

    Well, Google used to claim that its autopilot program was never involved in an accident – until recently…

    Nuclear power proponents would always argue that nuclear power is the safest until the next nuclear accident…

    With all that said, I would agree that if we could just eliminate all the drivers who use cellphones while driving, we would already be 50% safer. And since we can’t stop people from using cellphones while driving, autopilot is sure to be safer than people who are texting behind the wheel…

  13. Mxs says:

    Proof?? Or is it just because Elon has said it?

    1. Anon says:

      Their cars log Autopilot events. Tesla has more data than anyone concerning autonomous driving and its impact on their customers.

      Your snark aside, I would expect a more fleshed-out presentation on this topic in the near future…

      1. Mxs says:

        I would rather call it a snarkY claim. Try to remove your Tesla-branded Ray-Bans and be objective.

        They have no proof. On the contrary, their autopilot is a ticking bomb…. because of how untested it has been before being released to the public.

    2. arne-nl says:

      Didn’t you notice the part where he talked about airbag deployments?

      That’s where the data comes from. Tesla logs these events and thus knows there has been on average one air bag deployment for every x miles. After 47 million autopilot miles there is apparently enough data to indicate a 50% reduction in such events.

  14. joeski1 says:

    I think Elon is going out on thin ice here… as a Tesla owner who has used AP quite a bit, I can assure you it is nowhere near “perfected”… sometimes it is even damn scary and I must grab hold of the wheel and take over control of the vehicle… this is not a model situation… this is the real world… and the driver must be on the ball… not resting… or playing cards… or shaving… or applying make-up… this system is supplemental… NOT primary… you are a fool if you let some half-brain computer system designed by pot smokers in CA take over and drive your $100,000-plus vehicle… sorry… I just don’t smoke that much grass, dude!

    1. Mxs says:

      I applaud you for your honesty sir.

  15. Four Electrics says:

    Without a randomized controlled trial, Elon’s claims are tough to prove. For all we know, drivers only turn on Autopilot (“testing it out”) under the safest conditions, and turn it off when conditions get worse or complicated maneuvers need to be performed. Correlation is not causation.
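
    To make that objection concrete, here is a toy Python comparison, with every number invented: even if Autopilot were exactly as safe as human drivers on each road type, a pooled per-mile accident rate would still look about 45% better, simply because Autopilot miles skew toward highways.

    # Accidents per million miles by road type (HYPOTHETICAL baseline rates).
    rate = {"highway": 1.0, "city": 3.0}

    # Hypothetical mileage mixes: Autopilot use skews heavily to highways.
    ap_miles = {"highway": 45_000_000, "city": 2_000_000}
    human_miles = {"highway": 50_000_000, "city": 50_000_000}

    def pooled_rate(miles):
        """Accidents per million miles, pooled across road types."""
        accidents = sum(miles[k] / 1e6 * rate[k] for k in miles)
        return accidents / (sum(miles.values()) / 1e6)

    # Same per-road-type safety, very different pooled numbers:
    print(f"Human pooled rate:     {pooled_rate(human_miles):.2f} per million miles")
    print(f"Autopilot pooled rate: {pooled_rate(ap_miles):.2f} per million miles")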

    1. Mxs says:

      Your comment will not fly here much. Tesla fanboys believe anything Elon posts on his Twitter, you understand that, right?

    2. Rightofthepeople says:

      If it were 100k miles of data, I might agree with you. Maybe even 500k miles. But if you trust the 47-million-mile number, that would seem pretty randomized. Just think about it for a moment: how difficult would it be to intentionally rig that data – thousands of drivers logging 47 million miles on Autopilot? The sheer magnitude of the sample would seem to indicate the data is fairly randomized.

  16. James says:

    Have you noticed that autonomous driving features have become the darling of the auto industry? How media outlets swoon at every mention of self-driving cars?

    Do you know why this is so wonderful for legacy carmakers to talk about, tease about and spread stories about their test cars driving around by themselves on their proving grounds?

    ANSWER: Because it gets you and me, your neighbor, the folks standing around the water cooler, young kids, old kids, and the media off the focus on electric drive! You see, adding electronic features is a whole lot cheaper than lithium batteries – and cheaper than losing that juicy money they take in every single day in service and parts for those oily, greasy, old-fashioned explode-and-boom ICE cars, SUVs, and trucks they make their lifeblood on.