Elon Musk’s Decision To Call Autopilot 2.0 “Full Self-Driving” Led To Resignation Of Autopilot Director?

2 months ago by Domenick Yoney

Tesla Autopilot 2.0 – The Smoothness Is Coming

Report claims unrest in Tesla engineering ranks

At Tesla, Elon Musk is pushing hard for vehicle autonomy. He has famously said that the cars produced since last October (2016) have all the equipment necessary for self-driving, and just need the software to catch up. If this report by the Wall Street Journal is to be believed, however, his mouth may be writing cheques his tech can’t yet cash. It attempts to make the case that engineers are in such disagreement with Musk’s assertions regarding autonomy that they’ve quit.

Sterling Anderson, former Tesla Director of Autopilot Programs

According to the paper, Sterling Anderson, Tesla’s Autopilot director at the time of the complete hardware suite announcement, said the decision to claim eventual full self-driving capability came from Mr. Musk. While it doesn’t say whether this supposed conflict was explicitly the cause, it does point out that Anderson resigned just two months later, along with a couple of other engineers. Anderson then co-founded his own self-driving technology company, Aurora Innovation, with Chris Urmson, who had headed up an autonomous driving program at Google.

His replacement, Chris Lattner, lasted only six months in the position, tweeting “Turns out that Tesla isn’t a good fit for me after all” upon his departure. Lattner’s replacement is Andrej Karpathy, who has an extensive background in AI.

For its part, Tesla says the churn in its Autopilot team is down to intense competition for professionals in this niche; it has hired 85 people for the department since the beginning of 2016. It also points out that NHTSA found its cars saw a 40 percent reduction in crashes after it implemented the auto-steering function.

Full autonomy is a high-priority goal being sought by every automaker, and while Tesla is certainly pushing an aggressive schedule (an attempt to cross the breadth of the country in fully autonomous mode is in the works for near the end of the year), it’s not alone. Ford intends to have full autonomy in commercial operation in 2021. In June, GM CEO Mary Barra said her company is the only one currently with “the unique and necessary combination of technology, engineering and manufacturing ability to build autonomous vehicles at scale,” though a target date for sales hasn’t yet been publicized. They are building them, though.

We expect we’ll hear more on the topic of autonomy from the California automaker on September 28th, when the company reveals its Semi truck for the first time. Trucking is just one of the many industries set to be disrupted by autonomous vehicles, and no doubt the tech is as central to this vehicle as to the others adorned with the Tesla “T.”

Source: Wall Street Journal


67 responses to "Elon Musk’s Decision To Call Autopilot 2.0 “Full Self-Driving” Led To Resignation Of Autopilot Director?"

  1. ModernMarvelFan says:

    None of them is full autonomy. All of them are trying to “market” it as autonomy, or as the best yet. The sad part is that some people will abuse it, which will cause deaths that would otherwise be avoided.

    We are quickly approaching the “valley of death” phase of self-driving development.

    1. Texas FFE says:

      This is unsupported rhetoric. Many of the basic autonomous features, like Adaptive Cruise Control and Automatic Braking, are already preventing accidents and saving lives. Autonomous features are primarily safety features and, just like all automotive safety features, are bound to meet resistance and negative rhetoric from the public until they are proven.

      1. Four Electrics says:

        Air France Flight 447 proves that there is, in fact, a valley of death between safety features and full autonomy. The roughly 10x higher per-mile fatality rate of Tesla Autopilot 1.0 vs non-AP Teslas proves this point.

        1. Pushmi-Pullyu says:

          ROTFL!!
          😆 😆 😆

          WOW! I think 4E is trying for an Olympic medal in FUD!

          Reality check: The NHTSA says that Tesla cars with Autopilot + Autosteer installed (just installed, mind you… not necessarily operating!) have a nearly 40% lower accident rate than those without.

        2. Brett says:

          I think you have your ratio reversed. At the time of the first fatality, Tesla had 130 million miles driven on Autopilot, compared to the U.S. average of 93 million miles driven per fatality.

          As of October 2016 there were 220 million miles of Autopilot data. I haven’t seen an update on total miles travelled recently, but I would guess it’s north of 300 million miles, maybe higher, without additional reported fatalities on Autopilot.
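          Rough arithmetic behind those figures, as a quick Python sketch (the mileage numbers are the ones quoted above, and a single fatality is far too few events to pin down a rate):

          # Back-of-the-envelope comparison using the figures quoted in this thread.
          ap_miles = 130e6               # Autopilot miles at the time of the first fatality
          ap_fatalities = 1
          us_miles_per_fatality = 93e6   # U.S. average: one fatality per ~93M miles
          ap_miles_per_fatality = ap_miles / ap_fatalities
          print(f"Autopilot: one fatality per {ap_miles_per_fatality / 1e6:.0f}M miles")
          print(f"U.S. avg.: one fatality per {us_miles_per_fatality / 1e6:.0f}M miles")
          # 130M > 93M, i.e. the quoted Autopilot rate is lower, not 10x higher,
          # though one event is far too little data for a firm conclusion.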

          1. ModernMarvelFan says:

            “Tesla had 130 million miles of Autopilot driven, compared to the average 93 million miles driven per fatality.”

            That is called data spin.

            Comparing a single model that has a feature against an “overall” market stat is spinning the data.

            IIHS studies also show that there are various models with ZERO deaths. So, if anyone cherry-picks those particular models to compare against the overall death rate, it would also show a lower rate.

            Does that mean those cars without Autopilot are even safer than a Tesla with Autopilot?

            1. Pushmi-Pullyu says:

              No. What it means is that if you have only a single data point, or even worse none at all, then you don’t have a statistically valid sample size.

              It would be far better if we had all the accident statistics to look at and try to analyze, not just the extremely small number of fatal ones.
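              To put a number on how little one data point tells you, here is a minimal Python sketch of the uncertainty around a rate estimated from a single event, using an exact Poisson interval (mileage figure as quoted above):

              from scipy.stats import chi2
              # Exact (Garwood) 95% confidence interval for a Poisson count:
              # with k observed events, the true mean lies in [lo, hi].
              k = 1           # one Autopilot fatality observed
              miles = 130e6   # over ~130M Autopilot miles
              lo = 0.5 * chi2.ppf(0.025, 2 * k)       # ~0.025 events
              hi = 0.5 * chi2.ppf(0.975, 2 * k + 2)   # ~5.57 events
              print(f"95% CI: {lo / miles * 1e8:.2f} to {hi / miles * 1e8:.2f} fatalities per 100M miles")
              # The interval spans two orders of magnitude and easily contains the
              # U.S. average (~1.08 per 100M miles): one event cannot distinguish
              # "safer than average" from "much worse".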

              1. ModernMarvelFan says:

                “It would be far better if we had all the accident statistics to look at and try to analyze, not just the extremely small number of fatal ones.”

                But you did quote the fatality rate per million miles, and IIHS shows that there are plenty of models that had ZERO FATALITIES.

                It shows that picking any single model (such as the Model S) to compare against the overall fatality rate is a FALSE narrative.

                1. Pushmi-Pullyu says:

                  “But you did quote the fatality rate per million miles…”

                  No, dude, I didn’t. Look again.

        3. CCIE says:

          You should read a little about AF447 before using it as an example.

          It crashed due to pilot error. After an airspeed indicator issue, the pilots kept the plane in a stall all the way from flight altitude until it crashed into the ocean. They did so with the stall horn and verbal warnings blaring all the way down.

          If they had let go of the control sticks the computer would have corrected and the plane would have been fine.

          1. speculawyer says:

            Wasn’t the pitot tube plugged up?

            I wonder if they can make a decent synthetic airspeed now with GPS & weather satellites to serve as a back-up for pitot tubes.

            1. CCIE says:

              2 of 3 pitot tubes clogged, causing the autopilot to disengage and the aircraft to enter “alternate law” control mode. The pilots took over and stalled the aircraft. Eventually the pitot tubes unclogged, providing accurate airspeed. But, the pilots ignored it and kept the aircraft stalled all the way down. Crazy.

      2. jakaracman says:

        The problem is that Musk is misleading customers, and that can cause them to misuse AP and get killed.
        No one should be allowed to call assistance technologies self-driving (and the word Autopilot means just that to 99% of people) until the system absolutely, 100% reaches Level 5 autonomy. It should just be called an assistance system (as other manufacturers call it).
        Anything else is encouraging more and more people to get killed.

        1. Pushmi-Pullyu says:

          We will never have 100.00% safe self-driving cars. Waiting for 100.00% safe self-driving systems would be about the worst imaginable case of “The perfect driving out the good.” That’s as silly as claiming everybody should turn off their air bags because one of them occasionally explodes.

          Tesla Autopilot + Autosteer is saving lives right now, and that will only get better over time.

          “The thing to keep in mind is that self-driving cars don’t have to be perfect to change the world. They just have to be better than human beings.” — Deepak Ahuja, CFO of Tesla Inc.

          1. jakaracman says:

            I agree with you that assistance systems (not only Tesla’s) are saving lives right now, but still: it’s an assistance system, not an autopilot, so it shouldn’t be named so deceptively.

        2. Ken says:

          Totally agree with this. +1.

          I think people defending Tesla’s wording here are not seeing it clearly. They are comparing the fatality rate in a Tesla with this functionality against the average fatality rate across all cars. They should really compare it to an equally new car, for instance a Volvo or something. I am very sure that, given that comparison, the statistics would beat even Tesla’s.

          1. Pushmi-Pullyu says:

            Amazing how many people claim to be “sure” or even “very sure” of something, despite not having enough data to make a meaningful comparison.

            I think it’s a psychological thing. Usually when people assert they are “sure” of something, they’re indicating that they’re trying to convince themselves of something which they’re not sure about at all!

      3. Hans Hammermill says:

        Computer-driven driver-enhancement features of the past, like anti-lock brakes and stability control, undeniably save many, many lives too.

        We did not call them autonomous driving.

        Most arguments here are based on a false equivalence; a self-driving car is not the same thing as a safety aid.

        I’m not saying it is good or bad, just that it is not the same thing, so one cannot project the results as if it were.

    2. Nick says:

      You’ll know full autonomy when they can drive without a steering wheel or other controls and no data link.

      1. fotomoto says:

        I believe a data link will always be necessary, both for communication between vehicles and for things not even developed yet. Like a lot of things in life, this could be its strongest attribute yet also its Achilles heel.

        IIRC, the definition of true Level 5 is that no physical vehicle controls are needed for operation.

      2. Mark C says:

        But I won’t buy one.

  2. Pushmi-Pullyu says:

    Of course I have no idea what led to the resignation (or termination) of Tesla’s recent “Autopilot director”, but it certainly does seem to me that Tesla (or Elon) is getting out over its skis in labeling Autopilot hardware 2.0 as sufficient for full autonomy.

    Sorry, Elon, but depending on cameras to see every direction but straight ahead ain’t gonna cut it. As I’ve been saying for months if not years, you need 360° active scanning for reliable full autonomy. And that scanning probably needs to extend out to 150-200 yards in the forward arc, and out to 100 yards in all other directions.

    And I’m far from the only person saying this.

    1. Roy_H says:

      Are you talking about LIDAR? I don’t agree that LIDAR is necessary; however, if cameras are used, I believe binocular vision is necessary (preferably two pairs forward, plus one pair at the left and right sides of the windshield) to get proper depth perception. You cannot gauge distance with a single camera, and trying to measure distance with radar, then match that to an image, is very difficult at best.
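      For reference, the standard pinhole-stereo relation behind that depth-perception point, as a small Python sketch (the focal length and baseline here are illustrative, not any real camera’s specs):

      # Depth from a calibrated stereo pair (pinhole model): Z = f * B / d,
      # where f = focal length in pixels, B = baseline between the cameras,
      # and d = disparity (pixel shift of the same point between the images).
      def stereo_depth_m(f_px: float, baseline_m: float, disparity_px: float) -> float:
          return f_px * baseline_m / disparity_px

      f_px, baseline_m = 1000.0, 0.30  # illustrative numbers only
      for d in (30.0, 3.0, 1.0):
          print(f"disparity {d:4.0f} px -> depth {stereo_depth_m(f_px, baseline_m, d):6.1f} m")
      # Depth error blows up as disparity shrinks: at long range a one-pixel
      # disparity error shifts the estimate by tens of metres, which is the
      # weakness of stereo versus a direct range measurement.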

      1. Pushmi-Pullyu says:

        Widely used active sensor systems are lidar and radar. Tesla seems to be doing pretty well with radar, but it needs more radars pointing in different directions, or else a rotating radar on the roof of the car, the way other companies use rotating lidar scanners.

        The problem with using camera images is that you have to use software to interpret those images and recognize objects. The limitations of optical object recognition are well known in the software industry. It’s simply not a reliable process, period. That’s not a problem Tesla is going to solve, nor is it going to be solved using current computers.

        The human visual cortex is the result of hundreds of millions of years of evolution, and has far stronger and more sophisticated visual image processing power than any computer Tesla is going to install in a car within the next few years. That’s just the hardware limitation, never mind the software limitation.

        Even if Tesla could match the visual image processing power of the human brain, that process would still be using cameras, which have more or less the same limitations as the human eye: they can’t see in the dark, nor through fog.

        Active scanning gives positive returns from objects, allowing instant detection and range to target. It doesn’t need to rely on imperfect attempts by software to figure out what objects are shown in camera images. Active scanning also doesn’t care whether it’s light or dark outside, and depending on the frequency used can see through fog perfectly fine.

        Bottom line: Active scanning is a far more reliable, far more useful, and faster way for a computer system to be able to “see” real-world obstacles… moving or stationary.
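        The “positive return” point boils down to time-of-flight ranging: range falls straight out of the echo delay, with no image interpretation involved. A minimal Python sketch:

        C = 299_792_458.0  # speed of light, m/s
        # Radar/lidar range from the echo's round-trip time: r = c * t / 2.
        def tof_range_m(round_trip_s: float) -> float:
            return C * round_trip_s / 2.0

        print(f"{tof_range_m(1e-6):.0f} m")  # a return 1 microsecond after the pulse: ~150 m
        # The range is a direct physical measurement rather than an inference
        # from pixels, and it works the same in darkness.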

        1. speculawyer says:

          Tesla does have 360-degree coverage with ultrasonic sensors. Don’t know if that will be sufficient, but it’s more than just forward-facing radar.

          1. Pushmi-Pullyu says:

            The ultrasonic sensors are much too short-range to be of any use in avoiding accidents on the road. They are useful for detecting nearby obstacles when parking, and in the future may be useful to avoid hitting objects while maneuvering at low speed in a parking lot. But at any real difference in speed between the Tesla car and an approaching vehicle, they simply don’t allow enough reaction time for the car to brake or steer to avoid an accident.
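            The arithmetic bears that out: ultrasonic sensors reach only a few metres, so at real closing speeds the warning comes far too late. A quick Python sketch (the sensor range is an assumed ballpark figure, not a Tesla spec):

            sensor_range_m = 8.0  # assumed ballpark for automotive ultrasonics
            for closing_mph in (5, 30, 60):
                closing_ms = closing_mph * 0.44704  # mph -> m/s
                warning_s = sensor_range_m / closing_ms
                print(f"{closing_mph:2d} mph closing -> {warning_s:.2f} s of warning")
            # ~3.6 s at parking speeds, but ~0.3 s at 60 mph: far less than the
            # time needed to brake or steer around an obstacle.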

      2. Someone out there says:

        LIDAR might not, strictly speaking, be necessary, but using only cameras will unnecessarily make things much harder for you. There is a real chance that binocular vision miscalculates distances because of some optical illusion; the same happens to humans. With LIDAR you measure the actual distance directly instead of having to infer it with advanced calculations.

        1. Pushmi-Pullyu says:

          Well said, and thank you!

          Yes, I’ve often wondered how optical object recognition handles optical illusions. Better than the human brain? Or worse? Probably rather differently.

    2. Dave_the_braver says:

      Gee, I don’t have 360-degree vision, and certainly not 360-degree radar or lidar, and I manage driving just fine, so your supposition fails. Pontificating as to ‘what is required for FSD’ without definitive proof is speculation at best.

      1. Asak says:

        You do have 360 degree vision *capability*, it just requires using your mirrors or sometimes turning your head. Just because you can’t look in all directions *at once* doesn’t mean you can’t *look* in all directions.

        That’s different from what we’re talking about in a car.

      2. Pushmi-Pullyu says:

        “Gee, I don’t have 360 degree vision, and certainly not 360 degree radar or Lidar, and I manage driving just fine…”

        Sorry, but actually you do not “manage driving just fine”. Neither do I. We are only human**, and the goal here is to produce an autonomous driving system which is significantly safer than human drivers.

        **Actually I’m a two-headed llama, but for purposes of discussion here I pretend to be human. 😉

      3. Pushmi-Pullyu says:

        “Pontificating as to ‘what is required for FSD’ without definitive proof is speculation at best.”

        I have pretty definitive proof that Tesla’s Autopilot can’t tell the difference between a tree and an object that is in the path of the vehicle. Or at least it couldn’t when the video linked below was posted! Note how many dozens or hundreds of trees have green outlines in the right and left rearward camera views, and note that Tesla’s software paints those green outlines around things it “thinks” are “in the path” of the vehicle. I dunno about you, but I’ve never seen a tree run up from behind a car and jump out in front of it! 😀

        How much more proof do we need that using cameras instead of active scanning to the sides of the vehicle is wholly inadequate?

        * * * * *

        Dave, you would be correct to say that my assertions here are not “proof”, but nowhere did I claim to have proof. I’m making predictions based on what I know, including my experience as a computer programmer. I’m not offering “proof”, and sometimes my predictions have been wrong.

        However, neither are my assertions mere groundless speculation. They are educated guesses based on the publicly available evidence.

        If you think you can make a case equally good or better than mine with a different scenario, then by all means write up yours and post it for discussion!

        1. Doggydogworld says:

          Why would the car think objects in the rear view cameras are in its path?

          The front camera seems to distinguish between in-path and out-of-path objects pretty well. It looks like the rear cameras just show everything. I wouldn’t hang my hat on that.

  3. speculawyer says:

    Elon is getting too far over his skis and better watch his words. If he’s not careful, he’s gonna get hit by shareholder lawsuits, consumer class action lawsuits, tort lawsuits, etc.

    Keep working on it but don’t say it does more than it really can do.

  4. (⌐■_■) Trollnonymous says:

    Meh.
    If there was a version of the M3 without AP I would order that one.

    1. 2013VOLT says:

      Agree completely, I don’t need it.

    2. stimpy says:

      AP is an option, so go nuts.

      1. (⌐■_■) Trollnonymous says:

        No.

        https://www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware

        “all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. Eight surround cameras provide 360 degree visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength, capable of seeing through heavy rain, fog, dust and even the car ahead.”

        1. speculawyer says:

          So? It has sensors, but AP isn’t activated unless you pay $5k more. Sensors are not that expensive, so they put them in every car. And if they get it working well, you can change your mind and activate it.

          1. Priusmaniac says:

            Sensors are not that expensive, but at the same time there is no money for a driver display?

            1. speculawyer says:

              There is a driver display…it just isn’t where you’d like it I guess.

              The interior is greatly simplified because that is an area that usually requires a lot of human assembly. Simplifying the interior simplifies assembly.

      2. Unplugged says:

        “AP is an option, so go nuts.”

        And it’s a $5,000 option at that.

        1. Priusmaniac says:

          Since you don’t take it, why bother about the price? $5,000, $50,000, or $5 million, it makes no difference.

  5. SparkEV says:

    If it’ll let me sleep while creeping along in one lane of LA freeway traffic (e.g. the 5 FWY) and wake me up if the speed picks up or there’s a difficult situation it can’t handle, I’ll call that “full self-driving”. I think we’re practically there technically, but not yet in regulatory terms.

    If it can drive under all conditions, that’s “fully autonomous”. It’s nice to have, but not really necessary for me.
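    A toy Python sketch of the wake-the-driver rule described above (the thresholds are invented for illustration; no shipping system works exactly this way):

    WAKE_SPEED_MPH = 25.0  # invented threshold: traffic is moving again
    MIN_CONFIDENCE = 0.95  # invented threshold: system confidence floor
    def should_wake_driver(speed_mph: float, confidence: float) -> bool:
        return speed_mph > WAKE_SPEED_MPH or confidence < MIN_CONFIDENCE

    print(should_wake_driver(speed_mph=8.0, confidence=0.99))   # False: keep creeping
    print(should_wake_driver(speed_mph=40.0, confidence=0.99))  # True: hand back control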

  6. abc123 says:

    Can anyone with a Tesla report how autonomous driving works in this condition:
    raining and foggy at night with lane markers faded or not visible.

    Will the car still drive itself?

    What about this situation:
    Autopilot approaches an uncontrolled railway crossing. A train is approaching as well. At their current speeds, the train and car will intersect, causing an accident. Any human driver would either slow down or speed up to avoid an accident, out of anticipation. What would Autopilot do?

    1. John Ray says:

      Google Tesla Motors Club and go to the forums. There you can get real experiences and opinions from actual Tesla owners regarding AP1, AP2, etc. It makes for some interesting reading.

    2. Pushmi-Pullyu says:

      “Autopilot approaches an uncontrolled railway crossing. A train is approaching as well. At their current speeds, the train and car will intersect, causing an accident. Any human driver would either slow down or speed up to avoid an accident, out of anticipation. What would Autopilot do?”

      This would be more or less the same situation as approaching any 4-way intersection. The fact that the approaching vehicle is a train instead of a car shouldn’t matter much as far as how Autopilot should react, except that Autopilot should be programmed to treat a vehicle approaching from the side at a train crossing as always having right-of-way (it won’t stop).

      But how many train crossings these days exist without at least red lights to warn of a train approaching? Autopilot should react to the red light and stop, without needing to detect the approaching train.

      Unlike human drivers, Autopilot won’t stupidly try to outrun the train!
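      One way to frame the scenario in code: compare when each vehicle occupies the crossing and flag a conflict if the occupancy windows overlap. A toy Python sketch, with all distances and speeds invented for illustration:

      # Time window during which a vehicle occupies the crossing point.
      def occupancy_window(distance_m: float, speed_ms: float, length_m: float):
          t_enter = distance_m / speed_ms
          return t_enter, t_enter + length_m / speed_ms

      car = occupancy_window(distance_m=340, speed_ms=20, length_m=5)      # ~17.0-17.3 s
      train = occupancy_window(distance_m=400, speed_ms=25, length_m=150)  # ~16.0-22.0 s
      conflict = car[0] < train[1] and train[0] < car[1]
      print(f"car {car}, train {train}, conflict: {conflict}")  # True
      # A planner resolves the conflict by braking (shifting the car's window
      # later); as noted above, it should never try to outrun the train.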

      1. cmg186 says:

        Here in rural Ontario, there are many railway crossings marked with an unlighted ‘X’. I think it’s actually a good question.

        1. Asak says:

          There are almost guaranteed to be unexpected problems with autonomous driving, whether it’s from Tesla or from other car manufacturers. Chances are at least one or two people will end up being killed because of them.

          In the long run autonomous driving will make the roads safer (if for no other reason than that most drivers are bad), but in the short run I wouldn’t want to be a beta tester of these systems.

          1. Pushmi-Pullyu says:

            One person already has been killed.

            But this is the same argument that people used to make against wearing seat belts. “I won’t wear them because the seat belt might get jammed and trap you in a burning car!”

            Well, yeah, it might, but the odds are much, much better that it will save your life in an accident. And the odds are already much better that Autopilot will avoid a serious accident than be the primary cause of one. And those odds will only continue to improve as the functionality of Autopilot + AutoSteer improves.

  7. CDAVIS says:

    This article’s title is:

    Elon Musk’s Decision To Call Autopilot 2.0 “Full Self-Driving” Led To Resignation Of Autopilot Director?
    ——–

    … but the OP provides nothing in the article body to support the title’s assertion that Tesla’s Autopilot Director resigned as a direct result of Musk’s decision to call Autopilot “Full Self-Driving”. The nearest thing the OP provides is:

    “While it doesn’t say whether this supposed conflict was explicitly the cause, it does point out that Anderson resigned just two months later…”

    A much more truthful article title would have been:

    “It’s Not Known If Elon Musk’s Decision To Call Autopilot 2.0 ‘Full Self-Driving’ Led To Resignation Of Autopilot Director”

    I’m a long-time big fan of InsideEVs but disappointed to see this type of misleading editorial increasingly creep in, in an attempt to “make news”.

    1. CDAVIS says:

      The added “?” at the end of the article’s title is a positive step … also adding “Did” at the beginning would be a further improvement.

    2. Domenick Yoney says:

      The headline was written by someone else before the article was written, and I wrote the piece according to what was in the source story, leading to a bit of a disconnect between the two.

      If I notice a similar disparity in the future, I’ll bring it up with the final editor.

      1. Dave_the_braver says:

        Please do. As it stands it is just click-bait and reflects poorly on InsideEVs.

      2. Pushmi-Pullyu says:

        Mr. Yoney:

        Your article has numerous qualifying statements that make it clear you are not asserting this is what happened, and the headline similarly makes it clear your article is questioning if this happened… not stating it did.

        I don’t think you need change a thing in that regard, and in fact I congratulate you for putting in stronger qualifying wording than is normal for this sort of reporting. The criticisms pointed at you above are completely unjustified. I wish more online articles were written to your high standards, sir!

        1. Domenick Yoney says:

          Mr. Llama-thingy,

          Thank you.

          p.s. Feel free to call me Domenick. 🙂

      3. CDAVIS says:

        @Domenick Yoney, Thanks for taking time to provide an explanation.

  8. BillT says:

    When I look at all the bulky hardware Google and everyone else *except* Tesla attaches to their semi-autonomous/autonomous cars, I have to wonder whether they know something Tesla doesn’t, or vice versa, in terms of the hardware required for full autonomy. That being said, I am really *counting* on autonomous vehicles to drive me around within 20 years, and as a cyclist I would trust them more than the bag of salty water distracted by a smartphone piloting today’s vehicles. So, bring ’em on ASAP.

    1. Pushmi-Pullyu says:

      I can certainly see why Tesla is resistant to putting a dome on top of the car, which is the position for proper mounting of a 360° scanner. I understand why they are resistant to how that will affect both the aero drag and the styling.

      But I don’t see that there is any choice in the matter. It’s simple geometry: a scanner should be mounted as high as possible, for the best position to “see” over intervening vehicles and other obstacles.
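      To put numbers on that geometry, here is a small Python sketch using flat-ground similar triangles (all heights and distances are made up for illustration):

      # An obstacle of height h_o at distance d hides the ground behind it out to
      # D_shadow = h_s * d / (h_s - h_o) for a sensor at height h_s; a higher
      # mounting point shrinks that blind zone.
      def ground_shadow_end_m(h_sensor: float, h_obstacle: float, d_obstacle: float) -> float:
          if h_sensor <= h_obstacle:
              return float("inf")  # never sees the ground behind the obstacle
          return h_sensor * d_obstacle / (h_sensor - h_obstacle)

      # A 1.5 m sedan 20 m ahead, seen from bumper-, windshield- and roof-height sensors:
      for h in (1.2, 1.7, 2.0):
          print(f"sensor at {h} m: blind zone ends {ground_shadow_end_m(h, 1.5, 20):.0f} m out")
      # 1.2 m -> inf, 1.7 m -> 170 m, 2.0 m -> 80 m: height buys visibility.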

      Of course I could be wrong, but my prediction is that Tesla will eventually have to give in and mount an active 360° scanner in a dome (or roll bar) on top of the car. Not necessarily a rotating one; multiple fixed-position solid-state radars pointed in several directions may do as well as a rotating scanner. And I won’t consider Tesla serious about full autonomy until it starts doing that.

      Again, just my opinion.

      1. Priusmaniac says:

        I think Elon is right that cameras are sufficient for a self-driving car, but I also think that the level of AI involved is much higher than he anticipates, and that the AI needed will be close to or at sentient level. In other words, a true, complete, human-level artificial intelligence. It also means self-driving research is actually human-level AI research, which rather contradicts his “summoning the demon” warnings.

      2. Dave_the_braver says:

        I think you are wrong. I suspect that the 360-degree cameras with supporting radar are sufficient for FSD. This doesn’t mean that multiple body-mounted lidars might not be added as their price and performance improve; redundancy is a good thing.

        1. Pushmi-Pullyu says:

          See my post above linking to a Tesla demo video where Autopilot confuses trees behind the car with objects “in the path” of the car. And not just a few times, but dozens or hundreds of times in a drive of less than 10 minutes.

          The confusion comes only where Autopilot is using cameras to do the “seeing”. Autopilot’s radar is not confused in this manner.

          I’m not just stating an opinion about cameras and optical object recognition being inadequate; I’m stating a fact, at least insofar as the state of Autopilot development when that video was released.

          Now, of course you can argue that Tesla will manage to improve that to the point that it’s actually reliable. But since that has been a goal of a lot of robotics researchers, and none of them have succeeded despite decades of work, I am rather skeptical that Tesla can do so within the next couple of years.

  9. Chris O says:

    Intel recently acquired Mobileye for $15 billion. That’s 15 billion reasons right there to leave Tesla, start one’s own company, and see how many billions it will sell for at some point.

    This tech is pure gold, and greed, rather than an “oh no, Elon called it ‘full self-driving’, how could I possibly live with that,” is the more likely motivator.

  10. JeremyK says:

    I simply can’t imagine purchasing an option that isn’t yet functional. Let’s all remember that a car is a depreciating asset. Every day that you own that car without being able to use a feature is another day of money going down the drain.

    Assuming that “full autonomy” is EVENTUALLY made functional, there will be better, less expensive hardware on the market from other OEMs. The high risk of this embedded tech being obsolete by the time Tesla fully validates it makes it a very bad deal for the consumer.

    1. Pushmi-Pullyu says:

      ^^ this.
