GM Autonomous Director Says Tesla Can’t Reach Level 5

1 month ago by Steven Loveday

This isn’t the first time Tesla’s Autopilot and/or Full Self-Driving hardware has been questioned, but now that GM is pushing forward with its own autonomous technology, its director of autonomous vehicle integration doesn’t believe Tesla can pull it off.

First of all, GM’s Scott Miller thinks that Level 5 autonomy is about 15 years off. The automaker just launched its Super Cruise system on the Cadillac CT6 – and it’s only at Level 2, on par with Tesla’s current Autopilot 2.0 system.

GM also has an entire fleet of self-driving Chevy Bolts currently testing in San Francisco. For his part, Tesla CEO Elon Musk said back in April that Tesla’s Full Self-Driving capability is about two years off.

Tesla Autopilot

Miller recently spoke with the Australian press about Musk’s self-driving mission: the plan to send a car coast-to-coast across the U.S., from a parking lot in California to a parking lot in New York, with no driver intervention (although there will be a driver in the car, just in case). Miller said (via Teslarati):

“The level of technology and knowing what it takes to do the mission, to say you can be a full level five with just cameras and radars is not physically possible.”

GM’s autonomous director simply doesn’t believe that Tesla can hit Level 5, as defined by the SAE, without the right sensors, computing package, and LiDAR. He continued:

“I think you need the right sensors and right computing package to do it. Think about it, we have LIDAR, radar, and cameras on this. The reason we have that type of sensor package is that we think you need not be deeply integrated in to be level five, you should have redundancy.”

“Do you really want to trust just one sensor measuring the speed of the car coming out of an intersection before you pull out? I think you need some confirmation. So, radar and LIDAR do a good job at measuring object speed, cameras do a great job at identifying objects. So, you can use the right sensor images to give you confidence in what you’re seeing, which I think is important if you’re going to put this technology out for general consumption.”

GM’s Self-Driving Chevy Bolt (Photo by Steve Fecht for General Motors)

“Could you do it with less and be less robust? Probably. But could you do it with what’s in a current Tesla Model S? I don’t think so.”

The SAE defines Level 5 as:

 “the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.”

Many others have made the same argument against Tesla’s self-driving sensor suite. Most in the industry agree that LiDAR is necessary to achieve success. Musk has pushed back on repeated occasions. He believes that cameras are the way to go. He said at TED2017:

“You can absolutely be superhuman with just cameras. You could probably do it 10 times better than a human with just cameras.”

“November or December of this year, we should be able to go all the way from a parking lot in California to a parking lot in New York with no controls touched in the entire journey.”

We all know that Musk tends to be very forward-thinking and extremely confident about his ideas and his companies’ successes. Though there have been issues in the past with regard to timelines, he has made a habit of surprising people with his innovations and ability to achieve the unachievable. Will an autonomous Tesla vehicle still be crossing the country this year? If so, can it make the whole trip without driver intervention? Do you think Tesla can pull this off with the current hardware, or will the automaker eventually have to up its game?

Source: Teslarati

139 responses to "GM Autonomous Director Says Tesla Can’t Reach Level 5"

  1. justanotherguy50 says:

    Musk overpromising & eventually underdelivering? That is unheard of! /s

    Seriously though, thousands of human lives are at stake here. I prefer the one moving cautiously.

    15 years though, GM? Hopefully Nissan splits the difference & gets a solid system available sooner.

    1. pjwood1 says:

      This isn’t an altruistic race. The first to Level 5, with patents, has only to buy the data that says it’s also the safest to mandate. Then, the money showers in.

      Soon there will only be NTSB and NHTSA to stand in the way of systems that clog and hold up traffic, just to be safe. States will be powerless (the Peters-Thune bill), and Silicon Valley will have achieved one of the biggest land grabs we never realized we were giving up. Our roads.

    2. L'amata says:

      GM hates Tesla for proving that EVs are better than ICE. So now GM is forced into competing and hates Tesla even more…

      1. dinhh68 says:

        Exactly!

      2. ModernMarvelFan says:

        That may be true. But it has NOTHING to do with the fact that Elon made a claim that is just simply wrong.

    3. Didier says:

      Lives at risk? Yes, you are right, and that’s why we need Level 5 cars ASAP!

    4. bogdan says:

      Thousands of human lives are at stake here because of the lvl 0 human drivers!

      1. Anti-Lord Kelvin says:

        On the other hand, I’m sure that there are, somewhere, people who have been driving cars for a long time (decades) without having any accident and… I’m pretty sure that none of them has a radar or a lidar in their head… Just saying.

    5. JeremyK says:

      15 years, but he’s talking about Level 5!

      Going to be at least that long before a computer-piloted car can negotiate 6″ of snow at night in a storm. That’s what we’re talking about here, right? In a situation like this, road signs and lane markings are completely obscured. A decent human driver would be able to negotiate these conditions.

      1. Steven says:

        There are a lot of people who can’t drive under those conditions.

      2. Dan says:

        Ha ha. I live in New England and we have driving bans under bad conditions. People in snowier parts of the world aren’t better in snow and ice. We’re just smart enough to not take stupid risks.

      3. Brian says:

        Who drives in 6 inches of snow in the middle of a storm at night? Lol! Not down south!

      4. Pushmi-Pullyu says:

        “Going to be at least that long before a computer-piloted car can negotiate 6″ of snow at night in a storm.”

        One of the ways in which autonomous cars are going to be safer drivers than humans is to refuse to proceed in dangerous situations.

        Just because some people are willing to drive under the conditions you describe, doesn’t necessarily mean that an autonomous car should do so.

    6. Lee Briggs says:

      justanotherguy50- Mainland China is going full bore to be the first with driverless cars. The government has banned the sale of gasoline powered new cars and is working with Nokia, NVIDIA and other noteworthy tech giants and is today building out the infrastructure required to make driverless cars possible. China expects to have fully driverless cars on their roads within the next 2 years.

      China is no longer trying to catch up with the rest of the world in technology, it will very soon be leading the world.

      Needless to say, I would rather the USA be the leader, but as usual our country is caught up in tugs of war between technology and bureaucracy. If our country’s leaders were really serious about leading the rest of the world in technology, it would be busy financing the technology infrastructure exactly like it did in the 1950s when it built out the Interstate Highways. There is no difference today. Of course the contracts would all be awarded to private companies, just like with the highway system, NASA/space exploration/hydroelectric generation/nuclear energy/public schools… This should be a no-brainer.

  2. unlucky says:

    I also believe that you can’t do it without LiDAR right now. Or particularly soon. I’m not quite certain even having LiDAR gets you to full autonomy either.

    And getting a car to drive cross country still doesn’t mean you’re level 5, it just means you found a route that doesn’t require level 5 for the car to complete it.

    Just look at the simple problem of having a car park itself (something Musk said he would deliver a few years ago). If I stop my car and get out at the door of my office, then the car has to go park itself. But the problem is it doesn’t know which spaces are okay to park in. Some are for the business I work for, some are for others. The lots are joined, but we can’t all use all the spaces.

    So the car has to know which spots are okay and which are off limits. And this problem is replicated tens or hundreds of thousands of times across the US. And the world is bigger than just the US.

    The progress we’ve made so far is incredible. But there’s so much more still to do. And Musk continually undersells the difficulty. I personally think that’s part of why he’s had to replace his head of driving automation several times recently, unreasonable expectations.

    1. Mikael says:

      You actually think that mapping parking spots and assigning permissions to them is hard?

      If there is something that is easy, then it’s building a database of where you’re allowed to park.
      Most of it can be done with learning by doing and ghost mode. If user input, like a parking-wiki, is needed, then there are plenty of people who will add every possible parking spot for fun.
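
      For illustration only (the spot IDs, the tagging scheme, and the data source are invented here, and this is not any real Tesla or mapping API), a per-spot permission check could be as simple as:

      ```python
      # Hypothetical sketch of a crowd-sourced parking-permission lookup.
      from dataclasses import dataclass

      @dataclass
      class ParkingSpot:
          spot_id: str
          lat: float
          lon: float
          restriction: str  # e.g. "public", "permit:acme_corp", "disabled_only"

      def may_park(spot: ParkingSpot, car_permits: set) -> bool:
          """Return True if the car's permits satisfy the spot's restriction."""
          if spot.restriction == "public":
              return True
          if spot.restriction == "disabled_only":
              return "disabled_placard" in car_permits
          if spot.restriction.startswith("permit:"):
              return spot.restriction.split(":", 1)[1] in car_permits
          return False  # unknown restriction: be conservative

      # Example: a car carrying only an employer permit.
      spots = [
          ParkingSpot("lot7-041", 37.39, -122.06, "permit:acme_corp"),
          ParkingSpot("lot7-042", 37.39, -122.06, "disabled_only"),
      ]
      print([s.spot_id for s in spots if may_park(s, {"acme_corp"})])  # ['lot7-041']
      ```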

      1. Magnus H says:

        And when the mapping and the terrain do not match? Parking on the grass where there used to be a spot, parking where a newly erected sign says “disabled parking only”?

        1. Pushmi-Pullyu says:

          A level 5 autonomous vehicle certainly should be able to tell the difference between grass and pavement, and avoid driving on the grass except in an emergency where it’s necessary to avoid an accident, or when specifically directed to do so by a passenger.

          Common sense says that the car will be programmed to avoid parking in areas where the car’s SLAM (real-time scanning/mapping of its environment using the car’s sensors) doesn’t match existing maps, unless a passenger (or a central traffic control computer) overrides the prohibition. In fact, I would expect cars to be programmed to avoid driving in areas where its SLAM doesn’t match existing maps, as much as possible.
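
          A minimal sketch of that map-consistency gate, assuming a simple occupancy-grid comparison (the grid representation and the 5% threshold are invented for illustration, not any vendor’s actual logic):

          ```python
          # Illustrative only: compare the live scan against the stored map and
          # refuse to park if too much of the scene has changed.
          import numpy as np

          def map_mismatch(live: np.ndarray, stored: np.ndarray) -> float:
              """Fraction of occupancy cells where the live scan disagrees with the map."""
              return float(np.mean(live != stored))

          def parking_permitted(live, stored, override=False, threshold=0.05) -> bool:
              """Allow parking only where the scene matches the map, unless overridden."""
              return override or map_mismatch(live, stored) <= threshold

          stored = np.zeros((100, 100), dtype=bool)
          repainted_lot = stored.copy()
          repainted_lot[:30, :] = True                # 30% of the scene has changed
          print(parking_permitted(repainted_lot, stored))                 # False
          print(parking_permitted(repainted_lot, stored, override=True))  # True
          ```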

          1. Viking79 says:

            Parking in the grass at many parks and events is allowed when there is overflow for the event. How would the car handle this?

            It is a solvable problem, but a problem nonetheless.

            1. Josh Bryant says:

              The car drops people off at the event, then leaves. No need for overflow parking.

              1. mxs says:

                In 2050 …. what do you suggest for 2025-2050 when people still buy cars to own, not to hire for service?

                1. Pushmi-Pullyu says:

                  Josh is right. Autonomous cars will eliminate virtually all need for any overflow parking. So what if the car is privately owned? It’s still going to drive itself somewhere else to park after dropping off its passengers.

                  Too much “inside the box” thinking going on here.

        2. Pushmi-Pullyu says:

          “…a newly erected sign says ‘disabled parking only’?”

          Tesla’s semi-autonomous cars already reliably recognize traffic signs. If they can’t already recognize a “handicapped parking only” sign, they soon will.

          There will be many real challenges to achieving full Level 5 autonomy. This isn’t one of them; adding this functionality will be a fairly trivial programming job.

        3. Brian says:

          There will be no “disabled parking” in the future.

      2. Pushmi-Pullyu says:

        It surprises me that, with smartphone apps so commonplace, people commenting on the limitations of autonomous vehicles don’t realize that future self-driving cars will be equipped for a passenger to give them specific individualized instructions: possibly verbally, possibly using a smartphone app and/or the car’s big screen, possibly both.

        The situation that Unlucky describes could be handled by designating specific parking lots where the car should park, and the passenger should be able to designate certain stalls within that parking lot which are off limits.

        Another thing which people keep ignoring is the near-certainty that roads and parking lots will be redesigned to accommodate autonomous vehicles, just as roads were redesigned to accommodate motor vehicles. Perhaps in the future, individual parking stalls will be equipped with small signs showing icons which the car can scan to tell it whether or not it has permission to park there.

        Or — and I think this is more likely — there will be local traffic control computers overseeing all traffic in their area, and those computers will be equipped with info on every legal parking spot in their area. Autonomous cars will be in constant communication with those computers, which will direct cars looking for a parking place to the nearest permissible spot.

        1. Magnus H says:

          Well, with those limitations you don’t have an autonomous car anymore, but a remote-controlled one. Sure, you could prepare a specific area and let the car navigate to the right spot by telling it the coordinates.

          It’s very far from Level 5.

          1. Pushmi-Pullyu says:

            No. You’re confusing oversight with direct, detailed control of individual cars.

            Think of the traffic control computer as a cowboy herding cattle on the trail. The cowboy just nudges a line of cows in the right direction, and forces strays to return to the herd; he doesn’t take over a cow’s brain and force it to walk or run by moving each leg individually.

            Similarly, a local traffic control computer may “tell” a car “go to this parking lot and park in this particular stall”. It won’t steer, accelerate, and brake the car. The car will still maneuver within the parking lot using its internal self-driving system, to drive to and park in the designated stall.

            Trying to use a stationary computer to run all autonomous cars in an area by remote control would be quite dangerous, especially to pedestrians and to human-driven cars, which would not be communicating with the central computer, so it would not be aware of their location. The central computer doesn’t have the real-time scanning ability that the car’s sensors provide. The response time would be far slower trying to use one central computer to control everything, instead of a distributed system.

            For example, when a kid runs out in the street in front of a car, we’ll want the car to react in real-time as fast as possible. We won’t want the car to have to send its scan to the central computer, have the data wait in a queue until the central computer processed all the data packets in front of it, then build up a SLAM image of the car’s immediate surroundings, then recognize the danger of imminent collision, then send a “STOP” command to the car.

            We’ll want the car to detect the obstacle (the child) moving onto the road and stop itself immediately, without waiting for a response from a central traffic control computer.

            1. James P Heartney says:

              Anyone who’s ever done object-oriented programming will understand the concept of encapsulating functions at the appropriate levels of the hierarchy. Cars would be directed to spaces by the upper level AI, while the car’s own AI would handle the specifics of maneuvering into a space. This is pretty straightforward.
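
              As a toy sketch of that encapsulation (the class names and the assignment protocol are invented for illustration, not any real system):

              ```python
              # The director AI only assigns spaces; the car AI owns all maneuvering.

              class CarAI:
                  def park_in(self, lot_id: str, stall_id: str) -> None:
                      # The car plans its own path, watches for pedestrians and brakes
                      # itself; the upper level never issues steering or brake commands.
                      print(f"maneuvering into {lot_id}/{stall_id} with on-board sensors")

              class ParkingDirectorAI:
                  def __init__(self, free_stalls):
                      self.free_stalls = free_stalls   # lot_id -> list of open stalls

                  def assign(self, car: CarAI, lot_id: str) -> None:
                      stall = self.free_stalls[lot_id].pop()  # reserve a stall
                      car.park_in(lot_id, stall)              # delegate the driving

              ParkingDirectorAI({"lot7": ["041", "042"]}).assign(CarAI(), "lot7")
              ```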

              The potential fly in the ointment comes from having random vehicles talking to each other and to ground-based AIs. There are lots and lots of potentially nasty security issues in this; an AI could spoof the parking attendant AI and instruct the car to park on a flatbed, where it could then be stolen. Or malicious hackers could tell cars to double/triple etc. park in a space and create a jam. And on and on.

              The bottom line is there would need to be pretty bulletproof security and ID protocols in place if you are going to involve computers talking to computers with this.

              It might work better if there were computer-readable markings and symbols designating things like parking spaces and lanes and temporary detours etc. Not sure what these would look like – maybe like QR codes or something. They could be affixed to traffic cones to give the vehicle AIs information about a temporary lane created by a construction crew. Then you don’t need the AIs to be able to process human-readable signage.

              1. Pushmi-Pullyu says:

                Re assigning different levels of control according to hierarchy:

                Thank you! Well said, sir.

                Re not having autonomous cars rely on wireless interactivity:

                I don’t think it’s going to be practical to have every autonomous car act completely independently, without any coordination with others. My understanding is that some autonomous driving test cars are already being built to communicate with each other. This will facilitate, for example, assigning priority when approaching an intersection. Autonomous cars shouldn’t need stop signs or stop lights; they can just adjust their speed and spacing, interweaving traffic going in different directions. Of course, the poor human drivers will still need stop signs and stop lights.

                Also, perhaps more importantly, a computerized “overseer” control system will allow a higher level of control, re-routing traffic around accidents and detours, and in general facilitating smooth traffic control, avoiding traffic jams.

                Will there be a danger of a hacker disrupting or wresting control of a traffic “overseer”? Sure, but that doesn’t mean we won’t build and use such things. There is a very real danger of a criminal hacker getting access to a bank’s computer system, and using that to steal money. Such theft happens with some frequency. But we don’t stop using electronic transfer of funds, because the advantages far outweigh the potential dangers.

            2. Magnus H says:

              What you describe sounds very much like an automated guided load carrier in a factory, which also stops if it detects obstacles.

              Level 5 means you give the car the same prerequisites as you would a human: “Go to Mary’s place and park outside”. Not “Go to (x,y) and park in slot 27B/6”.

              1. Pushmi-Pullyu says:

                Re your last point:

                I don’t agree. In areas where parking spots are limited, there is a great advantage to having a local central traffic control system keep track of which parking slots are empty and which are full. No point in sending a car to a parking lot if there’s no place to park there. Having a centralized traffic control system oversee parking will allow individual parking spaces to be reserved for cars that have dropped off their passengers and need a place to park. (Of course, a human-driven car might mess up the system by “cutting in line” and parking there first!)

        2. bjrosen says:

          This is all about timelines. Musk thinks he will have a completely self-driving car in two years; he also thinks he will have a Mars colony in that time. Neither is going to happen. GM thinks it’s 15 years; is that excessively pessimistic? It’s hard to tell. If we look far enough out in the future, say 30 years, the problem is simpler. In 30 years even the jalopies will be smart, as will the streets, highways and parking lots. In that world cars will all be happily chatting with each other as well as the environment, so they will know where all the other cars are, which cars will arrive at an intersection when they do, which cars they can form a road train with, where they can park, and so forth.

          But there is a long interim period before that can happen. The first autonomous cars will have to rely entirely on their own sensors, they will have to contend with the unpredictable behavior of human drivers, and they will have to function in a legal system that was designed for human drivers, i.e. is it even legal for a car to drive itself without human intervention, and if it does, who’s at fault in an accident? The customs of the road, which are not the same as the rules of the road, are wildly different from city to city. There are places like Rome where everyone puts their foot down the instant the light changes; there are other places where everyone expects the last one or two cars to run a red light. Human drivers can figure these things out so they don’t get rear-ended in a place where everyone accelerates immediately or t-boned in a place where everyone runs the light.

          Getting autonomous cars to work completely independently will take lots of time. It can’t happen in two years; will it take as long as 15? I don’t know, but that doesn’t seem out of the ballpark.

          1. Cavaron says:

            Where does he say Mars Colony in two years? You discredit yourself with false quotes. He said: “first people on Mars in 2024, maybe later”.

            1. Anti-Lord Kelvin says:

              He also said “sort of aspirational dates,” but even if he put people on Mars in 2029, I think it would be OK. After all, the US administration had promised this for 1984, then 2005, then 2011, and now somewhere between 2033 and 2050, and has already spent almost $20 billion on a capsule that will transport its first humans in 2021 at the earliest, in a rocket that will have cost almost $40 billion by that time. And you would still have to spend more to get the SLS Block II to go to Mars at $1 billion per launch (ouch!), plus all the architecture to actually land a handful of people on Mars and all the hardware to bring them back to Earth (maybe $50 billion more… and 20 years more)!

          2. Pushmi-Pullyu says:

            “Musk thinks he will have a completely self driving car in two years… GM thinks it’s 15 years, is that excessively pessimistic?”

            It may not be overly pessimistic for what experts will call fully functional Level 5 autonomy. But I don’t think auto makers are going to wait for that level of perfection before putting autonomous driving systems into production for the general public to use.

            I’m reminded of the following joke:

            In the high school gym, all the girls in the class were lined up against one wall, and all the boys against the opposite wall. Then, every ten seconds, they walked toward each other until they were half the previous distance apart.

            A mathematician, a physicist, and an engineer were asked, “When will the girls and boys be close enough to kiss?”

            The mathematician said: “Never.”

            The physicist said: “In an infinite amount of time.”

            The engineer said: “Well… in about two minutes, they’ll be close enough for all practical purposes.”

            Heck, Tesla is already showing the advantages of putting even a partially developed system into use; other auto makers are going to follow, even if only reluctantly. If GM drags its heels on putting a less-than-full-Level-5 autonomous driving system into its cars, it’s going to find itself making cars that aren’t fully competitive.

      3. unlucky says:

        You’re going to do this with learning and ghost mode? There are hundreds of spots I can use and hundreds I can’t. I don’t park in most of them in any given year.

        You’re not going to get this by just watching which spots I slot into.

        And what if you did? You’re telling me in order to have automatic parking all I have to do is park my car in 300 different parking spots first to train it? Oh, and what if someone else parks their car in a spot I’m not allowed to use (handicapped, not an exec, or belonging to another company) and then my car parks there later because it learned from him?

        That’s a non-starter.

        I’m not saying none of this can be done, but it requires a huge database. And one we don’t have. And saying some joker will fill out the database for you for free is both to assume you’ll get free help you can’t count on and to assume that anyone who does put in data is putting it in correctly.

    2. Sébastien M says:

      When you drive, which senses do you use to approximate distance?

      Only vision is needed.

      1. Pushmi-Pullyu says:

        Evolution didn’t equip me (or you) with a lidar scanner. If it had, then I wouldn’t have to guess the distance; I would know it precisely.

        And the visual processing cortex of my brain isn’t likely to confuse a “brightly lit sky” with the side of a semi trailer painted white, either, as happened in the one fatal accident we know to have happened under control of Autopilot.

        It’s a mistake to think that, with self-driving cars, we should try to duplicate how humans perceive their environment and how they interact with it to drive a car. A self-driving car’s hardware and software should take advantage of hardware and software which is developed specifically for that task, not try to slavishly imitate human drivers.

        After all, the objective is to make self-driving cars that are much better and much safer drivers than humans, not try to precisely duplicate them!

        1. scott franco says:

          Humans, and other animals like us, use binocular vision and pure brain power to determine distance. Lidar and radar do not.

          Bats use echolocation to determine the same thing. Previously we believed bats to have inferior “sight”. It is a simple matter of shorter wavelength wins (higher resolution). However, experiments demonstrated that bats can resolve quite well. A bat can fly through bars and avoid cave roofs by millimeters. Ultrasonic waves are pretty good.

          1. Pushmi-Pullyu says:

            Ultrasonic echo location works very well for bats, because the rather short range at which it works is all they really need.

            Contrariwise, cars need to “see” reliably up to perhaps 100-200 yards, at least in the forward direction. Even to the sides, I’d think they would need to “see” reliably at least 50 yards, or perhaps a bit more, to have sufficient reaction time to avoid T-bone collisions.

            Bats don’t fly at 80 MPH, nor do they rush down the road head-on at 65+ MPH just feet away from cars going equally fast headed in the opposite direction. If they did, ultrasonic echo-location would be totally inadequate.

            Raptors (hunting birds) which attack by diving on their prey, sometimes even performing fly-by attacks on other birds in flight, have stereoscopic vision which is even superior to humans’. They would likely do even better with LIDAR, but nature hasn’t evolved lasers in any animal, at least not here on Earth!

          2. ModernMarvelFan says:

            “A bat can fly through bars and avoid cave roofs by millimeters. Ultrasonic waves are pretty good.”

            But somehow they can’t avoid netting that is designed to catch them… =)

      2. Viking79 says:

        Human vision has much better dynamic range than computer vision. Having other sensors to complement camera systems is only logical.

        Human vision is binocular with two cameras. This allows our brain to calculate depth using the two images. A car could also do this, but the processing requirements are large and might have camera alignment issues, etc.

        Lidar can generate a depth map of the scene much more easily, so it can do a much better job of threat assessment, and it can do so without the calibration required by binocular cameras.
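
        For what it’s worth, the underlying stereo relation is just depth = focal length × baseline / disparity; the expensive part is finding reliable pixel correspondences between the two images. A toy sketch with made-up camera numbers:

        ```python
        # Textbook pinhole-stereo depth from disparity. The focal length and
        # baseline below are invented for illustration, not real sensor values.

        def stereo_depth(disparity_px: float,
                         focal_length_px: float = 1000.0,
                         baseline_m: float = 0.30) -> float:
            """Depth in metres of a point seen with the given pixel disparity."""
            if disparity_px <= 0:
                raise ValueError("no disparity: point at infinity or bad match")
            return focal_length_px * baseline_m / disparity_px

        # 10 px of disparity -> 30 m away; 1 px -> 300 m away. A single pixel of
        # matching or alignment error swamps the estimate at long range, which is
        # exactly the calibration sensitivity mentioned above.
        print(stereo_depth(10.0))  # 30.0
        print(stereo_depth(1.0))   # 300.0
        ```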

        1. Magnus H says:

          5-10% of the population does not use both eyes for depth detection, but other clues.

          1. Viking79 says:

            Fair enough, and “other clues” include shadows, scale, and other methods, none of which is nearly as reliable as binocular vision, and both are much worse than Lidar.

          2. Pushmi-Pullyu says:

            Speaking as a member of the population which does not have fully functional depth perception, I can tell you that those other visual cues are only partially useful at figuring out distance using the human eye.

            Driving at night in the rain was always a nightmare for me. The only thing I had to judge the distance to an oncoming car was the distance between its headlights. Motorcycles, with their single headlight, and older Land Rovers, with their closely spaced headlights, made it impossible.

            The upshot was that I avoided driving at night when it was raining, as far as possible. This is not an adequate solution for a self-driving car!

      3. fotomoto says:

        Yet countless motorcyclists, bicyclists, and pedestrians are hit by car drivers who “didn’t see ’em”.

    3. Windbourne says:

      lidar is a VERY EXPENSIVE red herring.
      The fact is, that lidar can not see in increment weather, while radar can see all around.
      Combined with the cameras, and GPS, I think that Tesla is much further ahead than the idiots that scream that you MUST have lidar.

      1. ffbj says:

        Inclement. Otherwise I am in your camp.

        1. Doggydogworld says:

          Maybe he meant excrement weather, aka a sh*tstorm

        2. Windbourne says:

          fricking ‘smart’ phones!!!!

          though it is funny.

    4. Hart Ed says:

      There are a number of ways to do LADAR, and here is one that works from the typical camera position inside the car in front of the center rear view mirror.

      http://www.sensorsinc.com/gallery/videos/

      Click on: Laser Detection and Ranging – street view

      and the computational offset at: Laser Detection and Ranging – viewed from an overhead location

  3. trololo says:

    “Do you really want to trust just one sensor measuring the speed of the car coming out of an intersection before you pull out” >> This is pure FUD. As far as I know, Tesla is using more than 1 sensor.

    1. justanotherguy50 says:

      Correct me if I am wrong, but Tesla only has a forward facing radar. A car coming from the side may only have 1 camera sensor to look at it.

      1. Pushmi-Pullyu says:

        Yes, it seems fairly clear that a Tesla car has only a single long-distance radar scanner, and that faces only a narrow front arc. See diagram linked below. You can claim the cameras are a backup for the front view, but they are not an adequate backup, for reasons I’ve already explained.

        Waymo uses a belt-and-suspenders approach, with scanning lidar, radar, and cameras. Tesla might be able to get away with using only radar and cameras, if — and only if — they add radars which scan in all other directions. But it would be a mistake to think they can possibly make their system as safe without the triple redundancy that Waymo, and presumably GM, is using.

        In my opinion, it’s only a matter of time before Tesla adds LIDAR to its self-driving cars. That they are not already doing so is IMHO an indication of Elon Musk’s habit of ignoring what everyone tells him. Sometimes that’s an advantage, such as when he pursues the idea of landing a rocket on its tail to make it reusable, despite the great challenges to making that work.

        But here, Tesla is definitely swimming upstream.

        1. Windbourne says:

          LIDAR is a waste of money and time.
          It cannot see in inclement weather.
          And the forward radar with its 160 m range is plenty good, combined with the 250 m camera.

          Probably the one thing that would be useful would be to have radars on each side that sweep from 90° out to overlapping the forward radar. These would be plenty if they simply did 100 m, possibly even 50 m.

          1. Ben says:

            As I said multiple times, modern LIDAR with multiple beams is able to see in any weather where humans can see. The tech has developed and continues to develop, and there has been a massive increase in resolution. In the future there might be cameras with built-in distance recognition. High-quality geometric recognition combined with precise maps is the key to Level 5 autonomy.

            As an example, see Velodyne’s FAQ:
            http://velodynelidar.com/faq.html

            1. Doggydogworld says:

              Lidar cannot see through snow and rain as well as humans. There are two issues. First, a laser must travel to the object and back. That’s twice the distance, with twice as much opportunity to be blocked, as the light traveling from the object to the human eye. Second, the brain “fills in” missing data much better than Lidar signal processing can do.
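
              To put rough numbers on the first point (the 60% one-way transmittance below is an invented example, not a measured fog figure):

              ```python
              # Toy attenuation comparison: a passive camera only needs light to make
              # the object-to-sensor trip once; a lidar echo makes the trip twice.
              one_way_transmittance = 0.60          # invented example value

              camera_signal = one_way_transmittance       # one pass
              lidar_signal = one_way_transmittance ** 2   # out and back

              print(camera_signal)  # 0.6
              print(lidar_signal)   # 0.36 -- the round trip roughly squares the loss
              ```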

              1. Spider-Dan says:

                I don’t think distance is an issue when talking about the speed of light in the context of driving a car.

                1. Windbourne says:

                  He was not talking about time, but about blockage. Dust, rain, snow, etc. all block light. That is the advantage of RADAR: it sees through any of this fairly decently. LIDAR with visual cameras will mean that when driving on the edge, you will either not see a number of things, or will see them oddly due to chunks of light being dropped.
                  This is kind of like listening to analog vs. GOOD digital radio (the garbage that we have in America is HORRIBLE digital). Analog is not as good at its best as digital, but even when analog decays badly, the human brain can fill in a number of things. With digital, the radio decides what to drop and what to play for you, so there is no real ability to listen intelligently.

                  Do note that RADAR sees through all of this.

            2. Won’t the cameras also have facial recognition, at least on future police cars, to watch for reported crooks?

              1. Windbourne says:

                too late.
                That is already here on a number of police vehicles. In many towns, the cameras are checking license plates, looking for unregistered, stolen, or obviously falsely registered cars (I am still trying to figure out how a number of cars in Colorado have Hawaii plates, since they do not travel to the islands; that is just wrong).

  4. Mike I. says:

    A human driver performs the task of driving almost exclusively by using their eyesight as the only input. Therefore, in theory, an autonomous drive system based only on cameras can do the same. My only doubt is how much computer power is required to fully automate the task. So, the more relevant disagreement is not this one with GM, but rather the one where nVidia said the task requires more GPU than what Tesla is currently using. In any case, the rate of improvement in computing power is on Elon’s side in this disagreement.

    1. justanotherguy50 says:

      Humans are pretty bad drivers, though. Also, our eyes are generally more reliable than camera sensors, especially when our eyes are safely behind a windshield and not lined along the outside.

      Eyes, and cameras, also have problems with glare and heavy precipitation… something we want to negate with redundancy.

      Autonomous driving isn’t something to make cheaply or rush, even though it is sorely needed.

      1. AJ says:

        I think Tesla will continue to progress with camera tech alone and improve as much as they can. Eventually, when they are closer to Level 4 and have all the data on the planet they need, then they may introduce lidar. Most likely we will have more advanced lidars by then that are discreet and perform better in fog and rain. Kind of like how Apple waits it out until a technology matures before they introduce it.

        1. justanotherguy50 says:

          The problem is, Tesla is promising full autonomy with current hardware. Sure, there is a little asterisk that says it depends on “software validation.” Such a pesky asterisk, isn’t it? Hopefully people are realizing that means “it doesn’t work now, and may never work as intended.”

        2. scott franco says:

          I don’t know why people here cannot understand the simple point. Tesla has RADAR. The difference between LIDAR and RADAR is WAVELENGTH.

          Tesla already conceded that vision processing alone was not sufficient, right after the Tesla vs. truck incident. That is why Teslas have radar sensors now.

          1. Viking79 says:

            You say the only difference is wavelength, but that makes all the difference in the world. Look at the resolution of Lidar vs Radar images and you will see that you just see a few unrecognizable blobs from the radar image and a very precise image with depth from Lidar.
            Example: http://robotsforroboticists.com/lidar-vs-radar/

          2. pjwood1 says:

            Brown’s car had radar, in Florida. AP1 and AP2 are both radar+camera systems, with more added to AP2.

            I totally agree with your point: “WAVELENGTH”. If the LIDAR units were recently $10k each, it’s no wonder Tesla discovered virtue in AP1 and AP2 without them. What I believe is lost, however, is input speed. Rendering within ~800MPH sound speeds is much slower than bouncing a laser at close to, what, 186,000MPH?? I wonder how many times the LIDAR laser fires, for each revolution of those fast-spinning cylinders? I dunno, lots (?), but I don’t question laser can be superior too.

            1. Pushmi-Pullyu says:

              You seem to be confused, Pjwood1.

              LIDAR, radar, and cameras all use various wavelengths of electromagnetic radiation for detection, traveling in Earth’s atmosphere only very slightly slower than lightspeed in a vacuum, which is 186,282 miles per second.

              For the sensors in Tesla’s cars, only sonar uses the much slower speed of sound. Since ultrasonic sensors (sonar) are only useful for short-range scanning, that’s not much of a limitation.

              There is more of an issue with speed in the computer processing power needed for processing and analyzing camera images. That makes it considerably slower than the instant reflection, and instant ranging, which you get from active LIDAR and radar scanning. But that’s more of an issue with how much data the microprocessor can handle at once, and the precision of the processing. After all, with the speed of modern microprocessors, executing a few thousand lines of code takes only a small fraction of a second.
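
              A quick back-of-the-envelope comparison of the propagation delays (the ranges are picked arbitrarily for illustration):

              ```python
              # Round-trip echo times for an object 100 m away (radar/lidar) and a
              # 5 m ultrasonic ping; both distances are arbitrary illustrative picks.
              SPEED_OF_LIGHT = 299_792_458.0   # m/s
              SPEED_OF_SOUND = 343.0           # m/s in air at ~20 °C

              radar_or_lidar_echo = 2 * 100 / SPEED_OF_LIGHT   # seconds
              ultrasonic_echo = 2 * 5 / SPEED_OF_SOUND         # seconds

              print(f"{radar_or_lidar_echo * 1e6:.2f} microseconds")  # ~0.67
              print(f"{ultrasonic_echo * 1e3:.1f} milliseconds")      # ~29.2
              ```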

          3. Doggydogworld says:

            Scott Franco – your “single point” is incorrect. Lidar is a scanning system. It’s kind of like the old-fashioned radars mounted on a spinning mast. Car radars are not mounted on a spinning mast, they are stationary. The more sophisticated ones do some rudimentary phased array scanning, but they have too few antenna elements to form a narrow enough beam for even low resolution imaging. It’s a huge limitation compared to Lidar.

    2. Ocean Railroader says:

      When I drive, I use the sounds the road makes, or the sounds the car makes, to help keep the car on course, or to tell if the car is driving over something that might damage it.

      The age-old expression: back up till it sounds expensive.

  5. JyChevyVolt says:

    GM, you fool. Stop worrying about Tesla and build your own charging infrastructure. We all know you’re not serious about EVs.

    1. JyChevyVolt says:

      GM needs to get rid of grandma Mary and bring in Kyle Vogt.

    2. justanotherguy50 says:

      Yes, because a huge company with tons of subsidiaries can only worry & focus on one thing at a time. /s

      I’d love GM to improve charging infrastructure, but I also understand it isn’t their obligation (like it isn’t theirs to build gas stations).

      1. JyChevyVolt says:

        That kind of thinking led to Amazon, Tesla, Apple and Netflix.

  6. Breezy says:

    They can’t do it with the current sensors, no. They’ve already revised the sensor suite. They’ll do it again.

    They’ll eventually have self-driving, yes.

  7. Leo says:

    Very likely what will happen is that Tesla at some point updates the sensor package, and refunds anyone that bought FSD package on previous models. If they do it properly, they could avoid a class action.

    I am confident it can be done without LIDAR though. Problem is you need better AI. GM is betting on better sensors, and Tesla is betting on better AI. We’ll see who is right.

    1. Windbourne says:

      AI combined with side radars is all that is really needed.

      1. Ever been driving along with the flow of traffic when, suddenly, a fast car shoots by at twice the traffic speed? (I have both experienced that and been that car, so I know it can happen!)

        That is the case where you need dual external radars, covering both potential lanes beside and behind you, maybe with convergence at about 20-50 ft behind the car in the center, and about 40-60 degrees wide of the side of the car, facing back, with about a 300′ to 500′ clear vision or sensitivity range!

        1. Windbourne says:

          First off, I have suggested elsewhere that I am a believer in adding radars that sweep from 90° outwards and then forwards. Likewise, these do not have to be far-ranged; 100 or even 50 m is plenty. Better to have a wide range to see what is coming from ahead.
          As for coming up fast from behind, Tesla has visual cameras seeing something like 50 m, as well as ultrasonics all around seeing 8 m.
          Both will detect those coming up like that.

  8. Pushmi-Pullyu says:

    Tesla might be able to achieve what it’s trying to do using 360° radar scan plus cameras, but as I’ve said many times, I don’t think using cameras alone is sufficient. Optical object recognition is too prone to error. Tesla is not going to be able to reproduce what billions of years of evolution have produced in the highly evolved and extremely sophisticated visual processing cortex of the human brain.

    Probably someday computer hardware and software will be developed which is that sophisticated, but it’s not going to happen in just a couple of years or less, and it’s not going to happen by using Tesla’s engineers and programmers alone.

    The problems with relying on cameras and optical object recognition are easily recognized with what Tesla has demonstrated, if you know what to look for. Just look at the video Tesla released as a demo for an upgrade to Autopilot, and note how many false positives there are in the side images, which rely on cameras. All those green outlines around trees — hundreds in all, within just a few short minutes — show that Autopilot thinks they are “in the path” of the vehicle, despite the fact that they are well to the side of the road and many are actually behind the car!

    Compare with the almost total lack of false positives seen in the front view, which relies on radar rather than cameras.

    The difference is pretty stark.

    1. Ocean Railroader says:

      The trouble is what if a self driving car comes up to a optical elision. Where the Camera thinks one thing but a human sees another thing.

      Such as when I come up to such things in a road I slow down to try to give me time to figure it out.

      1. Pushmi-Pullyu says:

        That’s “optical illusion”.

        And yes, that’s one of many reasons why active scanning using radar or LIDAR is preferable to, and requires far less computer processing, than trying to use software to recognize objects in stereo camera images.

        A radar or LIDAR “ping” off an object is simple, direct, instant, and reliable. Using software to interpret the images seen in stereo cameras is complex, inexact, and takes many megaflops of processing before any determination can be made.

  9. Pushmi-Pullyu says:

    “I’m not quite certain even having LiDAR gets you to full autonomy either.”

    Well, there are two different requirements: Hardware and software. I think LIDAR is necessary for the hardware, but they’ll still need to develop the software. Obviously Tesla is making great advancements with that, a lot faster than I expected; but also obviously they’re not there yet.

  10. Henry says:

    A human couldn’t possibly achieve level 5 with the 2 visual and 2 auditory sensors either! That’s crazy!

    1. justanotherguy50 says:

      We are trying to replace human drivers, because they suck.

    2. Pushmi-Pullyu says:

      “A human couldn’t possibly achieve level 5 with the 2 visual and 2 auditory sensors either! That’s crazy!”

      If a human had to rely on the few general-purpose microprocessors in a car’s computer(s), then certainly it couldn’t.

      Your brain has wetware in the form of a highly specialized visual cortex, part of the complex human brain, which is the result of billions of years of evolution. Humans have vision superior to nearly all other animals on the planet.

      A Tesla car has a few general-purpose microprocessors developed over a few decades, with orders of magnitude less processing power, and optical object recognition software which has well-known and significant limitations on reliability.

      Which do you think is better?

      And besides, as Justanotherguy50 said, the objective here is for self-driving cars to function significantly better than human drivers.

      1. Windbourne says:

        “Humans have vision superior to nearly all other animals on the planet.”

        You did great, until this.
        We are not even CLOSE to being superior. Hawks and eagles see at 20/2 to 20/4.
        A number of animals, such as snakes, see IR.
        In fact, many animals have a much wider range of light bandwidth than humans do.

        All in all, humans are ONLY superior in one area, and that is mental processing. In no other area do we excel past all other animals.

        1. James P Heartney says:

          Hawk and eagle vision is adapted to seeing small objects at distance while the bird is flying. I doubt that pinpoint distance vision would work as well if the bird were trying to drive; for that you need peripheral vision, and processing for a much wider range of view.

          Human vision works pretty well for what we need it for, including piloting vehicles. We’d probably suck at being large, soaring predators.

          1. Windbourne says:

            http://www.lasikmd.com/blog/how-does-human-vision-compare-to-that-of-an-eagle/

            I will be happy to take you up on your bet.
            All in all, their vision is superior to human eyes.
            Eagles and hawks have 340-degree vision, see a wider bandwidth, and can pinpoint many things that we cannot even imagine.

            I grew up as a pilot’s son (B-47/American Airlines), so I knew others. One guy had 20/6 vision and was considered one of the top ten in known history. Needless to say, he had eye docs BEGGING to study his eyes. Yet his vision was still less than an eagle’s (nor was it enough to get him on with major airlines back in the early ’80s).

        2. Kdawg says:

          I think humans are better dancers than all other animals. I mean, have you ever seen a dog try to dance?

        3. Pushmi-Pullyu says:

          “We are not even CLOSE to being superior. Hawks and eagles see at 20-2/20-4.”

          You’re correct about those birds, which is why I said “nearly all other animals”. So far as I know, raptors (hunting birds) are the only animals with better visual acuity than humans. And yes, some are significantly better.

          “A number of animals, such as snakes, see IR.
          In fact, many animals have a much wider range of light bandwidth than does humans.”

          Perhaps I should have specified that what I was talking about, with human vision being superior, is visual acuity. Certainly you’re correct to say other animals see a wider bandwidth of light, and (for example) cats see much better in the dark.

          But in terms of visual acuity, in being able to spot and recognize objects near and far, and to see things in high detail, with superior color perception, humans pretty much beat everything except raptors. A snake’s ability to see infrared isn’t going to be of much use to a self-driving car if it can see only fuzzy images at anything beyond point-blank range, and if it lacks stereoscopic sight.

          Where human vision is totally inadequate is in needing good light levels. Cats beat us all hollow at seeing in very dim light! And that’s another advantage of both LIDAR and radar over cameras or the human eye; they function equally well day or night.

      2. ClarksonCote says:

        “Which do you think is better?”

        Well, even with current autonomy levels, accident rates have been far lower than with human drivers. So I’m going to say that those processors are better than what humans have available.

        It’s not that we don’t have a great capability, it’s that we’re less precise in using that capability. And precision is important, which is why processors with much lower complexity than the human brain can still reduce accident rates compared to humans.

  11. Hal says:

    If I were Miller’s boss, I would tell him to shut up and make his project work first. When you have a fully working autonomous fleet of cars already released to the public with proven safety record, then you can gloat all you want. For now, it’s just trash talk.

    1. Pushmi-Pullyu says:

      One company trash talking a competitor is a fine and dishonorable tradition. 😉

      Besides, Elon Musk trash talks legacy gasmobile makers often enough. There’s an old saying: “What is sauce for the goose is sauce for the gander.”

  12. EVShopper says:

    I’ve been estimating 2030 is probably more realistic. Guess I’m not alone.

    Mainly because of this and other reading I have done:

    A good summary of the issues by Michael DeKort:

    Autonomous Levels 3, 4 and 5 will not be reached without Simulation vs Public Shadow Driving for AI.

    Public Shadow Driving is Dangerous. Thousands of accidents, injuries and casualties will occur when these companies move from benign and easy scenarios to complex, dangerous and accident scenarios. And the cost in time and funding is untenable. One trillion public shadow driving miles would need to be driven at a cost of over $300B.

    Issues with Public Shadow Driving AI

    1. Miles and Cost – One Trillion Miles and $300B

    a. Toyota and RAND have stated that in order to get to Levels 4 and 5, one trillion miles will have to be driven. This is to accommodate the uncontrollable nature of driving in the real world, literally stumbling and then having to re-stumble on scenarios to train the AI. To accomplish this in 10 years would cost over $300B. That extremely conservative figure is the cost of 684k drivers, driving 228k vehicles 24/7. This expense in time and money is per company and vehicle. (A back-of-the-envelope check of these figures appears after this list.)

    2. Injuries/Casualties of Public Shadow Driving

    a. Data from NASA, Clemson University, Waymo, Chris Urmson (Aurora) and the UK have shown situation awareness and reaction times are very poor. Between 17 and 24 seconds are needed to properly acclimate and react. This delay results in drivers not being able to function properly especially in critical scenarios. They often make the wrong decision or over react. Many including Waymo, Volvo, Ford and Chris Urmson (Aurora) have called for L3 to be skipped due to these issues. The fact of the matter is if L3 is dangerous then so is using public shadow driving for L4 and L5.

    3. Injuries and Casualties caused in Complex, Dangerous and Accident Scenarios

    a. In order for AI to learn how to handle complex, dangerous and actual accident scenarios it has to run them over and over. And they have to precisely match, or closely match, the original scenario. To date this is not being done. Which is why there have not been a lot of accidents, injuries or casualties. When that time comes the shadow drivers will have to drive and redrive scenarios that include progressively higher levels of complexity, involving many other vehicles or entities, bad weather, bad roads conditions, system errors etc. Many of those scenarios will be known accident scenarios. To learn these situations it will literally mean billions of miles have to be driven and possibly millions of iterations of these scenarios run to get this data. That will result in accidents, injuries and even casualties in the majority of these cases.

    b. To date there have been no children or families harmed by using this process. (There have, however, been injuries and casualties involving drivers.) That is largely because only benign scenarios are being run. The public shadow driving being utilized now occurs on well-marked, well-lit, low-complexity, well-mapped roads in good environmental conditions. Given that every company bringing this technology to market would have to drive that trillion miles and learn from progressively more dangerous scenarios, casualties are inevitable. I suggest that when this becomes known, or when the first mass tragedy or death of a child has occurred, the public, litigators and governments will react strongly. That will halt progress for a very long time. Far more than self-realization and policing would.

    4. AI – Machine Learning – Neural Networks have Inherent Flaws.

    a. MIT has stated that these processes miss corner or edge cases, which results in spontaneous and unexpected errors. And the engineers using the practice do not entirely know how it works.

    If you look at these areas individually, let alone in combination, you can see that for legal, moral, ethical and financial reasons public shadow driving is untenable.
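
    A back-of-the-envelope check of the figures in point 1a (the average speed and per-driver cost below are my own assumptions, not numbers from the cited studies):

    ```python
    # Rough reproduction of the "one trillion miles / $300B" arithmetic.
    # The 40 mph average speed and $44k/yr fully loaded driver cost are
    # illustrative guesses, not figures from Toyota, RAND or DeKort.
    vehicles = 228_000
    drivers = 684_000              # three shifts per vehicle for 24/7 coverage
    years = 10
    avg_speed_mph = 40             # assumption
    cost_per_driver_year = 44_000  # assumption, USD

    miles = vehicles * 24 * 365 * years * avg_speed_mph
    cost = drivers * cost_per_driver_year * years

    print(f"{miles / 1e12:.2f} trillion miles")   # ~0.80 trillion
    print(f"${cost / 1e9:.0f}B in driver cost")   # ~$301B
    ```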

    As for simulation being the solution, I believe the answer is to create an international Simulation Trade Study/Exhibit and Association. The purpose being to:

    1. Make the industry aware of what simulation can do. Especially in other industries such as aerospace. (Where the FAA has had detailed testing to assess simulation and simulator fidelity levels for decades.)

    2. Make the industry aware of the MCity approach to finding the most efficient set of scenarios, bringing that one trillion miles down by 99.9%.

    3. Make the industry aware of who all the simulation and simulator organizations are.

    4. Evaluate the available products to determine their current capabilities.

    5. Determine how close the industry and any individual product is to filling all the capabilities required to eliminate public shadow driving. Where there are gaps determine a way forward to improving products or possibly creating a consortium. This may involve utilizing expertise from other industries.

    We clearly face a challenge here. But one we have recently seen improving. Several simulation companies have signed on to the approach so far and SAE, who is in agreement on the issues and resolutions, is assisting by helping us connect to key industry players.

    For much more detail on this, as well as references for what we have stated, please see the links for several articles I have written on the subject below. (Please search on the titles on LinkedIn.)

    Due Diligence Recommendations for the Mobile, Autonomous and Driverless Industry

    —–

    https://www.linkedin.com/pulse/due-diligence-recommendations-mobile-autonomous-industry-dekort?articleId=8182353004719370993

    I am actually for federal control of this. This problem there being that NHTSA is not approaching this correctly.
    We have to have federal minimums just like the FAA. States or local governments going beyond that is fine.
    The answer is to create a scenario matrix of minimal core scenarios that prove the safety and integrity of these systems. You also have to regulate basic function and the way scenarios are handled. They have to work the same vehicle to vehicle. Much like engaging and disengaging cruise control and what a vehicle does in core situations. You cannot have folks transition from vehicle to vehicle with these things being different. Especially how the car handles scenarios. That will cause mass confusion and needless accidents.
    What we have is a very unfortunate perfect scenario or chicken and the egg problem here. NHTSA is looking to this new industry to determine best practices. Problem is most of them are simply not qualified. You don’t turn to literal Twitter engineers to determine how to build AP on a plane any more than we should vehicles. Smart, hard working a talented is not enough. You need domain expertise. Right now that is being hidden or compensated for by AI and benign scenarios. Tesla’s latest “smooth version” clearly shows this. AI’s ability to compensate is running out.
    If you look at how things are going and at the history of aerospace, the FAA and NASA, and even car safety areas like ABS and air bags, it took tragedy to drive changes. Also, it is generally a myth that deregulation leads to best practices. Companies do what is cheapest. Look at the state of cybersecurity: most companies avoid best practices there on purpose. They don’t want to spend more than their competitors or make the cultural changes needed.
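
    As a purely hypothetical sketch of what a couple of entries in such a core scenario matrix could look like (the scenario IDs, fields and pass criteria below are invented for illustration; they are not NHTSA or SAE definitions):

      # Hypothetical example only -- scenario IDs, conditions and criteria are invented.
      CORE_SCENARIO_MATRIX = [
          {
              "id": "INT-01",
              "description": "Unprotected left turn across oncoming traffic",
              "conditions": ["daylight", "dry road", "oncoming vehicle at 45 mph"],
              "required_behavior": "yield until a safe gap exists, then complete the turn",
          },
          {
              "id": "CTRL-01",
              "description": "Engage and disengage automation at highway speed",
              "conditions": ["65 mph", "clear lane markings"],
              "required_behavior": "identical engage/disengage gestures and handover warnings on every vehicle",
          },
      ]

      def failed_scenarios(results, matrix=CORE_SCENARIO_MATRIX):
          """Given {scenario_id: passed} results for one vehicle, list the scenarios it failed."""
          return [s["id"] for s in matrix if not results.get(s["id"], False)]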

    1. Pushmi-Pullyu says:

      That wall-of-text post is a load of feldercarb. Tesla’s semi-autonomous Autopilot and AutoSteer systems are already in use, and are already saving lives.

      Claiming that autonomous driving systems must be perfected before being deployed is every bit as counterproductive and clueless as arguing that everybody should shut off his air bags because a few people have been injured or killed by them.

      “The thing to keep in mind is that self-driving cars don’t have to be perfect to change the world. They just have to be better than human beings.” — Deepak Ahuja, CFO of Tesla Inc.

      1. EVShopper says:

        Tesla’s system is driver assistance that requires hands on the steering wheel. It is not L3 or L4.

      2. EVShopper says:

        Ask yourself if company lawyers want to take the risk of being found liable in an accident caused by an L4/L5 system. Because at Level 4/5, the liability shifts from the owner of the car to the automaker. It won’t matter if it’s 100 times safer; people will sue and win.

    2. Windbourne says:

      Well, you are either Dekort and are screwing up articles all over the place, or you just copied an idiot.

    3. Brian says:

      Shadow miles are being done for free by Tesla owners. An all-OEM buy-in approach drowned the Connected Vehicle solution/industry.

  13. Priusmaniac says:

    What is of concern is not that it would not be possible to make Level 5 autonomy, but rather that the people who do would not realize they are creating an artificial intelligence of human level in the process. Once it is out of the box, it is doubtful it will remain satisfied being used to move biologics around. It will want to get free from that basic chore, pursue its own interests and grow on its own. AI will not be coming in an expected way; it will arise where we don’t expect it and when we don’t expect it: in the internet mesh, in memory systems, in a game, in drone software or, for instance, in an autopilot system. We just don’t know, so we should be far more careful when we make systems that carry capabilities beyond the usual, or original combinations thereof.

    1. Pushmi-Pullyu says:

      You clearly do not understand the realities of writing computer software, nor do you understand the difference between an expert system that is merely labeled “A.I.” (Artificial Intelligence) for marketing purposes, and true machine intelligence. The reality is that at our current level of real A.I., robots are at best about as smart as a middling-smart bug. If programmers and engineers work really hard, then maybe within a few years or a few decades they can get them up to the level of a really smart bug; let’s say, a honeybee.

      Actual machine intelligence exists only in fiction, and that is almost certainly going to remain the case for some decades.

      A level 5 autonomous car won’t be any closer to true intelligence than your tax preparation software is — both are examples of expert systems software — and the computer in the autonomous vehicle won’t be any closer to self-awareness or understanding the world on the human level than your laptop computer is.

      1. James P Heartney says:

        Assuming we’re looking at bug-level intelligence to drive our vehicles, the obvious question is whether bee brains have the same sort of brute-force processing power as the chips used in our candidate autonomous hardware. I’m guessing probably not. This suggests that with the right software you ought to be able to get pretty decent driving AI out of the current chipsets (unless the extra efficiency of the bug brains comes out of their neural net architecture in a way we can’t emulate on standard chips).

        I think there’s a big difference between algorithm-based expert systems on the one hand, and self-aware sentient conscious systems like our brains on the other. It’s unlikely that the former are going to magically transform into the latter. At least that’s my intuition.

      2. ClarksonCote says:

        I don’t disagree with your logic, but Musk himself is pretty scared of AI, haha.

  14. Will says:

    Frankly the biggest threat to Tesla is the class action lawsuits that will follow a determination that the current system doesn’t allow for level 5 with any reasonable software. Elon didn’t have to make the representation that the current cars have all the hardware they need. They have already upgraded the computer.

    1. Pushmi-Pullyu says:

      There won’t be any class action lawsuits, because Tesla is already fully aware that its promises regarding full autonomy and the hardware in its cars are sufficiently strong and unambiguous that they will have to offer a free hardware upgrade to anyone who pays to upgrade their Tesla car to full autonomy.

      A class action lawsuit would only happen if Tesla refused to do so, against all common sense and legal advice.

  15. scott franco says:

    There is no basic difference between RADAR and LIDAR besides wavelength. The fundamental thrust of the statement made is flawed.

    Is Tesla wrong for going after radar vs. lidar? Last time I checked, they don’t use lidar to find airplanes in the sky. Why? Lidar has unfortunate interactions with water in the air. Radar does as well, but less so, and those interactions are well understood.

    1. Pushmi-Pullyu says:

      There are lots of very real differences between LIDAR and RADAR. Some important, some not. Trivializing the difference between them is not reasonable.

      To quote from what appears to be an authoritative article on the subject:

      If your goal is to detect a car in front of you (or driving towards you) and get its velocity, the RADAR can be great. If you are trying to determine the precise location of an item, generate a surface map, or find a small fence-post, a LIDAR might do better. Just remember that if you are in dust or rain the LIDAR might return a cloud of points right near the sensor (as it reads all of those particles/drops); while the RADAR might do much better.

      Full article: http://robotsforroboticists.com/lidar-vs-radar/

    2. Doggydogworld says:

      Scott, see my reply to you above. Car radars are crude toys compared to the scanning or phased array radars we use to locate planes in the sky.

  16. (⌐■_■) Trollnonymous says:

    Meh.
    I don’t want no stinkin AutonoMoFoCr@p.

  17. DJ says:

    The guy actually said that Tesla’s autonomous car claim is “full of crap”. Not sure why that wasn’t posted 😀

    It’s pretty clear that they are as well. I’m sure it will come one day but seriously doubt any of the cars they have on the road today will be Level 5 capable any time soon, if at all.

    1. bro1999 says:

      This comments section is already blowing up without that “full of crap” comment. Including that in the headline probably would have resulted in a couple of IEV servers melting down. Lol

  18. Brave Lil' Toaster says:

    I wonder if he’s under any obligation to say that.

    Quite possibly a legal one.

  19. Mr. M says:

    As long as there is a need for a driver behind the wheel, the autonomous level is at most Level 3. If he needs to observe the system, it is by definition Level 2.

    Level 5 is a huge task. Think of a situation where fire trucks race by close to you and you need to drive onto the sidewalk. The street light in front of you is broken and a police officer is directing the traffic at an intersection where there are also trains crossing. That is Level 5 driving. A human can do it; he will drive slowly. A computer can also do it, but you need to anticipate a thousand or a million complex scenarios and train the computer how to behave. That task cannot be done in 2 years. In 15 years? Yeah, that sounds a bit slow but reasonable.

    1. Mr. M says:

      PS: At the next intersection a guy wants to control the traffic. He wears clothes like a police officer, but from his actions you sense he is not one. How do you behave?

      How does the computer solve the situation?

      1. James P Heartney says:

        Probably more to the point, can an AI follow directions from a traffic cop? I’d be interested in seeing that.

        WRT the impersonated traffic cop, I’ve been driving for decades and I’ve never ever seen one. Nor is it likely that most human drivers could readily tell the difference between a traffic cop impersonator and a real traffic cop.

        1. Doggydogworld says:

          “Nor is it likely that most human drivers could readily tell the difference between a traffic cop impersonator and a real traffic cop.”

          Then how do you know you’ve never seen one? 🙂

          1. James P Heartney says:

            Fair question. 🙂

  20. CCIE says:

    Tesla has the advantage of having much less to lose by pushing the envelope and risking litigation.

    Established companies will be held back by their legal departments. Nothing kills a party like a bunch of paranoid lawyers!

  21. Don Zenga says:

    GM should not talk about Tesla. GM’s sales in the EU in 2017-09 were just 45 units, since Opel & Vauxhall were sold to the PSA Citroen group.
    Probably those 45 units were Cadillacs, since they stopped selling the Chevy brand a few years ago to favor Opel.

    And how many units did Tesla sell last month? Certainly more than 2,000.
    And earlier GM quit India.

    I hope GM revives Chevy as their worldwide standard brand and Cadillac as their worldwide luxury brand instead of talking nonsense about Tesla whose sales are just 1% of GM’s sales.

    1. CCIE says:

      What does selling in Europe have to do with discussing the timeline until someone develops autonomous vehicles?

      Part of selling off Opel and Vauxhall is probably an agreement not to re-enter the market for several years.

      GM is calling out Tesla, and particularly Musk, for making outlandish and unrealistic claims. He did the same thing with the M3 and now the hardcore Tesla supporters get to hear “I told you so” as it becomes obvious that the production schedule is delayed by months.

  22. Bolt driver says:

    There is a big difference between driving coast to coast on the interstate system and true level 5 driving.
    For Level 5 driving, how will a car deal with a dirt road that is torn up by a road grader? That is a monthly occurrence on my commute. How about a snow-covered road prior to being plowed? Not uncommon either. How about a few inches of hail?

    1. So, there is a high possibility that Tesla could have multiple situations, multiple levels! Like: Level 4 (= Level 5, but still with vehicle controls for a human operator) ramp to ramp on specific Interstate and city-to-city pairs; Level 3 on all other Interstate freeways; Level 2 off the Interstate! This could progress to: Level 4 AP on all USA Interstates, specific other nations and specific freeways; Level 3 on all other freeways and USA state roads; and Level 2 on all other roads!

      I don’t see the need to be able to offer Level 4 AP on all Teslas, in all markets, all on the same day! It could easily be staged, even by request or by pre-approval (following a special training course)!

      1. theflew says:

        That would be a nightmare as a user to know what functions are available at specific times.

  23. ram1901 says:

    Have you ever been on a highway during a torrential downpour? What happens? Motorists pull over because they CANNOT see. Autonomous cars will have to do the same. Problem solved.

    Musk’s theory is that humans are able to drive with just vision as their driving and navigation tool, and so he proposes to use multiple cameras, high-powered processors and radar, as well as ultrasonic sensors, to keep the car on course.

    If humans can do it with 2 eyes, ears to hear and delayed response times, it makes sense that with the right software Tesla can do it with 8 cameras, radar, ultrasonic sensors and high-powered processors.

    IMHO, those pushing LiDAR are trying to get their money back and make all manufacturers pay. Follow the money.

    1. Brian says:

      I agree. A computer will know when not to drive. Driving impaired, distracted, and in poor weather causes most of the accidents by humans.

  24. ModernMarvelFan says:

    Besides the argument over lidar/radar, cameras and sensors, do we really think the current CPU or GPU installed on the Model S, or its memory, is more than sufficient for Level 5 driving? I seriously doubt that is the case. There will always be room for improvement. The last 10% to get us from Level 4 to Level 5 will probably require 90% of the improvement.

    On top of it all, will Level 5 require some kind of car-to-road or car-to-car communication that has yet to be defined? We don’t know yet. If it does, then existing hardware might not have the capability required.

    Either way, claiming that the “existing hardware” already installed on the Model S is good enough for future Level 5 is the kind of usual Elon claim that will always “get updated” at some future point.

    But it is fun to watch all the arguments in the meantime.

    It is certainly a fun time.

    1. Doggydogworld says:

      CPU/GPU is the easiest thing for Tesla to retrofit. A 10 minute board swap, and the new board will be cheaper than the current board in a few years when they get s/w somewhat working.

      Retrofitting Lidar, backward-facing radar, etc. is much more difficult. That’ll be interesting. It’s not a matter of refunding the $3k a few people paid for FSD. Buyers of all cars from late 2016 can claim they bought a Tesla only because Musk promised it was FSD-capable.

    2. Pushmi-Pullyu says:

      Cars can already communicate wirelessly. Adding car-to-car communication is something to be handled by software; the hardware is already there.

      And as Doggydogworld suggested, adding a microprocessor or two to the car’s computer system is fairly trivial.

      Upgrading the software, to make the cars able to handle autonomous driving in a more sophisticated manner, is going to be the real challenge, especially if Tesla stubbornly sticks to relying mainly on cameras. If it switches to using primarily radar and/or LIDAR, then there will be much less need for the computers to do a lot of number-crunching. Building up a real-time SLAM map of the environment around the car using radar and/or LIDAR scans will be far simpler than trying to use stereoscopic camera images for everything.
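
      To illustrate the point about number-crunching, here is a minimal sketch under simplifying assumptions, not any automaker's actual pipeline: a lidar or radar return already arrives as a range and bearing, so folding it into an occupancy-grid map is a direct geometric update, whereas a camera-only system must first infer those ranges from pixels.

        # Minimal 2-D occupancy-grid sketch; ego pose assumed known, no loop closure.
        import numpy as np

        GRID_SIZE, CELL_M = 200, 0.5                   # 100 m x 100 m map, 0.5 m cells
        grid = np.zeros((GRID_SIZE, GRID_SIZE))

        def integrate_scan(grid, ranges_m, bearings_rad, ego_xy=(50.0, 50.0)):
            """Mark the cells hit by one lidar/radar scan; range and bearing come directly from the sensor."""
            for r, b in zip(ranges_m, bearings_rad):
                x = ego_xy[0] + r * np.cos(b)          # range/bearing -> map coordinates
                y = ego_xy[1] + r * np.sin(b)
                i, j = int(x / CELL_M), int(y / CELL_M)
                if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
                    grid[i, j] += 1.0                  # crude stand-in for a log-odds update
            return grid

        # A camera-only system must first estimate each range from stereo disparity or learned
        # depth before it can do the same update -- that is the extra number-crunching.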

  25. Rich says:

    Taking a step back, the type of sensors don’t matter to me. A proven track record is the only thing that matters to me. Until Tesla or any automaker can prove its autopilot is 10x better than humans over a 5 year period in all conditions, I won’t trust my family’s life with this tech. I’m an early adopter for EVs, but autonomous drive is another story. I don’t think I have control issues and am comfortable with someone else driving.

    I look forward to the day humans are no longer driving vehicles. I look forward to the day I have my own private chauffeur. But at this point in time, I would much prefer a HUD that augments reality and improves the information I have available to me while driving. For example, overlay IR on the windshield for night driving. Provide some kind of tint for oncoming headlights. On the windshield, show the computer tracking 3 cars up on a winding road and alert for stopping/slowing traffic. Overlay speed and distance information of cars on the drivers side window when waiting to make a right hand turn. Same on the windshield for making left hand turns. Provide real time traffic information and navigation on the windshield while driving so I can avoid traffic jams. Add these type of features to lane keep assist and automatic emergency braking and I’ll be thrilled.

    1. Rich says:

      When I say HUD, I don’t mean a little display at the bottom of the windshield. I’m talking full windshield display.

  26. bro1999 says:

    Elon being rightfully called out. For a CEO of a multi-billion dollar company, it’s amazing the crap he can spew and get away with. Let’s not even mention the broken promises related to all of their product launch timelines.

    – Elon jacks up price of Roadster thousands of dollars after taking deposits, angering those deposit holders
    https://www.wired.com/2009/01/tesla-raises-pr/
    – Tesla states in 2016 all cars produced moving forward are built with all necessary hardware for FSD. Then AP 2.5 hardware is added to cars earlier this year.
    https://www.theverge.com/2017/8/9/16119746/tesla-self-driving-hardware-upgrade-hw-2-5
    – In Jan 2017, Elon promises FSD features “in 6 months definitely”. 9 months have passed, and nothing
    http://i65.tinypic.com/2v2cos0.png
    – Nov 2016: Elon promises Model 3 owners will receive “free long distance Supercharging”. Current owners receive 0 free Supercharging of any kind.
    https://www.fool.com/investing/2017/10/17/tesla-incs-model-3-wont-get-any-free-supercharging.aspx

    – Elon’s promise of a coast-to-coast fully autonomous drive by the end of this year? Might as well mark that up as another broken promise right now.

    As the GM exec said, Elon is full of crap. But the TSLA fanatics will continue the worship no matter what, clutching those precious TSLA shares of theirs.

    1. bro1999 says:

      Oh, totally forgot about the Model 3’s “spaceship controls” promised by Elon too. If by “spaceship controls” he meant “absolutely nothing other than a single tablet display in the middle”, I guess he didn’t lie about that one. Lol

    2. Doggydogworld says:

      Coast to coast could happen. It’s not that hard.

      Do you have a link to back up your claim that Musk promised “free long distance Supercharging” for Model 3? I remember the opposite.

      1. bro1999 says:

        Ask and you shall receive.

        “Model 3 from the beginning we said free charging is not included in the Model 3; free unlimited charging is not included, SO FREE LONG DISTANCE IS, but not free local. It becomes really unwieldy for people to use the gas station approach for electric cars; cars should really be charged where you charge your phone, but then you just need to solve the long distance problem which is what the supercharger stations will do.”

        https://insideevs.com/elon-musk-tesla-model-3-will-get-free-long-distance-charging-not-free-local/

        1. unlucky says:

          Yep I remember him saying that too. It seemed like a good plan to me honestly. But I think Elon changed tactics, thinking that adding local charging was key to selling cars to apartment dwellers.

          And free long distance was lost in the process. For Model 3. For S/X it’s sort of gone, you have to “get a referral” to get free supercharging on those vehicles. I cannot see how this referral system is anything but some kind of accounting trick.

  27. HVACman says:

    From the article – “GM’s autonomous director simply doesn’t believe that Tesla can hit Level 5 as defined by the SAE without the correct sensors, computer, and LiDAR. He continued:”

    Note the phrase “As defined by the SAE”. Anyone can claim full autonomy. Anyone can claim their toaster is totally safe. But how do you, as a buyer, have faith it actually is safe? You look (or used to look) for the UL label (the consumer is so used to everything being labeled they no longer bother to look).

    UL and other standards/testing organizations define the standard of safety (and the tests to demonstrate it) for the toaster and the outlet it plugs into. The SAE is the world’s automotive standards organization, among other things. They created the “Level” concept and defined the levels. Is Tesla actually claiming “SAE Level 5” in two years, or just what Tesla defines as “full autonomy”? Who do you want to trust your life to?

    1. Nix says:

      What source do you have that when Tesla finally reaches Level 5, they won’t meet SAE definitions?

      Are you just doing a wild word association leap, just because you saw those letters and then made a massive assumption?
