ZF CEO Says Lidar Is Necessary For Autonomous Cars, Tesla Seems To Disagree

JUL 14 2016 BY ERIC LOVEDAY

Four Systems, Including Forward Looking Radar And Sonar Assist In Tesla’s “Auto Pilot” Package

ZF CEO Stefan Sommer made some remarks at a recent news conference.

His comments came in the wake of the fatal Tesla Model S crash, in which the car was operating in Autopilot mode.

According to Sommer:

“For autonomous driving, we will need three core technologies: picture processing camera technology, short and long-range radar and lidar.”

Tesla’s system lacks LIDAR, likely because the technology is too expensive to incorporate into mass-produced vehicles.

Tesla’s stance has long been that LIDAR isn’t necessary, yet just a few days ago a Model S with LIDAR technology was spotted in and around the Palo Alto area before it pulled into Tesla’s parking lot. Tesla wouldn’t comment on the car though. Nor would the automaker say if LIDAR will be incorporated in the future.

Sommer added:

“Laser or infrared-based lidar technology will help vehicle sensors pick up contours and contrasts of obstacles which normal cameras are unable to detect, particularly in low light situations.”

“…cameras relying on visual signals alone were insufficient for safe autonomous driving at high speed.”

As for what effect LIDAR would’ve/could’ve had on the fatal Model S accident, Sommer stated:

“I have no details about the Tesla accident, so I can’t comment on it.”

Source: Automotive News

54 Comments on "ZF CEO Says Lidar Is Necessary For Autonomous Cars, Tesla Seems To Disagree"

I think this is where Tesla is going a bridge too far. Autonomous driving may be possible, but it is a lot harder than they seem to think it is.

The graphic that is usually shown, with the little bubbles and cones around cars on a straight multi-lane highway, is the problem. Many of us drive in situations far more complicated than that, and real-world conditions will overwhelm the computing power and/or the iterations anticipated by the programmers.

Build the best cars possible, please Tesla! Build the hell out of them. Build a second Gigafactory, and ship a half million Model 3’s – and then come back to autonomous driving.

Safety systems are one thing – autonomous driving is a different thing.

They can do both, and have been. Tesla has already proven it can walk and chew gum at the same time. AutoPilot is a foundation to build on. Stopping work on it makes no sense.

Keep in mind that AutoPilot was never sold as a Level 4, fully autonomous system – just a beta that can offload some of the work of driving but needs to be constantly supervised by the human operator. People being irresponsible with it should not be surprising to anyone.

And finally… You don’t have to use it, if you don’t want to.

+1M

This is beta code. Users accept the risk of giving up control of their Precious to experimental software. Nuff said.

Customers aren’t guinea pigs. If the feature is labeled “beta” it shouldn’t be in the car at all!

EVA-01 said:

“Customers aren’t guinea pigs.”

Note that you have to “opt in” to enable AutoSteer and other driver assist features in Tesla’s cars.

Clearly Tesla has decided to let each individual car owner decide whether or not he wants to be a beta tester, or “guinea pig” as you term it. Also clearly, a lot of them have chosen “Yes!”

I think Lidar and GPS-assisted positioning (for blackout weather conditions) are necessary for autonomous driving. Google has it right; Tesla is behind. Google should make the Lidar lower profile or nearly invisible, and of course the cost of Lidar needs to come down.

Certainly vehicle-to-vehicle wireless communication will be part of the future of self-driving cars, but each autonomous vehicle needs to be able to operate independently. It’s going to be a very long time until all vehicles on the road have the ability to network with other vehicles to resolve right-of-way and avoid accidents. And even when that becomes reality, there’s still the possibility of a malfunction by another vehicle.

A fully self-driving car should certainly be communicating with other such cars on the road, but in a belt-and-suspenders fashion, should also be capable of operating independently of such input in relative safety. (Driving safety will always be relative. Moving down the road at high speed is inherently dangerous. We can reduce the risk, but never eliminate it entirely.)

Does not work for objects besides cars.

The NTSB is pushing transponders, but that is a solution for airplanes, where all of the relevant non-aircraft objects (ground, towers, etc.) can be mapped in a database (i.e., they don’t move around a lot).

Perhaps the transponder solution can work so long as there are stationary central data collecting and processing nodes to handle the traffic. Those might be set up like a cell phone network.

The individual car sends data to the central computer node: “My GPS coordinates are these; I’m going in this direction at that speed; and here’s the real-time data from my lidar scanner.” The central system determines vehicle positions and vectors, calculates possible intersections — that is, collisions — and sends instructions back to the individual car when necessary to alter direction and/or change speed to avoid collision.

Note that when all or nearly all vehicles on the road are so equipped, stop signs and traffic lights are no longer necessary. Those few remaining human-operated vehicles will be detected (detected by the sensors in individual cars) and traffic will be routed around them as if they’re moving obstacles.
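The “central node” idea described above lends itself to a toy illustration. Below is a minimal, purely hypothetical Python sketch of what such a node might do with the reports it receives (position, heading, speed): dead-reckon each vehicle a few seconds ahead and send back an advisory when two predicted paths come too close. All class names, message fields, and thresholds here are invented for illustration; no real V2V protocol (DSRC, C-V2X, etc.) is modeled.

    # Hypothetical central-node conflict check; all names and numbers are illustrative.
    from dataclasses import dataclass
    import math

    @dataclass
    class VehicleReport:
        car_id: str
        x_m: float          # position east of the node, meters
        y_m: float          # position north of the node, meters
        heading_deg: float  # 0 = due north, increasing clockwise
        speed_mps: float

    def predict_position(r: VehicleReport, t_s: float):
        # Straight-line dead reckoning t_s seconds ahead (a deliberately naive model).
        rad = math.radians(r.heading_deg)
        return (r.x_m + r.speed_mps * t_s * math.sin(rad),
                r.y_m + r.speed_mps * t_s * math.cos(rad))

    def collision_advisories(reports, horizon_s=5.0, min_gap_m=25.0):
        # Check predicted separation at one-second steps; report the first conflict per pair.
        advisories = {}
        for step in range(1, int(horizon_s) + 1):
            t = float(step)
            for i, a in enumerate(reports):
                for b in reports[i + 1:]:
                    pair = (a.car_id, b.car_id)
                    if pair in advisories:
                        continue
                    ax, ay = predict_position(a, t)
                    bx, by = predict_position(b, t)
                    if math.hypot(ax - bx, ay - by) < min_gap_m:
                        advisories[pair] = (f"{a.car_id} / {b.car_id}: predicted conflict "
                                            f"in ~{t:.0f}s, adjust speed or heading")
        return list(advisories.values())

    if __name__ == "__main__":
        # Two cars closing head-on in the same lane, 100 m apart at 20 m/s each.
        cars = [VehicleReport("car-A", 0.0, 0.0, 0.0, 20.0),
                VehicleReport("car-B", 0.0, 100.0, 180.0, 20.0)]
        for msg in collision_advisories(cars):
            print(msg)

A real system would of course need authenticated messages, latency budgets, and far better motion models; the point here is only the shape of the report-and-advisory exchange the comment describes.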

Until you have to avoid that refrigerator dropped into the road by a truck that does not have a transponder (the fridge, that is).

There will NEVER be true autonomous driving because auto companies know the system is flawed and the OEMs will never accept liability for their Frankenstein creation.

If the so-called auto-pilot fails it doesn’t matter because the driver was supposed to be driving anyway. After all, it says so in the lawsuit-proof instruction manual….

What a joke! You might as well just drive the car hands-on yourself and avoid the mental terror of trying to figure out when the auto-pilot will fail.

When the ‘auto pilot’ WILL fail?! That’s quite an assessment. What do you base it on, your experience with ‘auto piloted’ assistance driving or your requisite knowledge of such systems and their ultimate limits?

How about we find out if the ‘auto pilot’ ACTUALLY FAILED in the instances where it was being utilized, before we call it a failure?
Jumping to conclusions must be a fun exercise, a LOT of people seem to like doing it.

Look floydboy,

Everything fails. But you already know that. The point is auto companies don’t want to be liable for auto-pilot failures.

So far, their response has been that the driver is responsible for the conduct of the automobile. They are shifting blame for any system (software) failures from themselves to the driver.

If I am ultimately responsible for the failings in THEIR AUTOPILOT system, then they can keep it.

One thing all the knee-jerk defenders of Tesla need to understand is that this technology will also be in ICE vehicles, and that numerous other car companies besides Tesla are involved in the technology.

Autonomous cars will kill the ICE car.

Volvo announced they will accept liability for wrecks when their system is at fault. Ford and Google said the same to 60 Minutes.

Full autonomy in “geo-fenced” areas will show up in two years or so. Coast-to-coast autonomy will happen around 2021.

jmac said:

“There will NEVER be true autonomous driving…”

You’re a true successor to Lord Kelvin.

“I can state flatly that heavier than air flying machines are impossible.” — Lord Kelvin, 1895

For the long term, I have to disagree. As it becomes well established that autonomous driving systems are safer than most people, you will see a sea change in liability reasoning. Don’t forget that the average driver is not that safe, and can be expected to have one or two accidents in their lifetime.

Me, I have a zero accident record at 58 years old. I’ll be turning the system off.

But be careful, you are due!

Humans driving is going to be a thing of the past. But perhaps we can allow a 60+ yo to keep driving with annual tests of cognition, eyesight, actual driving. Along with no phone and breath test at each startup. And a quick cognitive test at each startup. Oh yeah – I’d like an annual stress test and a cath every 5 years. We could start with what a commercial airline pilot has to go through. Then you might be as safe as a computer.

I am a pilot, and I fly regularly. I meet IFR qualifications, which are considerably stricter than a normal pilot’s, and not much different than a commercial pilot’s (in fact, many IFR pilots go on to get commercial licenses).

I have about 20 years left on the planet. You aren’t going to automate the roadways in that amount of time. After that you can have trained cats drive for all I care.

Given the number of fatalities currently (let alone historically) being caused by human driving I can’t see the logic in your argument! Give enough people a car and sooner or later one of them will cause a death through driving it. Why is that any different?

Eduardo Pelegri-LLopart

That’s funny you just posted that, as I was going to suggest “why not some kind of laser array instead of a spinning mirror?” It seems someone has already made a product using this principle…wow.

One of my concerns would be cross-talk between two (or more) cars using this system. They would have to polarize the light or use some other means to prevent this.

Let’s see. Light travels about a fifth of a mile in one microsecond, so a pulse reflected off a target a tenth of a mile away (176 yards) comes back in roughly a microsecond. That means when you issue a pulse, you are looking for its return in a window of about that length (0 to 1 µs).

That gives you a million possible windows per second. If you take a guesstimate of, say, 1,000 ranging operations per second, you are only listening during 1,000 of those million windows.

Thus, you are unlikely to see another (asynchronous) car’s sweep pulses in any frequent way.

Disclaimer: I flunked math regularly. Once I actually decided to try for a good grade on my trig final and scored a B+. The teacher accused me of cheating, since I never scored better than a D in that class.
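As a rough sanity check of that estimate, here is a short back-of-the-envelope calculation using the same assumed figures (1,000 ranging operations per second, a ~1 µs listening window per pulse). Both numbers are guesses carried over from the comment above, not specifications of any real lidar.

    # Back-of-the-envelope cross-talk estimate; the inputs are illustrative guesses.
    pulses_per_second = 1_000   # ranging operations per second, per lidar
    window_s = 1e-6             # listening window after each pulse, seconds

    # Fraction of each second our receiver is actually listening.
    listening_duty_cycle = pulses_per_second * window_s            # 0.001 -> 0.1%

    # A second, unsynchronized lidar also fires 1,000 pulses per second at effectively
    # random times relative to ours, so the expected number landing in our windows is:
    expected_overlaps_per_second = pulses_per_second * listening_duty_cycle   # ~1

    # Roughly one potentially confusing return per 1,000 rangings (0.1%), which a
    # scanner could reject as an outlier against neighboring measurements.
    fraction_of_rangings_affected = expected_overlaps_per_second / pulses_per_second

    print(f"listening duty cycle: {listening_duty_cycle:.3%}")
    print(f"expected foreign pulses per second in-window: {expected_overlaps_per_second:.1f}")
    print(f"fraction of rangings affected: {fraction_of_rangings_affected:.3%}")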

@eduardo
I bought a LIDAR rangefinder for my quadrotor for $100, so Tesla could add Lidar for not too much more.

Good article, I didn’t know Tesla did not use LIDAR. I always assumed they did.

Eduardo Pelegri-LLopart

I’m a bit surprised that Tesla is emphasizing that Lidar is not needed. It seems to me it can’t be the cost of adding it to new cars in the future; maybe it’s the complexity of retrofitting it into old cars, or even cars in the design stage, like the Model III. So far Tesla has had a very nice story about “upgrading existing cars.”

GeorgeS said:

“I bought a LIDAR rangefinder for my quadrotor for $100, so Tesla could add Lidar for not too much more.”

I suspect that’s like saying you can buy a radar gun for $100, so it should be cheap to put in a radar scanner at an airport.

It has been said in more than one article on InsideEVs that the type of lidar scanner that Google uses on its self-driving cars is expensive. Reportedly significantly more expensive than the front-facing radar plus various ultrasonic sensors which Tesla puts in its cars.

I see no reason to doubt that is true.

Don’t you get it? Even if the auto companies adopt Lidar, the driver is still going to be responsible (and liable) for the conduct of the automobile.

Which has always been the case! There are currently no systems on the road today that will countermand a drivers input! So, consequently the DRIVER of the vehicle IS IN CHARGE.

To floydboy:

So, why would anyone pay for a system that actually increases their liability?

The driver must be on deck at all times to wrestle the wheel away from the errant auto-pilot at the speed of light. What a crock!

When the system fails, as all systems do, the blame for the software-sensor failure will of course be shifted from the auto companies to the fool that purchased the auto-pilot system in the first place.

So what are you saying? Systems fail but people never fail?

jmac said: “Even if the auto companies adopt Lidar, the driver is still going to be responsible (and liable) for the conduct of the automobile.”

You’re trying very hard to conflate the issue of legal responsibility with the issue of the capability (and safety) of self-driving cars. The legal issue will have to be sorted out by the courts. Regardless of how it’s sorted out, development of autonomous cars will continue. There is too much pressure toward adoption of self-driving cars, from insurance companies, public safety agencies such as the NHTSB, and those interested in advancing robotic tech, for this genie to ever be put back into the bottle.

Now, that’s not to say there won’t be any impact from the legal system on the development of self-driving cars. And perhaps it’s appropriate that there will be, as it’s a matter of public safety. Will we see frivolous lawsuits? Of course we will. But frivolous lawsuits rarely take a product off the market. McDonald’s still sells coffee, and still sells it at its drive-thru windows.

It may well be that auto makers will have to raise the price of their cars to offset increased liability from autonomous software, once…

NO MATTER WHAT! The driver enters the initial commands (i.e., speed, under certain road conditions, locations, etc.), therefore the driver is, and should be, held liable. So keep your eyes on the road and your hands upon the wheel. Drive responsibly and defensively, don’t speed excessively, and a lot less sh!t will happen…

Of course lidar isn’t necessary to drive, human drivers go without it all the time. All that’s really needed is good vision, computational ability to make correct decisions and GPS. In the next few years the number of traffic deaths should start to see an exponential decline.

Humans are remarkably good at spotting and recognizing large moving objects, using visual images from our eyes plus the “wetware” inside our heads.

Video cameras and computer software aren’t nearly as good. Optical object recognition is something that software simply isn’t very good at. My understanding is that scanning lidar, such as Google uses for its self-driving cars, works much more reliably.

One can always argue that optical object recognition software will improve, and is improving. But speaking as a programmer, I seriously question that it will improve rapidly enough, or far enough in the near future to be as reliable as using a lidar scanner. Optical object recognition has proven to be a very difficult problem for software engineering.

You probably mean ‘logarithmic decline’ since the rate can’t go below zero.

I think he used the term “exponential decline” accurately. See “exponential decay”, like a half-life curve.

https://en.wikipedia.org/wiki/Exponential_decay
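For reference, the curve being described is exponential decay, which stays positive for all time and only approaches zero asymptotically, which is exactly the behavior of a quantity that cannot go below zero:

    N(t) = N_0 \, e^{-\lambda t} = N_0 \left(\tfrac{1}{2}\right)^{t / t_{1/2}}, \qquad N(t) > 0 \ \text{for all } t, \qquad \lim_{t \to \infty} N(t) = 0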

Maybe someone can get Stanford University to comment – it’s apparently a partnership with them, given the emblem on the rear window…

Here’s the bottom line:

I am a fully functional human being that is capable of driving a car without the aid of auto-pilot.

If I turn my auto-pilot on then I am liable not only for my own driving mistakes but also those of the auto-pilot that I must constantly monitor.

Auto-pilot does not lift my driving burden. On the contrary, it doubles it.

Auto-Pilot is an unfortunate term, because a very small portion of the population has ever flown in a general aviation aircraft that employs it. It doesn’t do anything but keep a course and altitude (not talking about heavies and advanced aircraft). You still have to take off, land, handle thermal turbulence, and make all the decisions. If you turn on AP, the plane will just keep flying at that altitude and heading until you hit something, the fuel runs out, or you get rocked.

This should have been called cruise control 2.0 or Advanced Cruise Control. I bet you use CC in your car. Do you expect it to make decisions on when to stop/go? How fast to go? Same concept, just on the steering wheel instead of the go pedal.

Okay,

Musk called it auto-pilot, that daft ba.t.rd.

He knew exactly what effect such an outrageous claim like that would have on sales.

Then don’t buy it. Simple.

@Rightofthepeople

I won’t buy it. You’re right. It’s an extra liability on top of normal driving.

Who or what is “ZF”, and why should we pay any attention to what that organization’s CEO says? Without knowing who they are or what they’re selling (if anything), there’s no context by which to judge his remarks.

* * * * *

I certainly won’t claim to have inside knowledge here, but based only on popular press articles, it seems to me that Google’s self-driving cars are significantly more advanced in autonomous capability than Tesla’s Autopilot/AutoSteer. And with Tesla claiming that the recent fatal accident with a Model S using AutoSteer was a result of the car not being able to tell the difference between a “brightly lit sky” and the side of an 18-wheeler rig trailer painted white, then it seems to me pretty clear that lidar offers a superior ability to detect other vehicles.

I’ve written before on the subject of the limitations of using cameras and optical object recognition. That has proven to be something which computers simply aren’t good at. We humans have a highly developed ability to recognize objects visually, but that is an ability developed over billions of years of evolution, and humans are better at that than nearly every other animal on…
Eduardo Pelegri-LLopart

https://en.wikipedia.org/wiki/ZF_Friedrichshafen

Mostly I’d heard of them because of all the cars that use ZF transmission components but they seem to be fairly large, with ~140K employees.

I just said this same thing for another article.

The difference (again) is active vs. passive systems. Lidar (as well as ultrasonic and microwave-based systems) is an active system; the camera-based system is not.

Cameras involve a lot of computer visual interpretation. Range-to-target systems like lidar do not. A lidar finds, unambiguously, the distance to the first thing that interrupts the beam, then relies on sweeping the beam to form a complete image of the objects around it.

Although Google presents the system as a roof-mounted, 360-degree Lidar, that is only done for cost reasons. A bumper-level Lidar setup needs more than one sensor (ideally four or more of them), and obviously costs more. However, the cost of Lidar is declining at an impressive rate.
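To make the “active” distinction above concrete, here is a minimal illustrative sketch of the time-of-flight arithmetic a scanning lidar relies on: half the pulse’s round-trip time multiplied by the speed of light gives the range, and sweeping the beam turns many such ranges into a set of points. The function names and example timing are assumptions for illustration, not any real sensor’s interface.

    # Illustrative time-of-flight ranging; not modeled on any real sensor's API.
    import math

    SPEED_OF_LIGHT_MPS = 299_792_458.0

    def range_from_round_trip(round_trip_s: float) -> float:
        # Distance to the first reflecting surface, from the pulse's round-trip time.
        return SPEED_OF_LIGHT_MPS * round_trip_s / 2.0

    def sweep_to_points(measurements):
        # Convert (bearing_deg, round_trip_s) pairs from one sweep into x, y points.
        points = []
        for bearing_deg, round_trip_s in measurements:
            r = range_from_round_trip(round_trip_s)
            rad = math.radians(bearing_deg)
            points.append((r * math.cos(rad), r * math.sin(rad)))
        return points

    # Example: a return arriving ~200 ns after the pulse corresponds to a target ~30 m away.
    print(f"{range_from_round_trip(200e-9):.1f} m")   # ~30.0 m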

Scott, I appreciate your comments, but I take issue with your claim that bumper-mounted lidar scanners can properly replace a roof-mounted one. The reason is simple geometry. The higher the lidar scanner is placed, the better it will be positioned to see other vehicles and obstacles beyond any vehicles close to the scanner.

Here’s a hypothetical example: The self-driving car using bumper-mounted lidar needs to change lanes, and scanning around, decides the coast is clear and moves over in front of another car.

What the bumper-mounted scanners have failed to detect is the motorcycle rider coming up fast around the same car, and trying to move into the same spot.

The self-driving car with the roof-mounted lidar scanner would spot the smaller vehicle which otherwise would be hidden behind an intervening larger vehicle.
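That height argument reduces to similar triangles. The short sketch below illustrates it with made-up numbers (a 1.5 m sedan roof five meters away, a roughly 1.4 m motorcycle-plus-rider twenty meters away, sensors at 0.5 m and 2.0 m); none of these figures are measurements of real vehicles.

    # Similar-triangles occlusion check; all heights and distances are made-up examples.
    def min_visible_height(h_sensor_m, h_blocker_m, d_blocker_m, d_target_m):
        # Height at d_target_m of the sight line that just grazes the blocker's top.
        slope = (h_blocker_m - h_sensor_m) / d_blocker_m
        return h_sensor_m + slope * d_target_m

    MOTORCYCLE_HEIGHT_M = 1.4    # assumed rider-plus-bike height
    SEDAN_ROOF_M = 1.5           # assumed height of the intervening car
    SEDAN_DISTANCE_M = 5.0
    MOTORCYCLE_DISTANCE_M = 20.0

    for label, h_sensor in (("bumper-mounted (0.5 m)", 0.5), ("roof-mounted (2.0 m)", 2.0)):
        cutoff = min_visible_height(h_sensor, SEDAN_ROOF_M, SEDAN_DISTANCE_M, MOTORCYCLE_DISTANCE_M)
        status = "visible" if MOTORCYCLE_HEIGHT_M > cutoff else "hidden"
        print(f"{label}: objects shorter than {cutoff:.1f} m at 20 m are occluded -> motorcycle {status}")

With these example numbers the bumper-mounted sensor can only see objects taller than about 4.5 m beyond the sedan, while the roof-mounted sensor sees essentially everything, which is the geometric point being made above.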

I predict that in the very near future, roll bars atop ordinary cars will become standard. That’s by far the best placement for a lidar scanner.

And I’ll never be able to think about a lidar scanner in a roll bar atop a car without thinking of…

I can’t help but agree with you. Buuuutttttt…. it really is going to come down to styling. Nobody is going to buy a car with a big ball mounted on it. 🙂

Form follows function. I rather imagine that when people first saw horseless carriages with the engine mounted under a “hood” in front of the seats, and a steering wheel instead of a tiller, it probably looked strange and off-putting to them.

If I’m right, then a “roll bar” atop a car will become an integrated part of a typical car’s styling. Of course, they could mount it like a dorky looking old-style police bubblegum light, the kind seen back in the days when cop cars had just a single bubble gum light in the middle of the roof. That’s what Google is doing now. But a roll bar would look ever so much more elegant! And would also put the scanner higher without having to raise the roofline.

No, definitely not this…

https://tctechcrunch2011.files.wordpress.com/2015/06/google-car.gif?w=738

That’s a Stanford University logo on the rear window of that Tesla car. So I question that the car shown is Tesla Motors testing a lidar scanner.

Google Maps says it’s only 20 miles from Stanford, California to Fremont.

http://sportsinvasion.net/wp-content/uploads/2012/11/Stanford-Football-Logo.jpg

While we are playing detective, is that a white HOV sticker? On a car with no license plate?

Wouldn’t it be funny if it was just some owner playing a prank? A Google employee with a finger in the eye of Tesla, knowing that the news would pick this up and run with it?

It’s my belief that Google and Tesla have two very different short term goals.

Google looks to be creating self-driving cars to build a fleet of taxis, in which case an extra $10k in cost is no big deal, and since they will be taxis nobody will really care what they look like.

Tesla looks to be creating cars for people who own their cars and want self-driving technology to enhance the user experience. An extra $10k in hardware is a big deal. Plus, that thing on the roof is just awful.