Watch Tesla Autopilot Again Attempt To Hit Off-Ramp Divider

APR 2 2018 BY ELECTRICCARSTV

It stops short of impact, but it’s obvious Autopilot is not capable of correctly handling this situation.

The video uploader is attempting to recreate the same situation in which a Tesla Model X driver lost his life last week.

Related – UPDATE: Tesla Says Autopilot Was Activated During Fatal Tesla Model X Crash

That unfortunate event occurred while the Model X was on Autopilot and, as we can see in this video, a similar wreck very nearly occurs again under a close-to-identical setup.

As the video depicts, the car merges into the lane in which the off-ramp divider is positioned. Rather than attempting to exit the freeway, Autopilot decides to stay within the markings that lead it on a crash course with the divider.

Video description:

Tesla AP2.5 takes a wrong turn into gore point barricade.

An overlooked portion of Tesla’s statement on the wreck has been posted below:

Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur. It unequivocally makes the world safer for the vehicle occupants, pedestrians and cyclists.

No one knows about the accidents that didn’t happen, only the ones that did. The consequences of the public not using Autopilot, because of an inaccurate belief that it is less safe, would be extremely severe. There are about 1.25 million automotive deaths worldwide. If the current safety level of a Tesla vehicle were to be applied, it would mean about 900,000 lives saved per year. We expect the safety level of autonomous cars to be 10 times safer than non-autonomous cars.


Categories: Crashed EVs, Tesla, Videos



91 Comments on "Watch Tesla Autopilot Again Attempt To Hit Off-Ramp Divider"

Another Euro point of view

What is clearly missing here is a device that constantly monitors the driver’s attention. I believe GM has that in its Super Cruise system. Tesla has so far been lucky. I mean, for example, the Model S that crashed into the road work vehicle not long ago: there could have been road workers between the truck and the Tesla. I have a hard time believing Tesla will get away with their AP system as it is now.

dan

Monitoring the human is like putting a bandaid on a 3rd-degree burn. There is a huge body of research describing how difficult it is to keep pilots engaged and situationally aware when they are on autopilot. Pilots are trained to follow “sterile cockpit” rules and other checklist-driven ways of focusing the brain to identify anomalies. There is no way an automated car can depend on the human to jump in as needed. Some companies have (wisely) skipped Level 2, 3, and 4 automation and are working on a fully automated car. I think the Tesla approach, granting autonomy yet asking drivers to stay in control, and deploying code whose behavior changes too frequently for drivers to get comfortable with it, is too fundamentally flawed to be worked around. This style of technology is likely to keep pushing ahead until the NTSB just shuts it down altogether. Just like with air travel: even if that one crash every so often is far less likely, percentage-wise, than in driving, if the scenario cannot be explained and the passengers lack control, there will be an outsized backlash focused on safety. That is how humans work.

Another Euro point of view

OK, I see. Then those systems should perhaps only provide assistance in very restricted circumstances until proven fully autonomous. For example, providing steering and braking assistance in traffic-jam conditions on the highway under 40 mph. I mean, experts seem to agree that those systems should not be allowed to slow a car to a full stop at highway speed, to avoid “false positives”. That is de facto admitting that those systems should not be used at highway speed to start with.

dan

That’s where I’m personally at. I think automatic emergency braking is an excellent addition to enhance human driving. It behaves exactly like a human reflex. Our own body has built in reflexes to keep us from doing stuff that will kill us off before we can learn not to do it. There are probably other technologies that fit this pattern – lane departure warnings, sounding an alarm when the car detects that you have lost attention (in manual driving mode). Much of the rest is in a gray area where it is not clear if the human or machine is supposed to be in charge. Beyond those “reflex” like technologies, I’ll wait for a reliable Level 5 car.

TwoVolts

I agree 100%.

mx

I drive with ACC (Automatic Cruise Control) all the time; it’s only an aid. You’re not supposed to use it to take a nap. The driver is always responsible.

Leptoquark

I agree with you, in that I’ve never thought it particularly sustainable that the driver is expected to sit there and watch the car drive itself, then suddenly jump in when it gets into trouble. The only way around this I see is for cars not to rely on lane markings, which I’ve never thought was a good idea anyway, but instead to have some sort of machine vision system that looks at the road the same way a human does, and interprets things the same way a human does. If humans don’t drive by sticking to lane markings, why should we think a car should?

arne-nl

You suggest that the Tesla Autopilot only relies on lane markings, but it doesn’t. There are tons of videos where the car confidently navigates roads without any markings at all.

Tesla is developing a machine vision system that recognizes where to drive better than a human. But it will not do it the same way as humans do. Machines are inherently different from humans. Which doesn’t mean the system will be worse. Just different.

We’ll have to accept the fact that machines will make stupid mistakes. Just as humans. But machines can and will consistently improve over time, their collective learning distributed to every self-driving car. Improving year after year, forever.

For humans this is not the case. We can’t change what we are. Every day a new, inexperienced motorist is born. We humans, with all our (digital) distractions, actually seem to be going backwards.

Pushmi-Pullyu
You are correct to say that autonomous cars should not be designed to “see” the surroundings the way humans do. Unfortunately Tesla has been trying to do just that; trying to rely primarily on cameras and optical object recognition systems — including “edge finding” software which lets it — sometimes — find the edge of a road when there are no lane markings. But I don’t see any indication that Tesla is actually working on a way for its cars to “see” everything in the environment. They don’t even seem to be trying to detect fixed obstacles, such as parked fire trucks or barriers at the ends of concrete walls dividing traffic lanes. There is no indication that Tesla cars are building up, in the car’s computer system, a detailed 3D map of the environment, i.e. SLAM (simultaneous localization and mapping). A SLAM system is needed for Level 4/Level 5 autonomy. Designers working towards fully autonomous cars should work on developing a reliable SLAM first, as a foundation on which to build a proper autonomous driving system. I think Waymo is doing that, but I’ve seen no indication any other company is taking that approach. Everybody else seems to be trying for incremental improvements in… Read more »
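To make the “SLAM” idea concrete: the mapping half of SLAM is essentially a grid (or point cloud) of occupancy evidence that the car keeps updating from its range sensors while also estimating its own pose. Below is a minimal, purely illustrative Python sketch of a 2D occupancy grid updated from range returns. It is an assumption-laden toy, not Tesla’s or Waymo’s implementation; real systems work in 3D with proper probabilistic updates and pose estimation.

import math

class OccupancyGrid:
    def __init__(self, size_m=100.0, cell_m=0.5):
        self.cell = cell_m
        self.n = int(size_m / cell_m)
        # log-odds of occupancy per cell, 0.0 = unknown
        self.grid = [[0.0] * self.n for _ in range(self.n)]

    def _to_cell(self, x, y):
        # world coordinates (metres, origin at grid centre) -> cell indices
        return int(x / self.cell) + self.n // 2, int(y / self.cell) + self.n // 2

    def add_hit(self, car_x, car_y, bearing_rad, range_m):
        # mark the cell where the sensor beam terminated as more likely occupied
        ox = car_x + range_m * math.cos(bearing_rad)
        oy = car_y + range_m * math.sin(bearing_rad)
        i, j = self._to_cell(ox, oy)
        if 0 <= i < self.n and 0 <= j < self.n:
            self.grid[i][j] += 0.4  # log-odds increment for a hit

    def is_obstacle(self, x, y, threshold=1.0):
        i, j = self._to_cell(x, y)
        return 0 <= i < self.n and 0 <= j < self.n and self.grid[i][j] > threshold

# usage: three successive returns from a barrier 20 m dead ahead of a car at the origin
g = OccupancyGrid()
for _ in range(3):
    g.add_hit(0.0, 0.0, 0.0, 20.0)
print(g.is_obstacle(20.0, 0.0))  # True: repeated hits push the cell past the threshold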
Jason

I’m pretty sure you’ve seen this video before. It clearly shows the Tesla object recognition. It doesn’t show how good the recognition is, or the decision based on the detection, but clearly this is not simple edge detection. And it is the new hardware suite.
https://youtu.be/VG68SKoG7vE
It would be excellent if Tesla had a way for this information to be displayed on your 17″ (or 15″) screen in the car. In the same way Waymo is doing, it would build confidence in what the car is seeing/doing. Supposedly the Tesla is in shadow mode all the time, so even while AP is not engaged you could get an idea whether the car is “seeing” what you would expect.

Scott Franco

“There is a huge body of research describing how difficult it is to keep pilots engaged and situationally aware when they are on autopilot. ”

Actually, even that statement does not do the situation justice. A cockpit is a very busy environment even with an autopilot on, there are people to talk to (controllers), navigation tasks to perform, etc.

In a car, with the AP engaged, there is far less to do and more opportunity for attention to wander.

Pushmi-Pullyu

@dan

Hey, thanks for that info!

There is a current discussion on the InsideEVs forum (link below) where someone said that Ford had a problem with engineers acting as monitors in its self-driving cars: they kept falling asleep or otherwise failing to remain attentive. I was just wondering how airlines deal with this issue, since pilots are supposed to “Keep watching the skies!” even when an autopilot is turned on.

It’s not realistic to believe the average human driver is going to receive the level of training that pilots do.

https://insideevsforum.com/community/index.php?threads/uber-autonomous-car-fatality.965/page-4#post-9805

dan

Pu-pu,

The answer is checklists, checklists, and more checklists. If you ever wondered why pilots are required to call out the items in a checklist, the same psychological basis is why Japanese train drivers and factory workers are required to point at everything. The human brain has evolved to ignore 99% of the information coming in. If it could identify an anomaly out of the blue, there would never be a Where’s Waldo puzzle. You need to build processes in to actively counter that bias:

https://www.atlasobscura.com/articles/pointing-and-calling-japan-trains

TwoVolts

+1,000,000. You have hit the nail on the head.

pjwood1

There’s no debating that the system can fail like this. The only debate is a cultural one that accepts, or rejects, Level 3, which is where many are going. Only if a system’s shortcomings are understood can we begin to raise expectations for its operators.

Adaptive cruise, with auto-steer, may soon be limited by speed. I expect the wonderful “update”, any minute.

Some have known, for years, that stationary or offset objects are not well recognized by Tesla’s AP (there are lots of YouTube videos). Failing to observe this can be catastrophic, but it is not random. The site in Mountain View, and the one depicted above, show something predictable.

No question, something’s wrong. AEB shouldn’t be failing, where AP isn’t using stationary inputs. But what is an over-reaction, in my view, is completely abandoning L3.

Martin Winlow

No, all Ap-equipped Teslas have a device that monitors whether the driver has their hand(s) on the steering wheel and will nag the driver until they take the steering wheel again.

What *is*, very clearly, missing is a solid white line on the RHS of the hatched portion of road leading up to the start of the crash barrier (like there is on the LHS). Had this been in place, this fatal collision would simply never have happened.

I could just as easily see this same collision occur with a poor human driver in bad visibility. There simply isn’t enough warning in the existing road markings, especially given the extreme danger that the situation entails.

dan

Humans have a “looming reflex” – if any object suddenly increases in size, we close our eyes without thinking. The idea is that the object is heading towards us and we need to reflexively protect our eyes. Most of us who have driven long enough also have wiring in our heads that connects the looming reflex to slamming our foot on the brake. It is nowhere near as quick as closing our eyes, but it happens practically immediately. It’s one thing to not notice the white marking, but how can you justify not reacting to a looming concrete barrier?
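The “looming” cue dan describes has a simple mathematical counterpart known as time-to-contact: for an approaching object, tau is roughly its current angular size divided by the rate at which that size is growing, and it needs no knowledge of the object’s true size or distance. A small illustrative Python sketch with made-up numbers, not tied to any production system:

import math

def time_to_contact(theta_prev_rad, theta_now_rad, dt_s):
    # tau ~= theta / (d theta / dt): time until contact if closing speed stays constant
    growth_rate = (theta_now_rad - theta_prev_rad) / dt_s
    if growth_rate <= 0:
        return float("inf")  # not looming: apparent size is shrinking or constant
    return theta_now_rad / growth_rate

# a barrier subtending 2.0 degrees in one frame and 2.2 degrees 0.1 s later
tau = time_to_contact(math.radians(2.0), math.radians(2.2), 0.1)
print(f"estimated time to contact: {tau:.1f} s")  # ~1.1 s: brake now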

marshall

It looks like lane markings are missing. Once again, clearly a lack of maintenance by the DOT.

In this case, it looks like the Indiana DOT is not maintaining this section of freeway.

Unfortunately, it appears Tesla Autopilot can’t handle lack-of-maintenance issues on its own.

https://mutcd.fhwa.dot.gov/services/publications/fhwaop02090/

dan

Welcome to the real world. Lane markings are frequently missing on roadways. That is not going to change.

Martin Winlow

It will – when the money everyone is going to make out of autonomous vehicles is threatened by a lack of clear road markings. It would also help if people sued the local authorities where a failure to maintain the roads properly can be directly linked to a collision or other situation where damage or injury is caused. But people/insurance companies don’t bother!

mx

LOL. PENNSYLVANIA lane markings have NOT been repainted for 15 years. Thanks, Republican House and Senate.

Rich people need deep tax cuts!

TwoVolts

The GOP solution is to privatize public roads with P3s (public private partnerships). Crony capitalism will surely offer the solution to all of our road maintenance needs.
https://www.commondreams.org/news/2017/08/09/pences-indiana-cautionary-tale-privatizing-infrastructure-projects.

Scott Franco

Ask Quebec how good socialism has been working for their roads. The socialist government has been legendary for poor road repair.

Mark.ca

You mean the same socialist system they have in N EU? Sit down fool!

dan

Only Denmark from northern Europe even breaks into the top 5 for road quality. The best roads are actually in decidedly non-socialist central and southern European countries like Netherlands, France, Portugal, and Austria. The same holds in the US, where the best road quality is often found in places in Texas, as opposed to northern Vermont.

“A fool thinks himself to be wise, but a wise man knows himself to be a fool” – Shakespeare.

John Doe
In the end, somebody has to pay. There are some key factors, like how many people live in the area. NL is densely populated, so more people share the cost. Others on the list have toll roads, like France and Spain: high-quality roads, but it costs quite a lot to drive on them. Then you have the PIIGS countries, which have gotten billions from the rest of the EU to pay for infrastructure. If you drove in Spain and Portugal 30+ years ago, you would have seen the massive changes since. Germany has “always” had good roads. Same with Austria. Norway has few people spread over a large area, with probably some of the most expensive terrain to build roads in. Add 20-30 years with a government that deliberately kept the roads from being too good, to discourage driving as much as possible; everybody was supposed to use trains, and that worked well for only some people. Without a train station, a lot of people were excluded. They had to drive expensive cars, with expensive fuel, and for the last 20 years also pay tolls in some areas. Now massive road projects are under construction. That equals toll roads. I remember when foreign truck drivers startes… Read more »
arne-nl

Netherlands non-socialist?

Compared to the US, every EU country is very socialist. 😉

dan

In the grand scheme of things, the Dutch are more capitalist than a good chunk of the US. The US is far more diverse than the EU. Once you add state and federal taxes, there are about 10-12 US states with higher tax rates than the NL, even though the US average is lower. The Dutch pay less in taxes than people in New York, Connecticut, or Illinois, for example.

pjwood1

I no longer define “Capitalism” by government, or market control. As much as governments can get too big, market dominance can socialize Capitalism away.

Asak

I’m pretty doubtful of the claim that Portugal has among the best roads.

menorman

Implying that the private road owners have any more incentive to keep the lines painted.

mx

Privatization WORKS! For 300% more in cost.

Asak

It’s a situation that a human can easily handle. Even if they miss the lane markings they see they’re heading straight for an impact with a barrier. If this is something that AI cannot handle then it really shouldn’t be allowed to drive for you.

vvk

When I use Autopilot, I feel like I can pay a lot more attention to the road. And I do pay very close attention. It definitely helps A LOT when I am tired and sleepy. It is so much easier to do only one thing — watch the road — vs multitasking. I can definitely see how an accident like this can happen if you take your eyes away from the road. Unfortunately, many drivers do that while fiddling with their Facebook feed or whatever else they think they can be doing behind the wheel. Accidents happen. Many people think they are in control behind the wheel — they are so wrong about that. There are hundreds of factors that can take away that control way faster than a human being is able to react.

tftf

As others mentioned, a good AI system would supervise the human (monitor his eyes, etc.) before we go to L4 or L5 systems.

Tesla doesn’t do that. In theory it could do it better one day in the Model 3 (which has a cabin camera installed), but it’s impossible in current S and X cars without an expensive HW retrofit.

GM’s system is better:

Cadillac has a pretty smart system to ensure you aren’t catching up on e-mails while it’s turning the wheel.
(…)
there are urgent “beep, beep, beeps” and the car is on the verge of going into “lockdown” – a state in which it concludes I’m either a total idiot or having a medical crisis. If I don’t take over immediately, the car will come to a halt in my lane, turn on the emergency flashers and call for help.

Terawatt
Agreed. At the same time, there’s little doubt that some people are indeed getting tempted to relax more than they ought to because of autopilot. I’ve noticed how it’s becoming rather commonplace in reviews of EVs in particular for the reviewer to convey a dangerously inaccurate impression of what level of autonomy is available. YES these systems can make you safer. But NO you cannot trust them to take necessary action, or even not to take dangerous action, in ANY situation – because you don’t know everything. This accident looks like a prime example – it’s seemingly a situation AP usually handles very well, until it’s suddenly a situation it handled very badly. It’s very hard not to suspect that the driver completely ignored the road ahead, to an extent that he very likely would not have if he weren’t using AP. A review I saw of the 2018 LEAF (in a car review YouTube channel, not some one-man show like Tesla-Bjørn but a seemingly professional show) had the reviewer complaining that ProPilot was “a bit too aggressive” in reminding the driver to keep his hands on the wheel. But taking your hands off the wheel in… Read more »
Pushmi-Pullyu

Terawatt said:

“I’ve noticed how it’s becoming rather commonplace in reviews of EVs in particular for the reviewer to convey a dangerously inaccurate impression of what level of autonomy is available”

I agree 100%. And it’s not just what they say, but what they do during a video review. It’s appalling seeing car reviewers sitting in the driver’s seat while a semi-autonomous car drives itself, fully engaged in conversation with the guy in the passenger’s seat and paying no attention at all to the road. Or even, in some cases I’ve seen, the “driver” is narrating while turned around, looking into the camera being held by someone sitting in the back seat!

This is most definitely giving a misleading impression of how safe it is to let the car drive itself.

pjwood1

With hand on wheel, you can still constantly have your eyes someplace else, like say, fixing “Loading Error”, or waiting for the upper controls to drop down, or looking for just the right pixels within the Controls menu to adjust your sunroof, or to change “+”/”-” cruise, or wipers in a Model 3. You can cover lots of feet doing these things, with one hand on the wheel.

Will

Not good. Don’t blame the lack of markings, because more than half our roads don’t have markings.

Terawatt
And don’t blame the lack of markings, because not seeing the markings is a situation the software must handle – slow down and alert the driver to take over, bringing the car to a stop if he fails to do so. I don’t know for how long the driver had to be inattentive for this accident to happen. Maybe it was just very bad luck of timing, as many accidents are. (Everyone is inattentive to the road some of the time; very seldom does the beginning of that inattention coincide with the first possible detection of danger, but when it does, it’s often too late when the danger is discovered, or the crash happens before the driver even knows anything dangerous is happening.) But simple math says it’s more likely the driver was inattentive for longer (there are many more opportunities for the timing to be just wrong if you look away for ten seconds compared to one second), and coupled with the behaviour I see from YouTubers demonstrating AP, I’m inclined to assume this accident falls under the “false security” blanket. It may or may not be the case that these systems make driving safer. But to gain acceptance they… Read more »
mx

I blame the invisible MARKINGS. 100%.
Budget cuts for the rich are killing us.
The reason for the markings is to make the road safe; you cannot cut the budget and not have people KILLED because of tax cuts.

Asak

Lack of road markings there wouldn’t cause a single human driver to crash.

Leptoquark

That’s what I don’t understand about this video, and the fatality in Tempe. I thought the car was supposed to have radar that looked way ahead, at least three or four car lengths. Why didn’t the car see the barrier coming, regardless of the lane markings? Why didn’t it see the lady walking her bike before she walked into the lane?

Pushmi-Pullyu

The low-resolution Doppler radar units used by auto makers for their ABS systems, and by Tesla as part of the Autopilot package, are not intended to detect stationary obstacles. They are only intended to detect moving objects.

Just consider how little data is provided by them, and then perhaps you’ll understand why self-driving cars need lidar or high-resolution radar to properly “see” the environment.


Re the Uber accident: I’m asking the same thing. The Uber car was equipped with a 360° lidar scanner. So why didn’t the car “see” the pedestrian pushing a bicycle? Lidar should work just as well at night as it does in daylight. It looks to me like either something (hardware or software) failed to do what it was designed to do, or else Uber’s self-driving tech is so poor that its lidar scanner isn’t intended to detect anything as small as a pedestrian.
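A commonly offered explanation for the stationary-obstacle problem, stated here as an assumption rather than fact, is clutter filtering: a Doppler radar reports mainly range and radial closing speed, and anything closing at roughly the car’s own speed looks like the road surface, overhead signs, and guard rails, so a simple tracker discards it. A toy Python sketch of that filtering logic, with made-up return data and thresholds, not Tesla’s actual code:

# Toy illustration of stationary-clutter filtering in an ACC-style radar tracker.
# Assumption: each return carries range (m) and radial closing speed (m/s);
# returns closing at ~ego speed are treated as stationary background and dropped.
EGO_SPEED = 26.0       # m/s, roughly 58 mph
STATIONARY_TOL = 2.0   # m/s

returns = [
    {"range_m": 80.0, "closing_mps": 25.5},  # stationary barrier: closing ~= ego speed
    {"range_m": 45.0, "closing_mps": 5.0},   # slower car ahead: closing well below ego speed
]

tracked = [r for r in returns
           if abs(r["closing_mps"] - EGO_SPEED) > STATIONARY_TOL]

print(tracked)  # only the moving car survives; the barrier was filtered out as clutter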

Bar

Also, pointy and angled objects (like the narrow end of the highway divider here) are very hard to detect via radar (any radar).

Just look at stealth aircraft. They are designed to be “invisible” to radar through the use of flat, pointy, and angled surfaces that reflect very little signal or redirect it away from the sender.

Unfortunately in this case, the narrow end of the highway divider, to the Tesla radar, was like a “stealth” object — and probably very hard to detect.

Dan

They have maps with the locations of every ramp and divider. It shouldn’t be too hard to change the programming of the autopilot to handle these situations, or at least warn the driver that it is approaching a segment where he needs to take over.
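Dan’s suggestion amounts to a geofence check: compare the car’s GPS position against a list of known gore-point locations and prompt the driver within some radius. A minimal Python sketch under those assumptions; the coordinates, radius, and helper function are hypothetical, not an actual Autopilot feature:

import math

GORE_POINTS = [(37.4107, -122.0752)]  # example lat/lon of a mapped gore point, illustrative only
WARN_RADIUS_M = 300.0

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance between two lat/lon points, in metres
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_gore_point_warning(lat, lon):
    # return a take-over prompt if the car is within the warning radius of any mapped gore point
    for g_lat, g_lon in GORE_POINTS:
        if haversine_m(lat, lon, g_lat, g_lon) < WARN_RADIUS_M:
            return "Take over: approaching mapped gore point"
    return None

print(check_gore_point_warning(37.4120, -122.0760))  # roughly 160 m away -> warning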

Kdawg

Hopefully Tesla, and others, are building a 3D map of the freeway system as we speak. Roads and their markings are always going to be suspect. What happens when it snows? For these systems to work reliably, they need several things: realtime camera/lidar/radar data, GPS data, and the ability to cross-reference both with their 3D map data.

TwoVolts

If we are to achieve the order of magnitude reduction in traffic fatalities that Musk envisions, the investment required will be quite staggering. IMO, the government needs to play a leading role in setting the strategy and funding the required improvements to roads and real time highway mapping. Automakers will need to develop the complementary technology for their cars as well.

The chances of dying in a terrorist attack are negligibly small, and yet we fund the war on terror to the tune of trillions of cumulative dollars. Reducing annual traffic fatalities from 40,000 to 4,000 is the equivalent of preventing a 9/11 attack every month – in perpetuity. It seems to me that making full autonomy a reality provides the much bigger bang for the buck in terms of lives saved.

Asak

A lot of companies make money off the “war on terror”. Sad to say but that’s all you need to explain it.

Pushmi-Pullyu

Navigational maps, no matter how detailed, should never be what self-driving cars are primarily relying on. There will always be the possibility of temporary conditions which won’t be in the archived data. For example, the fire truck stopped in a lane that one Tesla car using AutoSteer ran into, fortunately without any serious injury to anything but the car.

For safe autonomous driving, detailed navigational maps should be only a backup for real-time, detailed scanning of the car’s environment. In other words, autonomous cars need a good, reliable SLAM system.

James P Heartney

There ought to be markings on the right side of the divider. Highway maintenance has gotten away with this kind of sloppy marking because drivers are smart enough to avoid the divider anyway. But with autonomous vehicles on the road, we’ll need clear markings everywhere.

On any given stretch of road, construction crews will need to drive the finished areas with autonomous cars to “beta test” their work for usability, rather than just blaming drivers for ambiguities caused by poor design or execution. And before anyone claims that it’s the fault of the vehicle autonomy system, I think it’s up to road designers/maintenance to assure that roads are unambiguous and driveable. Autonomy systems will give us an objective benchmark for saying whether this is true or not.

Pushmi-Pullyu

All the best lane markings in the world won’t prevent cars from running into stopped fire trucks, or pedestrians, or the side of a building.

Fully autonomous cars have to be designed and built for active, detailed, real-time, 360° scanning of their surroundings. In other words, they need a good SLAM system. Unless and until all self-driving cars are equipped with those, then we will continue to see news like this.

If you want to see cutting-edge self-driving car systems, look at Waymo. Not Tesla.

James P Heartney

I’d imagine that poor or ambiguous markings cause plenty of human-driven accidents. I know that a reliable Level 5 system will need to be able to deal with unexpected obstacles; that’s not the point. What I’m saying is that plenty of existing highway infrastructure is poorly/ambiguously marked, and they get away with it by blaming drivers for not figuring it out. That excuse goes away if the same poorly marked streets and highways can’t be navigated accurately by a benchmark self-driving system. And I’m not necessarily even talking about accidents; I’m talking about confusing road markings that only work because the locals have figured them out.

ffbj

Here’s another one.
https://www.youtube.com/watch?v=BEBgmO-heoU
It’s still driver-dependent technology; if you ignore warnings and blatantly disregard potential problems, then you will have problems. You are the pilot.

Looks like when the right line disappeared the AP followed the left lane marking. Surprised there aren’t more accidents there!

carcus

There are at least 5 (and probably more) visual cues that I counted ..
1. A road sign
2. A road sign
3. Beginning of a solid line marker
4. Traffic flow separation
5. Chevrons painted
….. telling either the driver or Tesla’s camera system that a fork in the road is upcoming.

That’s before we even start talking about GPS.

James P Heartney

The chevrons are barely visible. Traffic flow separation doesn’t help if there’s no traffic. And the strong line on the left along with nearly nonexistent chevrons creates a strong visual ambiguity, which you can’t/shouldn’t try to patch over with some signs. This is very bad road marking, and would be even if there were no auto-piloted cars driving there.

carcus

6. Wear lines in the concrete
7. ‘arrows’ road sign
8. Light pole in direct path
9. Edge of bridge in direct path

..before we even start talking about GPS

James P Heartney

“Wear lines in concrete”
Barely visible, and concrete textures vary dramatically all over road surfaces. Hardly a guide for where lanes are.

And the objects in direct path are all in the distance. The visible lines painted on the road essentially depict a lane that ends in a flat obstacle. As I said, poorly marked. Now, most human drivers can overcome these mis-markings, but that doesn’t change the fact that they’re poorly done. And it would not have been difficult for the highway dept. to paint the lines in correctly.

carcus

So the Tesla auto-pilot just follows a line painted on the concrete? That’s all it knows how to do? That’s the only data coming in and that’s the only data it cares about?

Pushmi-Pullyu

Yes. Exactly.

There are other things that Autopilot can sometimes use for cues, such as other vehicles on the road and their direction of travel. But if there isn’t a line of cars to follow, then all Autopilot has to go on is the lane markings, plus whatever data is in its navigation database.

So yes, sometimes — probably often — Autopilot is relying on nothing but the ability of its cameras and its optical object recognition software to “see” the lane markings.

So now, carcus, perhaps you understand why those using AutoSteer must keep their eyes on the road at all times, ready to take over when Autopilot doesn’t detect a dangerous driving situation.

carcus

Nope. I don’t understand.

What I thought I understood — was that Tesla was utilizing/processing all the millions and millions of miles travelled/recorded by past/current Tesla owners… feeding data into the giant AI brain that would then “tentacle” out to current/future Tesla owners, catapulting them to the front of the competition in autonomous driving.

Remember?

The Amazing Ways Tesla Is Using Artificial Intelligence And Big Data
https://www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/2/#5128037f30f0

David

Self-driving is nowhere near ready for prime time. Tesla’s marketing makes you think it is, while they warn you to keep your hands on the wheel.
There is zero value in autopilot if you have to watch its every split second move. Better to have systems that protect the driver rather than give the false impression that they’re driving for you. Elon wants it both ways: make the promise by giving the impression of self-driving, and disavow responsibility with a keep-your-hands-on-the-wheel disclaimer.

Another Euro point of view

Exactly, it is Tesla who rushed this technology onto the roads while other carmakers thought the technology was not mature enough. I believe it is now time for regulators to put a strict regulatory framework around this, before other carmakers come to believe that having your systems beta-tested by your customers is the right way to go after all.

arne-nl

Autopilot is not sold as self-driving. It is a driver assist feature.

A distinction that seems to be lost on some people, including Tesla drivers…

Asak

I’d argue that it’s marketed in a way that is at best ambiguous and at worst outright misleading. Even the name implies that it is some sort of self driving system. Tesla talks a lot about rolling out full self driving in the future–how the Model 3 is equipped for it.

A lot of people clearly don’t heed the warnings and believe the system is more capable than it is. It doesn’t matter what sort of disclaimers you include, if that’s how people are using it then there’s a problem.

Pushmi-Pullyu
“There is zero value in autopilot if you have to watch its every split second move.” Wrong. And you mean AutoSteer, not “autopilot”. But the NHTSA reports that Tesla cars with Autopilot + AutoSteer installed — mind you, that’s merely installed, not necessarily operating — have a ~40% lower accident rate than Tesla cars without AutoSteer. I’d call that very, very far from “zero value”. I’d say that’s worth quite a lot! What amazes me is that so many comments here seem to take as their premise that no human driver would ever drive into that same collapsible barrier. Why do you think the barrier is there? Why do you think the barrier had been collapsed two weeks earlier? Because someone drove into it, that’s why! And that “someone” was almost certainly a human driver, not a robotic one. It is of course a tragedy that someone was killed in a Tesla car under the control of AutoSteer. But no more tragic than any other time a human being is killed in a traffic accident. It’s really depressing seeing so many people react with fear, rather than with reason, suggesting that nobody should be using AutoSteer just because of one… Read more »
Scott Franco

What was Asimov’s Rule of Robotics? Do no harm to humans?

The rule for cars is DON’T HIT ANYTHING. Yep, no matter what the lines say (or don’t say). That’s why LIDAR/RADAR is superior. It’s a simple 2-D calculation: that object out there, moving or not, is going to intersect the position of this car. Apply brakes.
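Scott’s “simple 2-D calculation” can be sketched as a constant-velocity collision check: project the detected object forward in the car’s frame and brake if it will enter the car’s swept path within the stopping horizon. A simplified, illustrative Python version with constant velocities, no sensor noise, and made-up thresholds:

# Simplified "don't hit anything" check: in the car's frame, project a detected
# object forward under constant relative velocity and brake if it will pass
# through the car's swept path within the time needed to stop.
def should_brake(obj_x, obj_y, rel_vx, rel_vy, ego_speed,
                 lane_half_width=1.5, decel=6.0, margin_s=0.5):
    # obj_x: metres ahead of the car, obj_y: metres left(+)/right(-) of centreline
    # rel_vx/rel_vy: object velocity relative to the car (a stationary object has rel_vx = -ego_speed)
    stop_time = ego_speed / decel + margin_s
    t = 0.0
    while t <= stop_time:
        x = obj_x + rel_vx * t
        y = obj_y + rel_vy * t
        if 0.0 <= x <= 5.0 and abs(y) <= lane_half_width:
            return True  # object will be inside the car's path: brake
        t += 0.05
    return False

# a stationary barrier 60 m ahead, dead centre, car doing 26 m/s (~58 mph)
print(should_brake(obj_x=60.0, obj_y=0.0, rel_vx=-26.0, rel_vy=0.0, ego_speed=26.0))  # True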

(⌐■_■) Trollnonymous

Wasn’t that Uber car outfitted with LIDAR/RADAR?

Pushmi-Pullyu

Yes.

I think we now have a reasonably good understanding of why a Tesla car hit that collapsed barrier on the highway. Tesla cars, unfortunately, don’t have active scanning with lidar or high-resolution radar. No active scanning to detect fixed obstacles; only low-resolution Doppler radar used to detect large moving obstacles. “Large” as in moose-sized or larger, according to Elon Musk.

Contrariwise, I have no idea why that Uber car hit and killed a pedestrian walking her bicycle. The Uber car was equipped with an active scanner, a rotating lidar scanner. So why didn’t the self-driving system detect the pedestrian as an obstacle in the roadway, and stop?

Pushmi-Pullyu
Isaac Asimov’s “Three Laws of Robotics”: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. * * * * * But Asimov envisioned his “positronic” robot brain in an era of analog computers, when a “top-down” architecture made sense for designing computers and their controlling system all as one unit. No software, just hardware. Contrariwise, today we use general-purpose digital computers, with the function being controlled entirely by software. Asimov’s top-down approach does not work in designing complex real-world software. For example, let’s consider the difficulty of getting software that could reliably recognize what a “human” is, and distinguish it from not-human. Consider the difficulty of using nothing but video camera images to identify a “human” in such situations as: 1. A child on a tricycle 2. A baby in a stroller 3. An amputee hobbling along on crutches 4. An unconscious pedestrian lying down in… Read more »
Peter Miller

I guess there is a big issue with Tesla’s statement that “Tesla Autopilot does not prevent all accidents – such a standard would be impossible – but it makes them much less likely to occur.” That may be true, but basically the same “driver”, i.e. Tesla’s Autopilot software, is installed on all Tesla cars. That means the same mistakes will occur on all cars, and the software is very likely to repeat the same mistakes over and over until it is fixed. In the long haul, that might be an advantage to safety, but clearly the same deadly crashes and accidents could happen repeatedly across the country until any given issue is fixed.

Pushmi-Pullyu

Tesla said — at least, I think they said — that there had been ~200 instances of a Tesla car under control of AutoSteer passing the same point on the highway in the two weeks since that collapsible barrier was collapsed in a previous non-Tesla-related accident.

So why didn’t any Tesla car hit that collapsed barrier on an earlier date?

Well, we have seen one comment claiming someone had had a similar problem at that same point on the road, and had to grab the wheel and steer away from the barrier. So it’s reasonable to conclude that there were one or more previous instances of a near-accident, where the human driver was alert enough to take control of the car away from AutoSteer.

Also, the victim’s brother claimed the car had tried to steer into that same barrier “7 or 10 times before”, but logic and reason suggest that is at least an exaggeration. (That is, if the driver had experienced a near-accident at that exact place 7 or 10 times before, then he had repeated very clear warnings of very real danger, and certainly should have been very alert and hands-on when driving that section of road.)

TwoVolts

Tesla stated on March 27th: “There are over 200 successful Autopilot trips per day on this exact stretch of road.”

You keep insisting on telling others they really mean Autosteer when they correctly refer to Autopilot.

Get Real

What the serial anti-Tesla trolls don’t understand or admit is that in any technological revolution there will be both advances and (temporary) setbacks.

This is how progress is made and it has already been proven that Tesla’s tech has already reduced the frequency of accidents some of which may have been fatal if they hadn’t been avoided by the tech.

The bottom line is, we will not have rapid advances if we let THE PERFECT BE THE ENEMY OF THE GOOD!

Edge cases like this one are very unfortunate, but they will happen no matter what (the Uber example also comes to mind), and collectively they lead to refinements that improve the system and advance the tech.

Asak

Tesla *claims* it’s reduced accidents. They also claim a lot of other stuff.

DJ

Well, my guess is that they’re going to be screwed here. I really don’t get why their cars don’t stop before they hit solid objects. I mean, what: this, an ambulance, a stopped car on the freeway, etc. What is the point of all this self-assisted driving if it doesn’t literally slam on the brakes to keep you from plowing into a fixed object!?!?!?

Still that said I wish people wouldn’t feel the need to whip out their phone and video themselves trying to recreate something like this. The driver could have easily caused an accident around them but hey he gots to be on YouTube!!!

Pushmi-Pullyu
The answer — and you’re not going to like it, but it’s the answer — is that Autopilot is not intended to react to fixed obstacles. With the rather limited sensors in Tesla’s current cars, an attempt to design the car to react to fixed obstacles would result in an overwhelming number of false positives. Just look at the number of trees outlined in green in the side views in the video linked below. According to the caption in the video, a green outline is something the car detects as being an “in-path” obstacle. (Remarkable that a tree well off to the side of the road could be “in the path” of the car! 😉 ) There are something like 200 trees detected as “in-path” objects during that ~10 minute drive! This very clearly shows the limitations and unreliability of using optical object recognition software. If a Tesla car under the control of AutoSteer stopped every time Autopilot detected an “in-path” obstacle, then it literally would never go anywhere! Now, the above analysis does contain some assumptions which might not be true. Perhaps Tesla has improved the ability of its cars to reduce the number of false positives for in-path… Read more »
Pushmi-Pullyu
I think the arguments here over whether or not the lane markings were clear, really miss the fundamental issue. The important point is that if we’re going to trust our lives to a self-driving car, then it needs to have a better “view” of its surroundings than just following lane markings. It seems to me that what Tesla is doing is trying to milk the very limited information its sensors get for every last scrap of info, such as the claim of “bouncing a radar beam off the road to detect a car in front of the car ahead of you.”. That Tesla has actually managed to demonstrate this in practice is remarkable, altho I’m very skeptical that this can be done reliably. However, pushing the envelope of what’s possible with very limited sensors is ultimately futile. This accident, and the one in which a Tesla car under control of AutoSteer ran into a fire truck parked in a traffic lane, point out the limitations of relying on cameras and low-resolution Doppler radar, the latter of which is only intended to detect large moving objects. What is needed is active, real-time 360° scanning of the immediate environment around the car.… Read more »
TwoVolts

“It stops short of impact, but it’s obvious Autopilot is not capable of correctly handling this situation.”

The article’s opening statement is ambiguous and could be interpreted as implying AP stopped the car. To be clear, Autopilot (it) did not stop short of impact. The speed clearly does not decrease in the video until the sound of AP being disengaged is heard. The human intervened and stopped short of impact here.

Pushmi-Pullyu

Well, it does seem the human driver intervened, because he apparently let the hand-held camera droop when he grabbed the wheel. But if there is “the sound of AP being disengaged”, then I don’t know what that would be. I don’t hear anything but road noise, some incidental noise from the driver moving around, and something that sounds like brakes being applied. Maybe the latter is what you meant? Hitting the brakes would certainly disengage AutoSteer. (Autopilot is always on and can’t be shut off by the driver.)

TwoVolts

The sound is there – at 00:31 mark. It is also the point at which the speed begins to drop from the constant 59 mph. Turn your sound up and try again. Maybe try some earbuds or headphones.

TwoVolts

Tesla’s March 30th official statement refers to Autopilot being engaged:
“In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged…”

Are you saying AP cannot be disengaged?

TwoVolts

Pu-Pu,
Please clarify. When I referred to AP being disengaged, your reply included this statement: “Autopilot is always on and can’t be shut off by the driver.”

I then pointed out that Tesla’s March 30th statement referred to Autopilot being engaged: “In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged…”

Why do you insist on stating that Autopilot can’t be shut off? Clearly, Autopilot was engaged at the time of the crash according to Tesla’s own statement. Are you saying that once AP is engaged, it cannot be disengaged?

Nigol T.

We volunteer all our personal information and we gladly agree to become guinea pigs for autonomous cars. Something is very very wrong with the common man.

Lou Grinzo
Time for Lou to put on his curmudgeon hat… I’ve been a programmer professionally and/or as a hobbyist for 41 years. If there’s a software design or coding mistake I haven’t made, then it hasn’t been invented yet. As I keep saying online, I REALLY want to see fully autonomous vehicles become available, as I am sick to death of dealing with the moronic drivers on public roads. So I’m completely, almost militantly pro-AV. But I also know that: 1. AV won’t grow beyond the nifty parlor trick stage for a long time. 2. Companies will continue to oversell AV tech., for competitive reasons. 3. Consumers will grossly overestimate what AV can do. 4. The legal system will, as always, lag behind tech. advances. 5. The only way to make significant advances in AV will be a combination of [a] vastly more capable AI and [b] vastly smarter roads that provide information to cars. As I keep pointing out, I’ve personally encountered numerous situations, including icy roads, police checkpoints, EMS vehicles on the side of the road, accidents, detours, drunk drivers, etc. that would be very challenging for any car to handle on its own without [a] and/or [b] above. 6.… Read more »
TwoVolts

Thanks for your unique perspective and excellent summary. Regarding point 5, I also expect that it will take more to get to full autonomy than just software reacting to sensor input. It seems to me that we will need roads talking to cars, and cars talking to other cars as well.

Pushmi-Pullyu

@Lou Grinzo:

Thank you very much for your remarks!

How refreshing to see someone who takes the long view of the subject, looking at the forest rather than minutely examining the leaves on one tree.

I also find relevant the section of Larry Niven’s SF short story “Flatlander” where Beowulf Shaeffer, in an era when all travel by “car” is by fully autonomous aircar, describes some groundcar enthusiasts who race their antique human-controlled groundcars on one of only two remaining sections of freeway, maintained for their crazy and very expensive hobby. Imagine trusting your life to driving in a car with no radar, steered and stopped by human muscle power, controlled by mere human reflexes, and limited in traction to four rubber balloons trying to grip smooth concrete.

You’d have to be crazy to do that! 😉

Pushmi-Pullyu
“As I keep pointing out, I’ve personally encountered numerous situations, including icy roads, police checkpoints, EMS vehicles on the side of the road, accidents, detours, drunk drivers, etc. that would be very challenging for any car to handle on its own…” Hopefully car-to-car wireless communications can handle such situations as police checkpoints, and emergency vehicles on the side of the road — or coming up behind the autonomous car. I agree that things like construction zones and detours probably will require “smart roads”, or more precisely, will require construction workers to place multiple markers to steer traffic safely in a temporary lane; markers specifically designed for autonomous cars to detect them easily. Icy conditions and drunk drivers? Well, not just drunk; sleepy drivers and drivers texting on their phones have been shown to be, statistically, just as bad or in some cases worse. Let’s just face reality here and recognize that moving down the road at highway speed is never going to be really safe, no matter what person or robot is in control of the car. At least when autonomous cars become commonplace, that will reduce the number of drunk/ sleepy/ texting drivers on the road! Even an impossibly… Read more »
Jason

I think human drivers would get confused by this intersection, especially if they are not familiar with it. Regardless of AP, I wonder why they haven’t marked it with diagonal lines to indicate a non-drivable part of the road? That is common in my country and makes this sort of intersection much easier to understand.
Add some rain, fog, nighttime, and this is a pretty dangerous bit of highway.