Mobileye Sees What Uber Didn’t, Cyclist Detected 1 Second Before Impact

MAR 27 2018 BY DOMENICK YONEY

The result was obtained using video of the scene

The tragedy of the first pedestrian death caused by an “autonomous” vehicle is only a few days behind us. While many individual voices have been raised in discussion of the incident — we have an ongoing, wide-ranging discussion on the InsideEVs Forum — other suppliers of autonomous vehicle systems are now chiming in. One of them is Mobileye.

Images from a video feed watching a TV monitor showing the clip released by the police.

Once partnered with Tesla in its autonomous program, the Intel-owned outfit has released an editorial (which you can read in full below) that talks up the result of an experiment it carried out. Employing the advanced driver assistance system (ADAS) software already found in many modern cars, Mobileye ran its detection software on footage of a monitor playing the police-released clip. Despite missing a fair amount of high-dynamic-range visual information, the system was still able to detect the pedestrian, Elaine Herzberg, a full second before impact.

Read Also: Public Split Between Tesla And Autopilot Chip Provider Mobileye Gets Messy

With the car reportedly moving at 38 miles per hour, that one second would have given it roughly 56 feet in which to attempt to stop. It seems quite probable that, had such a system been active in this vehicle, Ms. Herzberg would be with us today. Of course, if Uber hadn’t, reportedly, turned off the factory safety system in the Volvo XC90 PHEV, it’s quite likely that system too would have detected the woman walking her bicycle across the road. It also goes without saying that if the car’s operator had not had their eyes off the road for a full five seconds before impact, there is a chance this fatality could have been avoided.
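
As a rough check of that arithmetic, here is a back-of-the-envelope sketch; only the 38 mph speed and the roughly one-second detection lead come from the reports above, everything else is simple unit conversion.

```python
# Back-of-the-envelope check of the distance figure quoted above.

MPH_TO_FTPS = 5280 / 3600      # 1 mph = ~1.4667 ft/s

speed_mph = 38.0               # reported speed of the Uber vehicle
speed_ftps = speed_mph * MPH_TO_FTPS

detection_lead_s = 1.0         # Mobileye's detection, ~1 second before impact
distance_ft = speed_ftps * detection_lead_s

print(f"{speed_mph} mph = {speed_ftps:.1f} ft/s")   # 38.0 mph = 55.7 ft/s
print(f"Warning distance: {distance_ft:.1f} ft")    # ~55.7 ft, call it 56 ft
```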

Besides discussing the importance of legacy ADAS software and how it works, the editorial, written by Amnon Shashua, senior vice president at Intel and Mobileye’s CEO and CTO, also discusses the need for redundancy built into these systems, and outlines how Mobileye achieves “true redundancy” by having a “separate end-to-end camera-only system and a separate LIDAR and radar-only system.”

While we lack the technical expertise to fully evaluate the quality of the Mobileye system, we can get behind the final portion of the editorial in which he calls for a convening of “automakers, technology companies in the field, regulators and other interested parties” for a discussion of a safety validation framework for autonomous vehicles.

Here’s the press release from Intel, owner of Mobileye:

Now Is the Time for Substantive Conversations about Safety for Autonomous Vehicles

Society expects autonomous vehicles to be held to a higher standard than human drivers. Following the tragic death of Elaine Herzberg after being hit last week by a self-driving Uber car operating in autonomous mode in Arizona, it feels like the right moment to make a few observations around the meaning of safety with respect to sensing and decision-making.

First, the challenge of interpreting sensor information. The video released by the police seems to demonstrate that even the most basic building block of an autonomous vehicle system, the ability to detect and classify objects, is a challenging task. Yet this capability is at the core of today’s advanced driver assistance systems (ADAS), which include features such as automatic emergency braking (AEB) and lane keeping support. It is the high-accuracy sensing systems inside ADAS that are saving lives today, proven over billions of miles driven. It is this same technology that is required, before tackling even tougher challenges, as a foundational element of fully autonomous vehicles of the future.

To demonstrate the power and sophistication of today’s ADAS technology, we ran our software on a video feed coming from a TV monitor running the police video of the incident. Despite the suboptimal conditions, where much of the high dynamic range data that would be present in the actual scene was likely lost, clear detection was achieved approximately one second before impact. The images below show three snapshots with bounding box detections on the bicycle and Ms. Herzberg. The detections come from two separate sources: pattern recognition, which generates the bounding boxes, and a “free-space” detection module, which generates the horizontal graph where the red color section indicates a “road user” is present above the line. A third module separates objects from the roadway using structure from motion – in technical terms: “plane + parallax.” This validates the 3D presence of the detected object that had a low confidence as depicted by “fcvValid: Low,” which is displayed in the upper left side of the screen. This low confidence occurred because of the missing information normally available in a production vehicle and the low-quality imaging setup from taking a video of a video from a dash-cam that was subjected to some unknown downsampling.
The software being used for this experiment is the same as included in today’s ADAS-equipped vehicles, which have been proven over billions of miles in the hands of consumers.

Images from a video feed watching a TV monitor showing the clip released by the police. The overlaid graphics show the Mobileye ADAS system response. The green and white bounding boxes are outputs from the bicycle and pedestrian detection modules. The horizontal graph shows the boundary between the roadway and physical obstacles, which we call “free-space”.
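
To make the three modules described above a little more concrete, here is a deliberately simplified sketch of how cues from independent detection sources might be cross-checked into an overall confidence. The structure, names and thresholds are illustrative assumptions for this article, not Mobileye’s actual code.

```python
from dataclasses import dataclass

@dataclass
class FrameEvidence:
    bbox_detected: bool    # pattern-recognition module drew a bounding box
    free_space_hit: bool   # free-space module flags a road user above the line
    sfm_validated: bool    # structure-from-motion ("plane + parallax") confirms a 3D object

def detection_confidence(ev: FrameEvidence) -> str:
    """Toy cross-check of the three independent cues; the voting scheme is an
    assumption made for illustration only."""
    votes = sum([ev.bbox_detected, ev.free_space_hit, ev.sfm_validated])
    if votes == 3:
        return "High"
    if votes == 2:
        return "Low"       # roughly the degraded "fcvValid: Low" case described above
    return "None"

# Video-of-a-video case: bounding box and free-space fire, 3D validation is marginal.
print(detection_confidence(FrameEvidence(True, True, False)))   # -> "Low"
```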

Recent developments in artificial intelligence, like deep neural networks, have led many to believe that it is now easy to develop a highly accurate object detection system and that the decade-plus experience of incumbent computer vision experts should be discounted. This dynamic has led to many new entrants in the field. While these techniques are helpful, the legacy of identifying and closing hundreds of corner cases, annotating data sets of tens of millions of miles, and going through challenging preproduction validation tests on dozens of production ADAS programs, cannot be skipped. Experience counts, particularly in safety-critical areas.

The second observation is about transparency. Everyone says that “safety is our most important consideration,” but we believe that to gain public trust, we must be more transparent about the meaning of this statement. As I stated in October, when Mobileye released the formal model of Responsibility-Sensitive Safety (RSS), decision-making must comply with the common sense of human judgement. We laid out a mathematical formalism of common-sense notions such as “dangerous situation” and “proper response” and built a system to mathematically guarantee compliance with these definitions.
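
For readers curious what that mathematical formalism looks like in practice, below is a sketch of the RSS minimum safe longitudinal following distance as published in the RSS paper (Shalev-Shwartz, Shammah and Shashua, 2017). The parameter values used here are illustrative assumptions, not figures from Mobileye.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance (m) per the published RSS formula.

    v_rear, v_front  - speeds of the following and lead vehicle (m/s)
    rho              - response time of the following vehicle (s)
    a_max_accel      - worst-case acceleration of the follower during rho (m/s^2)
    b_min_brake      - braking the follower is guaranteed to apply (m/s^2)
    b_max_brake      - hardest braking assumed for the lead vehicle (m/s^2)
    All default values are assumptions chosen for illustration.
    """
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_rho ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Example: both vehicles travelling at 17 m/s (roughly 38 mph).
print(f"{rss_safe_longitudinal_distance(17.0, 17.0):.1f} m")   # ~50 m
```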

The third observation is about redundancy. True redundancy of the perception system must rely on independent sources of information: camera, radar and LIDAR. Fusing them together is good for comfort of driving but is bad for safety. At Mobileye, to really show that we obtain true redundancy, we build a separate end-to-end camera-only system and a separate LIDAR and radar-only system.
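
A minimal sketch of the decision logic this implies: each end-to-end channel must independently confirm the path is clear before the vehicle proceeds, so a hazard seen by either channel alone is enough to trigger a response. The structure and names below are illustrative assumptions, not Mobileye’s architecture.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    distance_m: float      # range to the detected road user

def path_is_clear(detections: List[Detection], stop_dist_m: float) -> bool:
    """One channel votes 'clear' only if nothing lies within stopping distance."""
    return all(d.distance_m > stop_dist_m for d in detections)

def safe_to_proceed(camera_only: List[Detection],
                    lidar_radar_only: List[Detection],
                    stop_dist_m: float) -> bool:
    """'True redundancy' reading: both independent channels must agree the path
    is clear; the channels are never fused for the safety decision itself."""
    return (path_is_clear(camera_only, stop_dist_m)
            and path_is_clear(lidar_radar_only, stop_dist_m))

# Example: the camera channel sees a pedestrian 20 m ahead while the
# lidar/radar channel reports nothing - the vehicle must still brake.
camera = [Detection("pedestrian", 20.0)]
lidar_radar: List[Detection] = []
print(safe_to_proceed(camera, lidar_radar, stop_dist_m=30.0))   # -> False
```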

More incidents like the one last week could do further harm to already fragile consumer trust and spur reactive regulation that could stifle this important work. As I stated during the introduction of RSS, I firmly believe the time to have a meaningful discussion on a safety validation framework for fully autonomous vehicles is now. We invite automakers, technology companies in the field, regulators and other interested parties to convene so we can solve these important issues together.

Professor Amnon Shashua is senior vice president at Intel Corporation and the chief executive officer and chief technology officer of Mobileye, an Intel company.

Source: Intel

Categories: General

78 Comments on "Mobileye Sees What Uber Didn’t, Cyclist Detected 1 Second Before Impact"

mxs

That upcoming lawsuit will be one of the most watched in recent history. It will dictate where the industry and regulation go from here…

At this point it does not look good for Uber or the state of Arizona, in my distant view.

pjwood1

People will continue to die at the hands of AVs and drivers. The debate I’m interested in is whether it’s a foregone conclusion human driving should be illegal? Necessary for Level 5?

The fight over public real estate, known as “roads”, is moving faster than many realize. Those profiting from AVs won’t care if a car becomes a slow-moving advertising cage.
https://tinyurl.com/ycaps2nj

(⌐■_■) Trollnonymous

The Uber car failed 100% as well as the supposed driver.
The system is supposed to detect objects in front of the vehicle; it did not.

Tragic and epic FAIL.

WadeTyhon

The failure of Uber isn’t surprising considering California kicked Uber out of the state for failing to adhere to regulations. No real surprise the company and their drivers continue to put safety second.

ItsNotAboutTheMoney

Actually, I wouldn’t say that the Uber car failed. The Uber didn’t react to the pedestrian.

But then, it’s not _actually_ an autonomous car. It’s a car with a development system, which is why there was a driver in it.

What failed was the driver, or the driver and Uber.

menorman

This sort of situation should’ve been tested on a closed course way before the cars ever made it onto public roads. The fact that it happened is absolutely a failure on Uber’s part.

TwoVolts

I don’t think you can distinguish between the driver and Uber. The driver was hired, trained, and presumably managed by Uber. The combination of the driver, the car, and the training/monitoring of driver performance is 100% Uber.

u_serious?

Uber car failed to see the object.
The hired uber idiot didn’t see the object.
The car is supposed to be designed to see and avoid objects.
Complete and utter failure by Uber.
Not sure how dumb one has to be to think it’s not 100% Uber’s fault. Pedestrians will always be a factor PERIOD!

ItsNotAboutTheMoney

In a non-“autonomous” car when there’s a crash nobody would hesitate to blame a driver who was not just momentarily distracted, but focused more on their phone than on driving.

But here, oh no, it’s apparently 100% the fault of the car company and not the person whose entire reason for being in the car was for situations like this.

Given the crappy camera footage we can’t ever know whether the pedestrian would not have been killed had the “driver” been doing their job, but having experienced having to brake from 50mph to 0mph for deer and moose (notoriously hard to see) while driving at night, my instinct is that a reasonably attentive driver would have at least had a much better chance to catch a glimpse of something that would have primed them to brake and reduce the speed of the crash.

jimjfox

Uber Technologies Inc. disabled the standard collision-avoidance technology in the Volvo SUV that struck and killed a woman in Arizona last week, according to the auto-parts maker that supplied the vehicle’s radar and camera.
https://www.bloomberg.com/hyperdrive

Martin Winlow

You (and seemingly at least half the commentators on this issue including Mobileye) have completely missed the point. No system will *ever* be 100% effective at preventing collisions. Whatever autonomous vehicles we eventually end up with only have to be a bit better than humans, on average, to start saving lives – not to mention huge sums of money. I appreciate that won’t make the victims (or their friends and families) of this or any other collision between vehicle and pedestrian, where the latter is injured, feel any better.

scottf200

And this camera-only detection was using a very questionable source (and video options). Numerous other videos from that same spot and time of day show it is *much* brighter. The camera-only detection would have seen the person and bike from a much farther distance.

dan

“Brightness” is not something you can determine from looking at an image. Camera exposure and aperture can make a dark night look bright or a sunny day look dark. What you’re really looking for is contrast. In low light settings, there is far less contrast to work with. The image you posted may have been taken when standing still facing away from traffic, whereas a front facing camera that needs to deal with headlight glare and other effects may have turned the gain way down. Humans have similar trouble driving at night with oncoming traffic. Our eyes have a far greater dynamic range than most cameras, though. So, we can quickly readjust after a car passes in ways that current technology cannot.

scottf200

Yes, I understand that. Here is a video from that same area. There have been many. The one provided by Uber is suspect.
http://www.youtube.com/watch?&v=1XOVxSCG8u0

dan

With a little bit of very basic image processing (something as simple as multiplying the intensity value by some factor) you’ll get the two videos to look identical. The fact that the MobilEye detected shapes in the dark makes it clear that the contrasting shades were there in that video.

menorman

If MobilEye was only able to detect the woman a second before impact, they definitely weren’t “detecting shapes in the dark” because she’d already become visible in Uber’s video by then.

Pushmi-Pullyu

Well said.

The question here shouldn’t be over how far away a camera can see at night, when things are illuminated only by the car’s headlights, and when a camera may be dazzled by the bright lights of an oncoming car.

The question here should be over why the car didn’t detect the obstacle (that is, the pedestrian) with its scanning lidar. That suggests a pretty basic failure which, as Menorman said in a comment above, should have showed up in closed-course testing before the test cars were ever let out on public roads.

I wonder if there was some “edge case” here that isn’t being recognized in descriptions of the event. Some unusual condition that caused a failure of the car’s ability to “see” the pedestrian with lidar.

TwoVolts

Is it possible that the car’s Lidar did detect the pedestrian- but Uber’s software ignores detection inputs below a certain threshold – so as not to trigger false emergency braking for example?

Martin Winlow

(I do not intend to insult the victim of this collision or their families here – if plain speaking offends you, please do not continue to read this comment.)

I’m afraid, for me (with 30 years of inner-city policing behind me), just as significant a question as “Why didn’t the car see the pedestrian?” is the question “Why didn’t the pedestrian see the approaching car (and get out of the way)?”. The answer, of course, is the same one that kills thousands of pedestrians every year. I’m not so much interested in blame, anymore – I just want to prevent these things from happening in the first place.

I see autonomous vehicles as being a huge step forward in reducing road casualties (‘KSI’ – killed or seriously injured) and predict at least a 90% reduction once autonomous vehicles become widely adopted. Pedestrians who end up the victim of road traffic collisions generally fail to take adequate responsibility for their own safety in an obviously dangerous situation – usually through inattention, laziness or inebriation (mostly). You only have to stand near a controlled (with traffic lights) pedestrian crossing and watch people’s behaviour to learn this – if you haven’t found out… Read more »

Aaron

When you increase the intensity, as you mention, you’re also amplifying noise, especially at night. This makes detection even more difficult.

dan

No more difficult. You’re just multiplying everything by a constant. My point was that the information content is the same no matter how much you scale the values. Every value gets doubled and all of a sudden the human eye can make out what is in the shadows. That doesn’t change the information content that was always in those shadows.

Me

“My point was that the information content is the same no matter how much you scale the values.”.

Wrong. There are threshold limits for the color range in image bytes. Also, camera sensor sensitivity accounts for losing details in the image.

dan

You’re telling me that there is more noise in the series 30,35,30,15,10 than in the series 60,70,60,30,20 ?

Here is some advice: look for careers that don’t involve hard statistics or data science.

Pushmi-Pullyu

Pure math doesn’t have to deal with noise in the system. Your purely theoretical example isn’t relevant to the situation, which involves real-world electronic systems and random noise.

The real world is messy and hard to simplify, unlike mathematics.

pjwood1

“all of a sudden the human eye can make out what is in the shadows.”

I agree with others, that the human eye in this case would “expose” for areas around the light’s hot-spot. The camera “exposed” for the hot-spot, which means it under-exposed for the shadows. That, or Uber is playing up a “she came out of nowhere” factor.

Regardless, LIDAR being laser, it didn’t see what the camera did.

Pushmi-Pullyu

“My point was that the information content is the same no matter how much you scale the values.”

This is easily disproven. Turn up the gain too much on a stereo amplifier, and all you’ll get is white noise. Turn it down too much, and you’ll get no sound at all out of the speakers no matter how loudly you shout into the mic.

No doubt something similar applies to signal amplification in a video system.

dan

You’re confusing sensor sensitivity with image processing. If you’re simply manipulating the pixel data (which is the only difference between the two images being compared in the original post), you haven’t fundamentally changed anything in the underlying data other than scaling the values by a constant number.

TeslaPlease

The Latest Tesla Crash Needs Investigating As Well!!!

With ZERO evidence, the CHP ASSUMES the Tesla driver was not being aided by AutoPilot and ‘lost control.’

I want to know if AutoPilot was engaged and operating at the time of the accident. Most 38 yo men do not drive into a fixed concrete object at highway speeds.

Tesla has been surprisingly quiet and has said nothing about whether AP2 was in operation.

‘The CHP wrote on Twitter to confirm Huang’s death at 3:42 p.m. Friday, approximately 40 minutes after crews had completely cleared and opened all lanes of southbound Highway 101.

‘Huang was traveling at freeway speeds on the split along the Highway 101 and state Highway 85 junction, lost control and struck the middle barrier causing his vehicle to catch fire, according to the CHP. A Sig-Alert was issued at 9:29 a.m., the time that the CHP was notified of the crash.’

Robert Weekley

I was only about 25-26 when I had my 1983 Mazda RX7 with an aftermarket turbo, when I ‘nearly’ nailed a concrete abutment lead-in section that ramps up to the main concrete guard berms, at 90-100 kph! I missed by only a few feet, as a sudden panic steering correction was the thing that saved my butt!

This Driver might not have been able to make such a correction, without a side collision with Traffic, and did not execute such a maneuver, in any case.

(⌐■_■) Trollnonymous

“With ZERO evidence the CHP ASSUMES the Tesla driver was not being aided by AutoPilot and ‘lost control.”

How do you know there was zero evidence?

I’m curious also if AP was on.

I’m also curious what the driver was doing that caused them to lose control. My thought is they were fiddling with the big-ass screen in the car, trying to get to some part of a menu or some crap like that.

TeslaPlease

1) The CHP officer was not a witness to the accident. 2) The driver is dead. 3) There was no passenger. 4) Tesla has not shared any data results from the vehicle. 5) The CHP does not know if AutoPilot was operating.

He/she should have stated, “I am not a crash scene investigator and we will have to wait till a full investigation has been completed.”

The police officer did the same damn thing in AZ and had to pull back her comments because they love to speculate and pass judgment prematurely.

These post crash on-site police officers need to ‘stay in their lane’

TwoVolts

NTSB is investigating. NTSB tweet from today: “2 NTSB investigators conducting Field Investigation for fatal March 23, 2018, crash of a Tesla near Mountain View, CA. Unclear if automated control system was active at time of crash. Issues examined include: post-crash fire, steps to make vehicle safe for removal from scene.”

Traveling at “freeway speeds” does not rule out the possibility that AP was active. The energy-absorbing part of the crash barrier was reportedly not reset from the previous accident. If it had been set properly, the chance of survivability – especially in a Model X – would have increased significantly.

There are two main competing theories as to what likely happened. There are two HOV lanes on the left that approach the barrier. The leftmost lane leads to 85; the rightmost HOV lane stays on Hwy 101. One theory involves the driver making a late attempt from the leftmost HOV lane to merge right in a failed effort to stay on 101. Apparently, late attempts to merge right from the leftmost HOV lane are common, according to those familiar with the area. The other theory involves the Tesla being in the rightmost HOV lane (which proceeds on to… Read more »

TwoVolts

Tesla crash theory #2 explained in a 36-second video:

https://www.youtube.com/watch?v=tvH7Z5bxBM0#action=share

TeslaPlease

Appreciate the post. The technology is so new and there is still so much to learn.

Get Real

More attempts at diversion and false equivalency by serial Tesla basher TeslaPlease.

What a troll!

TwoVolts

The ‘trolls’ and ‘serial Tesla bashers’ on Wall Street are apparently seeing some level of equivalency as well – Tesla stock down $25 today.

Mark.ca

You don’t need quotes in your post. Go to SA if you don’t believe there’s a very well prepared troll army always attacking Tesla …ever since around 2012.

TwoVolts

Mark.ca
I don’t doubt that there are plenty of Tesla trolls and anti-EV trolls out there. However, I doubt that serious Tesla trolls that truly want to spread fear, uncertainty, and doubt are going to waste their time at InsideEVs. I have been accused here of being an anti-Tesla troll and FUDster on several occasions – simply because I have suggested that AP may have played a role in the recent tragedy in California.

I find these accusations annoying – especially when the accusers do not bother to refute points – instead preferring name calling and accusations.

I actually have a pretty strong bias in favor of Tesla, and it is not at all lost on me that the Chevy Volts that my family drives probably wouldn’t even exist if Tesla had not started the EV revolution. If you or others disagree with me when I criticize something related to Tesla, I encourage you to refute my arguments.

TwoVolts

What is SA?

Ambulator

Seeking Alpha

Pushmi-Pullyu

The entire stock market has been down sharply of late. Surely that has had a much bigger effect on Tesla’s current stock price. Trying to blame any day’s drop on an accident report seems, at best, rather naive.

TwoVolts

Just reporting what the financial publications are reporting. Here is one of many articles linking today’s stock drop to the NTSB investigation:
https://www.barrons.com/articles/teslas-stock-could-be-in-big-trouble-1522174990

windbourne

Out of curiosity, whose software was running?
Was it Uber’s? Or Volvo’s?

Rick

From the video, a human wouldn’t have been able to stop either. Not even one second between the time you see her and the impact. Pedestrians do too many reckless things, eventually this is the result. Maybe better cameras would see things the human eye doesn’t.

TwoVolts

The Uber video looks suspiciously dark to me. Please review the video and photo above. Also, here is a very good analysis from Zack and Jesse at ‘Now You Know’. They explore the darkness of the Uber video in some depth. It might change your mind.

https://youtu.be/k5vbjl3TNEE

John M

Also, a healthy human eye would have seen the cyclist much more easily than that camera did in those lighting conditions.

menorman

The Uber vehicle is adorned with equipment that doesn’t need visible light to operate, so the video is ultimately extremely misleading in that regard.

Someone out there

The video probably doesn’t capture nearly the same amount of detail that you would be able to see with your own eyes in the situation. The video is limited in dynamic range to begin with and then has been heavily compressed on top.

Pushmi-Pullyu

“Maybe better cameras would see things the human eye doesn’t.”

In general, video cameras simply don’t have the visual acuity and sensitivity to contrast that the human visual system does. It’s not so much that our eyes are better than cameras. They are — but that’s not the main limitation. The real problem with cameras and software used in semi-self-driving cars is that with the current or near-future state of the art of technology, they will never approach the sensitivity and the data processing ability of the visual cortex in the human brain, which is highly developed and very sophisticated.

Frankly, it’s stupid for a self-driving car to be designed to depend on cameras to “see” the surroundings at night. The Uber cars are equipped with scanning lidar, and that should be the primary sensor used to detect obstacles — including moving obstacles such as pedestrians.

We shouldn’t be wasting time arguing about whether or not the car’s camera-based optical object recognition software should have recognized the pedestrian in time to prevent the accident. The problem isn’t so much the hardware of the camera; it’s the wetware of the human brain vs. the software of the self-driving car’s optical object… Read more »

AnonyMouse

Domenick Yoney said:
“Of course, if Uber hadn’t, reportedly, turned off the safety system in the Volvo XC90 PHEV, it’s quite likely it too would have detected the woman walking her bicycle across the road.”

There is a distinction to be made between daytime pedestrian detection and nighttime pedestrian detection with regards to Pre-Collision Systems.

I’m not so sure that the Volvo would have been able to detect and stop (or slow down significantly) for a nighttime crossing pedestrian. The very latest pre-collision systems are only now adding the nighttime pedestrian detection. Previous versions were capable of only daytime pedestrian detection, and were NOT marketed as having nighttime pedestrian detection.

For example, Toyota’s 1st generation SafetySense is able to stop for a crossing pedestrian in daytime, but not for a crossing pedestrian at nighttime, and is not able to stop for a crossing bicyclist in daytime or nighttime.

Toyota’s 2nd generation SafetySense, just released in November 2017 for some 2018 model year cars, is able to stop for a crossing pedestrian in both daytime and nighttime, and is able to stop for a crossing bicyclist in the daytime but not at nighttime.

See video below:
https://m.youtube.com/watch?v=0uq5OIwIQTg

Pushmi-Pullyu

Don’t all AEB (automatic emergency braking) systems use radar for detection? If so, then why would it make any difference if it was day or night?

Or does this “SafetySense” depend on video cameras rather than radar for its detection?

Color me puzzled.

Bill Howland

It has been just released by Automotive News that Volvo’s collision avoidance system was disabled so that UBER’s AV system could be fully in control. A court will examine the wisdom of THAT decision.

(⌐■_■) Trollnonymous

+1

Chris

“Cut ins” and “cut outs” have long been a challenge for autonomous vehicles. Tracking objects accurately in a field of view as far and wide as a human does is computationally very difficult. The cyclist only comes into the system’s view at the last second giving the system very little time to evaluate the situation (ie validity of camera image data, verification by other sensors etc) and therefore little time left to apply the brakes.

This is an area I’m sure all autonomy researchers are working on but at present, as far as I am aware, there is no solution that suits all situations. The view that autonomous vehicles can prevent any chance of a crash and make driving totally safe is just not possible with the current systems and software available.

Pushmi-Pullyu

“The view that autonomous vehicles can prevent any chance of a crash and make driving totally safe is just not possible with the current systems and software available.”

It’s just not possible, period. 100.000% safety is an impossible goal, no matter what future tech is developed. But we shouldn’t demand that impossible standard. We should not prevent autonomous, or semi-autonomous, driving systems from saving lives now, just because they are not perfect. They’re never going to be perfect, so the question — just as with air bags — should not be “has the system been perfected?”, but rather “are you safer with the system or without it”? Air bags are not perfect, but they do save lives. Semi-autonomous cars are arguably already in the same place; and if not, they soon will be.

I don’t at all mean to minimize the tragedy of this Uber car killing a pedestrian. But tens of thousands of Americans are killed every year by human-driven cars, and even more worldwide. Yet we don’t order all human-driven cars to stop driving while we investigate a single accident. “The thing to keep in mind is that self-driving cars don’t have to be perfect to change the… Read more »

jimmuh

“The view that autonomous vehicles can prevent any chance of a crash and make driving totally safe is just not possible with the current systems and software available.”

So why are we not requiring a whole lot more responsibility from the GD ‘driver’?

No checking your phone, no checking the in-car video, no looking in the rear-view mirror talking to your pax, etc.

esp if mr/mrs driver has a felony record…….?

Pushmi-Pullyu

Yes, I think Uber bears some responsibility here. They are paying people to monitor the self-driving cars, with the intention of taking over in case of emergency; a situation which failed here, at least in part because the human monitor was texting on his phone.

The monitors need to be monitored! And if there is a pattern of monitors not paying attention, as they are paid to do, then Uber should respond to that, with shorter shifts, or with two human monitors per vehicle, allowing frequent swaps, and/or by firing any monitor who texts or otherwise takes his eyes off the road when he’s being paid to keep his eyes on the road.

James P Heartney

Uber had also apparently stopped using two safety drivers at a time, presumably so they could get more mileage covered.

Uber should get out of the autonomous-driving development business. They obviously don’t have the corporate culture to do it safely, or even effectively.

At first I was in the camp that thought even a human driver would have hit the pedestrian. But I’m now persuaded that the Uber video is much less revealing than what a human would have seen in that situation, and so if the safety driver had been looking, the accident would likely have been avoided.

The silver lining to this terrible accident is that if the safety driver had stopped in time, we would not have been made aware of how substandard the Uber development process has been. So there might have been more accidents later due to Uber recklessness. Uber should have their permission to do testing revoked permanently, as a warning to others.

TwoVolts

I just read that Arizona has revoked Uber’s testing privileges in the state.

CDAVIS

Likely Mobileye/Intel and Tesla will continue to emerge as the two leading Driver Assist & Autonomous Driving platforms.

Currently, nearly all autonomous vehicle projects leverage Linux and ROS (Robot Operating System), which are both open source projects. So it’s possible the Mobileye and Tesla navigation projects will themselves evolve to be more open source community projects to further accelerate innovation and adoption of the respective platforms; this will require making each project’s underlying neural network API open source accessible… the open source community being able to spinal-tap into the learning brain.

Related:
Why is Baidu sharing its secret self-driving sauce?
https://www.theregister.co.uk/2017/04/21/baidu_driverless_car_software/

bro1999

Lumping Intel and Mobileye in with Tesla is an insult to Intel and Mobileye.

CDAVIS

Why?

Get Real

Because mental MadBro is a serial anti-Tesla troll and probable employee of GM’s Baltimore operations that makes their electric motors and therefore feels insecure and threatened by Tesla.

Pushmi-Pullyu

Because “MadBro” never passes up an opportunity for gratuitous Tesla bashing, no matter how untrue or off-topic it is.

Obviously MadBro, who is a General Motors fanboy, sees Tesla as a huge threat to GM. If he didn’t, then he wouldn’t spend so much time writing Tesla bashing posts!

Someone out there

This story is the perfect example of what happens when you deploy unfinished but important systems in real traffic. The person supposed to monitor it becomes inattentive since it seems to be working OK, until it suddenly doesn’t.
All of these autonomous tests, including Tesla, Bolt and Waymo, should be limited to closed circuits and dedicated vehicles until it can be proven that they reach a certain level of trustworthiness.

Tim

It seems people are missing a rather interesting angle to this story. Namely, will Uber be held to the standards for a human driver, or to the likely higher standards that could apply to an automated system?

In this case, the fault under today’s standard is with the pedestrian for entering a roadway outside of a crosswalk. This is complicated by the fact that the safety driver is recorded as being distracted, so one might be able to argue shared liability.

The automated system that was on the car, however, should have detected the pedestrian and stopped. The fact that it was installed but didn’t function as designed could potentially change the whole standard being applied. This is further complicated by the assertion that the Volvo at issue actually comes with a safety system that would have worked, but it was disabled and replaced by a safety system that clearly didn’t.

It will be interesting to see whether adding autonomous systems to cars actually means the owner / driver / manufacturer will be held to that higher standard.

TeslaPlease

The state of AZ granted Uber the rights to test their vehicles on public streets. I suspect we are in for a surprise for what leniency they provided.

Pushmi-Pullyu

“It will be interesting to see whether adding autonomous systems to cars actually means the owner / driver / manufacturer will be held to that higher standard.”

In contract law, questions of responsibility and liability usually come down to what was promised by the parties involved. A contract is, after all, a formalized promise. Advertising by a company may or may not qualify as a “contract”, but similar ethical standards apply.

The whole idea behind autonomous driving, or even some of the semi-autonomous driving systems in use today, is that the car manufacturer promises, or at least holds out the possibility for, a higher level of safety than we have with human drivers. I think it’s reasonable to believe the courts will hold such autonomous or semi-autonomous cars to that higher standard. This would follow long-established precedent.

The problem, for any judge and jury, is that “safer” in such a case is a statistical situation, and trying to apply it to an individual case — such as the one under discussion — poses a very large risk of creating a Sweeping Generalization fallacy.

TwoVolts

In a fully autonomous future in which traffic fatalities are cut in half (for example), the automakers will not likely be able to make the statistical argument that things overall are safer – therefore we are not at fault in any individual case. Instead, virtually every auto accident will be litigated with the automakers as defendants.

The key difference today is that individual humans – most without deep pockets – are overwhelmingly at fault and are today’s targets of any wrongful death lawsuits. And in many cases where an individual runs off the road and kills only himself, there is no litigation at all. That will all change drastically when the cars take over the driving. Every accident in the autonomous future will have the deep pockets of the automakers as the targets of litigation.

The obvious irony: automakers will be rewarded for their efforts to make safer cars by getting sued a lot more than today.

Pushmi-Pullyu

“With ZERO evidence the CHP ASSUMES the Tesla driver was not being aided by AutoPilot and ‘lost control.”

With zero evidence, Occam’s Razor certainly suggests that is the correct cause of the accident.

Cars under the direct control of Autopilot + AutoSteer are not known to drive into barriers and guard rails. People are known to do that from time to time.

Now, we should consider the “evidence” that you posted this Tesla bashing — or at least “concern troll” — post in response to an article having nothing to do with Tesla, because your agenda is to damage the reputation of a company which is trying to make the world a better place.

cab

While I agree the various entities named should not be assuming any of this stuff, I would point out that while we don’t hear reports of Teslas driving into barriers routinely, we DO routinely hear of owners having to intervene (take over from AP) to prevent such occurrences. At which point everyone jumps in and declares “it’s beta! You have to watch it and be prepared to take over at any second”…which, of course, totally ignores human behavior.
I mean honestly, if you have to watch it like a hawk and be prepared to take over in a fraction of a second (i.e. when accidents happen), that’s not really “better”. What it IS “better” than, of course, is drivers that ALREADY aren’t paying attention…at which point it becomes “better than nothing” I suppose, and more of a safety aid than a driving aid.

TwoVolts

Such is the current state of affairs in our uncomfortable marriage with level 2 autonomy – which can at times seem to perform at level 5 – thus inspiring overconfidence in many of us.

Human operators need to start ‘getting the memo’ that the current systems aren’t there yet, and can fail unexpectedly at the worst possible moment. But as you point out, that totally ignores human behavior.

The journey to level 5 autonomy is definitely worth it, but it will be an ugly road getting there.

TwoVolts

It would be interesting to know how Uber, Waymo, and others are training their drivers to act in emergency situations. These are test vehicles after all, so the critical question becomes, at what point should the human operator act in response to a perceived emergency?

Should the human act with maximum proactivity and brake as early as possible – possibly preempting the autonomous system? Or should the human only intervene at the very last possible moment so as to allow for the best evaluation of the vehicle’s true response (or non response)? It would seem that even in testing, there might be a conflict between learning and safety.

zll

“True redundancy of the perception system must rely on independent sources of information: camera, radar and LIDAR. Fusing them together is good for comfort of driving but is bad for safety…”

Yikes, that is exactly what Tesla is doing….