Will The Self-Driving Car Sometimes Kill For Safety Purposes?
There are many questions plaguing the design and adoption of the self-driving car.
How can a vehicle make a life or death decision? Who is at fault if the “car” injures or kills someone? How will lawmakers and insurance companies handle these vehicles?
These are all excellent questions, and at this point, we don’t have any solid answers.
Several publications have explored this idea and concluded that self-driving cars will sometimes have to choose to “kill.” Wired recently proposed a scenario in which a self-driving car saves a group of children crossing the street at the cost of the driver’s life, steering directly into an ice cream shop. Hmm … let’s put it into perspective.
Most people would choose to save the lives of others, even if it meant putting their own life at risk. In the spur of the moment, if a group of people ran out into the road, any sane driver would swerve to avoid them, likely damaging the car and harming the driver.
What would the self-driving car do? It will need to be programmed to deal with such decisions. When given the choice between perhaps killing a group of pedestrians or a single driver, the car may kill the driver. But it surely wouldn’t be “choosing” to ensure the driver’s death.
Keep in mind that most auto accidents today are the product of human error. About 30,000 people die each year on U.S. roads alone, and over a million worldwide. The National Highway Traffic Safety Administration found, in its investigation of last year’s Tesla Autopilot fatality, that cars with such capability are actually saving lives.
Regulating bodies and insurance companies will spend countless hours deciding how to handle these cars of the future. No matter how many lives are saved, and how many accidents are prevented, there will still be crashes. Once self-driving cars begin to show up on our public roads, the majority of vehicles will still be driven by people. Until there comes a time that every car on the road is connected and autonomous – which is far, far off – there will be many complications.
Maybe it’s not so cut and dried. Is a computer really choosing to “kill” one to save another? We think not. Humans aren’t making this decision either. Of course, the human driver, just like the robot driver, will likely swerve to avoid bowling over the helpless pedestrians … but neither can predict the future. It is not as if the driver thinks, “Well, I’m going to kill myself to avoid these people.” Instead, the driver or the autonomous vehicle will make a split-second attempt to avoid the collision, and hope for the best.
One might assume that with this new technology, the computer’s faster, more calculated, more controlled decision will lead to more desirable results. While a human driver has no way to know exactly how the car will react to a sudden swerve, the computer can use a mathematical model of the situation to choose the best move and control the aftermath. If the technology can assess all factors – speed, road conditions, distance, other obstacles, and so on – within the blink of an eye, we can’t imagine it wouldn’t choose the best available course of action.
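To make that idea concrete, here is a minimal sketch of how such a decision could be framed as picking the lowest-risk maneuver from a set of candidates. Everything here – the maneuver names, the risk factors, the weights, and the numbers – is an illustrative assumption, not any manufacturer’s actual logic.

```python
# Hypothetical sketch: score candidate evasive maneuvers and pick the one
# with the lowest expected risk. All values and weights are made up for
# illustration only.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    collision_prob: float        # estimated probability of any collision (0..1)
    expected_harm: float         # estimated severity if a collision occurs (0..10)
    loss_of_control_risk: float  # risk of oversteering into a new obstacle (0..1)


def risk_score(m: Maneuver) -> float:
    """Combine the factors into a single expected-risk number (lower is better)."""
    # Weight loss-of-control heavily: a spun-out car is a danger to everyone.
    return m.collision_prob * m.expected_harm + 5.0 * m.loss_of_control_risk


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the candidate maneuver with the lowest combined risk."""
    return min(options, key=risk_score)


options = [
    Maneuver("brake hard, stay in lane", 0.4, 6.0, 0.05),
    Maneuver("swerve right onto shoulder", 0.2, 4.0, 0.30),
    Maneuver("swerve left into oncoming lane", 0.5, 8.0, 0.40),
]
best = choose_maneuver(options)
print(best.name)  # prints "swerve right onto shoulder" for these made-up numbers
```

A real system would estimate these probabilities continuously from sensor data and re-plan many times per second; the point of the sketch is only that the “choice” is a numeric minimization, not a moral deliberation.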
Obviously, programmers aren’t designing self-driving cars to “avoid children” then “proceed directly into a brick wall at full speed.”
With human drivers, it has been proven time and time again that when we swerve, there is a potential to oversteer, to lose control – to avoid the initial obstacle, only to drive into a different one. An autonomous vehicle should – under most circumstances – be able to slow the car, avoid the collision without striking another nearby obstacle, and maintain some semblance of control.
With all of this being said, it is going to be a long road before the technology proves itself and mass adoption gets underway.