Will The Self-Driving Car Sometimes Kill For Safety Purposes?


Waymo’s self-driving car, built on a Chrysler Pacifica Hybrid prototype.

There are many questions plaguing the design and adoption of the self-driving car.

How can a vehicle make a life-or-death decision? Who is at fault if the “car” injures or kills someone? How will lawmakers and insurance companies handle these vehicles?

All current Tesla vehicles come equipped with full self-driving hardware, but functionality will require the owner to purchase the subsequent software “at some point.”

These are all excellent questions, and at this point, we don’t have any solid answers.

Several publications have explored this idea and concluded that self-driving cars will have to choose to “kill.” Wired recently proposed that a self-driving car may save a group of children crossing the street, resulting in the driver killing him/herself by driving directly into an ice cream shop. Hmm … let’s put it into perspective.

Most people would choose to save the lives of others, even if there was a chance their own life would be at stake. In the spur of the moment, if a group of people ran out into the road, any sane driver would attempt to avoid them, likely causing damage to the car and harm to the driver.

What would the self-driving car do? It will need to be programmed to deal with such decisions. When given the choice between perhaps killing a group of pedestrians or a single driver, the car may kill the driver. But it surely wouldn’t be “choosing” to ensure the driver’s imminent death.

Keep in mind that most auto accidents today are a product of human error. About 30,000 people die each year on U.S. roads alone, and over a million worldwide. The National Highway Traffic Safety Administration found in its investigation of last year’s Tesla Autopilot fatality that cars with such capability are actually saving lives.

An autonomous Nissan LEAF prototype goes out for a spin around London

Regulating bodies and insurance companies will spend countless hours deciding how to handle these cars of the future. No matter how many lives are saved and how many accidents are prevented, there will still be crashes. Once self-driving cars begin to show up on our public roads, the majority of vehicles will still be driven by people. Until every car on the road is connected and autonomous – which is far, far off – there will be many complications.

Maybe it’s not so cut and dried. Is a computer really choosing to “kill” one to save another? We think not. Humans aren’t making this decision either. Of course, the human driver, just like the robot driver, will likely swerve to avoid bowling over the helpless pedestrians … but neither can predict the future. It is not as if the driver thinks, “Well, I’m going to kill myself to avoid these people.” Instead, the driver or the autonomous vehicle will make a split-second decision to avoid, and hope for the best.

One might assume that with this new technology, the computer’s faster, more calculated, more controlled decision will lead to more desirable results. While there is no way for a human driver to know exactly how the car may react to the sudden veering tactic, the computer will use a mathematical and scientific set of information to choose the best move, and to control the aftermath. If the technology can assess all factors including speed, driving conditions, distance, other obstacles, etc., all within the blink of an eye, we can’t imagine that it wouldn’t choose a move that is the best course of action.
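For the curious, here is a toy sketch, in Python, of what that kind of cost-based evaluation could look like. Every maneuver name, number, and risk formula below is our own illustrative assumption, not any automaker’s actual planning code:

```python
# Toy sketch of cost-based evasive-maneuver selection. All names,
# numbers, and the risk model are illustrative assumptions.

MANEUVERS = ["brake_straight", "brake_and_swerve_left", "brake_and_swerve_right"]

def collision_risk(speed_mps, distance_m, friction=0.7):
    """Crude physics stand-in: risk rises as the deceleration needed
    to stop before the obstacle approaches what the tires can give."""
    required_decel = speed_mps ** 2 / (2 * max(distance_m, 0.1))
    available_decel = friction * 9.81
    return min(required_decel / available_decel, 1.0)

def choose_maneuver(speed_mps, obstacles):
    """Pick the maneuver whose path carries the lowest summed risk.
    A real planner models geometry; here each obstacle simply lists
    which maneuvers it threatens."""
    best, best_cost = None, float("inf")
    for m in MANEUVERS:
        cost = sum(collision_risk(speed_mps, o["distance_m"])
                   for o in obstacles if m in o["threatens"])
        if cost < best_cost:
            best, best_cost = m, cost
    return best

# Pedestrians 20 m ahead, a storefront 15 m along the rightward path:
obstacles = [
    {"distance_m": 20, "threatens": {"brake_straight"}},
    {"distance_m": 15, "threatens": {"brake_and_swerve_right"}},
]
print(choose_maneuver(13.4, obstacles))  # ~30 mph -> brake_and_swerve_left
```

Real systems weigh far more factors, but the shape of the computation – score each option, take the cheapest – is the point.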

Obviously, programmers aren’t designing self-driving cars to “avoid children” then “proceed directly into brick wall at full speed.”

This is surely not the case. In the case of the human driver, it has been proven time and time again that when we swerve, there is a potential to oversteer, to lose control, to avoid the initial target, but then drive into a different one. An autonomous vehicle should – under most circumstances – be able to slow the vehicle, avoid the collision, not drive into another nearby obstacle, and regain and maintain some semblance of control over the car.

With all of this being said, it is going to be a long road before the technology can prove this, and mass adoption is underway.

Source: Wired

33 responses to "Will The Self-Driving Car Sometimes Kill For Safety Purposes?"
  1. Pushmi-Pullyu says:

    “Several publications have… concluded that self-driving cars will have to choose to ‘kill.’ Wired recently proposed that a self-driving car may save a group of children crossing the street, resulting in the driver killing him/herself by driving directly into an ice cream shop.”

    The biggest, most fundamental flaw in all such questions is in assuming the self-driving car is “smart” enough, and cognizant enough, to recognize when human beings are present, and also smart enough to weigh the relative value of human lives versus the odds of being able to save some of them but not all of them by selecting from multiple different actions.

    Nope. Our robots are, at best, as smart as a not-particularly-intelligent bug. Computers and robots have no conception of the world as humans see it. Trying to program a robot, or computer software, to recognize a “human being” as a general class of things, and be able to spot them and count them with a high degree of precision, would be a hopelessly complex task for today’s robotic systems. And trying to implement that in self-driving cars would dangerously slow down the decision-making process, resulting in less safe autonomous driving. As with a military officer in a combat situation, it’s generally better to make a decision quickly, even if it’s the wrong one, than to be paralyzed by indecision.

    Self-driving cars are being programmed to avoid collisions with objects above a given size. I would guess the software implementation is rather similar to the collision avoidance routines from computer games. The idea that computer programs have an understanding of the real world sophisticated enough to weigh moral decisions, as described above, is a fantasy right out of a bad TV show featuring a computer or a robot with god-like omniscience.
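    To make the game analogy concrete, here is a toy version of that kind of collision check, in Python. Every number is hypothetical; note that nothing in it knows what a “human” is, only positions and sizes.

    ```python
    # Toy illustration of game-style collision avoidance: anything
    # above a size threshold on the projected path triggers braking.
    # All thresholds and numbers here are hypothetical.
    import math

    MIN_SIZE_M = 0.3  # ignore objects smaller than this (debris)

    def on_collision_course(car_pos, car_vel, obj_pos, obj_radius,
                            horizon_s=3.0, dt=0.1):
        """Sweep the car's current velocity forward in small steps and
        test for overlap, the kind of brute-force check a simple
        video game might use."""
        x, y = car_pos
        vx, vy = car_vel
        t = 0.0
        while t < horizon_s:
            x, y, t = x + vx * dt, y + vy * dt, t + dt
            if math.hypot(x - obj_pos[0], y - obj_pos[1]) < obj_radius:
                return True
        return False

    def react(car_pos, car_vel, detected_objects):
        for pos, radius in detected_objects:
            if radius >= MIN_SIZE_M and on_collision_course(car_pos, car_vel, pos, radius):
                return "brake"
        return "continue"

    # Car doing ~30 mph along x; a 0.5 m object 25 m ahead:
    print(react((0, 0), (13.4, 0), [((25, 0), 0.5)]))  # -> "brake"
    ```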

    A second flaw has been pointed out: People are going to buy cars that increase their own chances of survival in an accident. Even if it was possible to program self-driving software with the concept of (to quote “Star Trek II: The Wrath of Khan”) “the needs of the many outweigh the needs of the few or the one,” people simply would not choose to buy cars with such programming.

    * * * * *

    “The thing to keep in mind is that self-driving cars don’t have to be perfect to change the world. They just have to be better than human beings.” — Deepak Ahuja, CFO of Tesla Inc.

    1. Mr. M says:

      Trust me, you can train cameras to recognize humans. The quality depends mostly on processing power. A “cheap” (far less than 1,000 €) sensor can do this today.

      The harder part is detecting everything in the surroundings. What is a plastic wall? Where is free space? How is it possible to drive onto the sidewalk while hurting as few people as possible? What are the odds of people coming out of different doors?
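      For what it’s worth, off-the-shelf pedestrian detection has shipped with OpenCV for years. A minimal sketch using its stock HOG people detector follows; “street.jpg” is a hypothetical input, and whether such detection is fast and reliable enough for driving is exactly the open question.

      ```python
      # Minimal sketch: the stock HOG + linear-SVM people detector
      # bundled with OpenCV. "street.jpg" is a hypothetical image.
      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      image = cv2.imread("street.jpg")
      boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)

      print(f"{len(boxes)} people detected")
      for (x, y, w, h), score in zip(boxes, weights.ravel()):
          print(f"  person at ({x},{y}), size {w}x{h}, score {score:.2f}")
      ```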

  2. Mikael says:

    The big difference with self-driving cars is that a large safety margin will be included in the driving.

    That human drivers would have to make such a decision is extremely rare. That self-driving cars, with much larger safety margins and the ability to communicate with other vehicles, would have to make such a decision is close to non-existent.
    And if they did get into that position, the car would just avoid hitting the people (if identified) and then brake as hard as possible, most likely resulting in a full stop or a harmless low-speed impact.

    Or, if you are assuming that those kids are for some reason on the autobahn or some other high-speed location, then they should just get the Darwin Award that is coming their way – no loss for the world.

    1. Pushmi-Pullyu says:

      “…if they did get into that position, the car would just avoid hitting the people (if identified) and then brake as hard as possible, most likely resulting in a full stop or a harmless low-speed impact.”


      I can see that the car would be programmed to leave the road to avoid a collision as long as there isn’t any solid obstacle that it will run into. I did that once myself, driving onto the gravel shoulder of the road to avoid hitting an idiot who pulled out right in front of me from a parking lot. I can even see a self-driving car possibly running itself into a ditch to avoid a collision, maybe. But the idea that the autonomous car would “choose” to run into the front of a building to avoid colliding with another vehicle or pedestrian… I can’t imagine that would be an option that any software engineer would include.

      As you say, the “solution” would be to brake as hard as possible to minimize the impact of the collision.

      The important point here is that these hypothetical “ethical question” scenarios involve decisions at a much higher level than any software engineer will attempt to deal with, using today’s technology. It’s like asking a parent shopping for a ring-sorting puzzle for their infant to choose the one that would allow the baby to master calculus.

  3. William says:

    When a self-driving car, coming around a sharp/blind corner at speed, can identify the difference between a baby crawling across the road and a raccoon making the same fateful attempt, then humanity will have arrived at the self-driving solution that has the potential “to be better than human beings.”

    Until then, I will make a last-minute, unsafe attempt, risking my own life and limb, to steer clear of the infant only. When software that makes those correct steering choices is up to the task, I will be all aboard without hesitation.

    1. JIMJFOX says:

      When was the last time I saw “a baby crawling across the road”? Hmm… NEVER? Anyone else observed this familiar phenomenon?
      Might have happened… somewhere

  4. K L says:

    These “ethical questions” make a fatal assumption: that such a situation even needs to be considered!

    Roads where pedestrians might “jump out” at the driver are also places where the speed limit is going to be under 30 mph. At that speed, the driver will survive even if the car hits a brick wall to avoid those pedestrians. Add to that the fact that you can generally stop a car within 30 feet or less at these lower speeds, reducing the speed of any impact even further.
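    A quick back-of-the-envelope check of that figure, using the textbook constant-deceleration formula (the 0.7 dry-pavement friction coefficient is an assumed, typical value):

    ```python
    # Braking distance d = v^2 / (2 * mu * g); mu = 0.7 is assumed.
    MU, G = 0.7, 9.81
    MPS_PER_MPH = 0.44704

    for mph in (20, 25, 30):
        v = mph * MPS_PER_MPH
        d = v ** 2 / (2 * MU * G)
        print(f"{mph} mph: ~{d:.1f} m ({d / 0.3048:.0f} ft) of braking")
    # 20 mph: ~5.8 m (19 ft); 25 mph: ~9.1 m (30 ft); 30 mph: ~13.1 m (43 ft)
    ```

    So the 30-foot claim holds at about 25 mph for the braking phase alone – and unlike a human, a computer adds almost no reaction distance on top of that.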

    On highways, where pedestrians shouldn’t be walking in the first place, there is far more open space to practice collision avoidance.

    Autonomous cars will be notorious for driving slower than human drivers, because that will provide the greatest margin for error. The passengers won’t care as much about the longer travel time, as they would’ve already been provided an ETA to destination by the autonomous car.

    The autonomous cars only need to focus on accident avoidance, as these ethical quandaries are contrived.

    1. ModernMarvelFan says:

      Winding road with a cliff on one side.

      Coming around a blind curve, people are stopped ahead due to an accident and are standing on the roadway…

      Does the car try to stop from 30 mph and slightly injure the people on the roadway, or does it swerve toward the cliff side, hoping the guardrail will contain the car?

      1. K l says:

        Neither! The car will be driving 15 mph around a blind corner and stop within 5 feet when it sees the stopped cars.

        That’s my point about these questions being contrived.

      2. K l says:

        Just to rephrase it: the high-speed situations that beg these ethical questions are strictly human failures. The rule of the road is to NOT drive faster than is safe for the road conditions. If you can’t stop in time, then you’re driving too fast.

        1. Pushmi-Pullyu says:

          I don’t see people putting up with a car poking along at 15 MPH on every road with frequent hills or blind corners. There seems to be an underlying assumption here that no death or severe injury caused by a self-driving car will ever be acceptable. That’s not the right way to look at it.

          The right way to look at it is that deaths and severe injuries due to auto accidents are inevitable, but that self-driving cars should have a lower overall accident rate. Repeat: a lower overall rate. We cannot expect that autonomous cars will have a lower accident rate under every conceivable set of circumstances, because autonomous cars don’t perceive the world the same way humans do, and self-driving software does not make decisions the same way humans do in all circumstances.

          That autonomous cars will, in some rare cases, cause deaths where a human driver possibly (or even probably) would not is going to be very difficult for many or most people to accept. But it’s the very same argument as for wearing seat belts: you have to accept that there may be rare cases where the seat belt causes a death that wouldn’t have happened without it. Nonetheless, you’re still far safer wearing the seat belt than not wearing it.

          Seat belts save lives. And autonomous driving will, too. That’s the reason we shouldn’t let “The perfect drive out the good” by demanding that autonomous cars must be safer than human drivers under every possible circumstance.

          1. K L says:

            They should, because those are the rules that humans are supposed to follow, but don’t. That’s why we have accidents.

            If someone has mechanical trouble ahead on a winding road, driving the recommended speed permits you to see them in time to stop and not have to worry about hitting them or driving off the ledge.

            1. Pushmi-Pullyu says:

              Clearly you have never driven over the Colorado Rocky Mountains. If you had, you would know better than to claim all highways have speed limits posted that ensure driving at a speed low enough for safe reaction time when you come around blind curves.

              Similarly, you have clearly never driven on back roads here in the Kansas City area. There are many places where the roads go over rolling hills. Some places even have warning signs reading “HILL BLOCKS VIEW”… idiot instructions for those who apparently don’t realize that it’s important to stay to the right as they crest a hill. Nobody, and I do mean nobody, creeps along at the 15 MPH speed which actually would be safe. Nor is the city foolish enough to try to impose a 15 MPH speed limit. Generally speaking, speed limits are 25 MPH at a minimum. That means that even driving the speed limit, two cars approaching from opposite directions would close on each other at 50 MPH. That is far too fast for proper reaction time should one car be in the other’s lane when cresting a hill. And of course, it’s routine for American drivers to drive about 5 MPH over the limit, so realistically it’s going to be more like 60 MPH in a large percentage of cases.

          2. J. Quincy says:

            “The right way to look at it is that deaths and severe injuries due to auto accidents are inevitable, but that self-driving cars should have a lower overall accident rate.”

            No. They are not inevitable nor should they be. That is the kind of thinking that has led to over 40,000 people being killed in the U.S. last year.

          3. J. Quincy says:

            “There seems to be an underlying assumption here that no death or severe injury caused by a self-driving car will ever be acceptable. That’s not the right way to look at it.”

            What death is acceptable?

        2. As I See it: These are Drivers Ed High School Class Questions, mixed with Hypothetical Math Class Questions, blended with 1 in 100,000 odds of Worst Nightmare Possibilities!

          So, going back to such a time: “At what rate is a Tesla Semi or a Model S/X/3 travelling, in the rain, on an oil- and water-slicked, black-ice-covered road? What is its stopping distance in such conditions? Can it see that far, and continue to see farther than that, for twice as many feet or seconds as it takes to stop completely?

          If the answers to the last question are yes, continue. If the answer is no, slow down now!”

          Not that hard, as computers are not impatient, unlike people! As Data said, “I don’t understand, my timing is digital!”

          Basically, people constantly have no idea how much traction they have access to, no matter whether the roads are dry or wet, cold or hot, level or tilted, smooth or rough, banked correctly for a turn or not. These things can be researched for each autonomous vehicle, which can be programmed to identify and test traction at frequent intervals, for braking, acceleration, or cornering, and adapt faster than we can!

          If Autonomous Vehicles can communicate with each other, such data can be shared for the benefit of all.

          (PS. At what speed is a vehicle travelling at 88 feet per second? And what is its stopping distance on a cool, dry road?)
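          Taking the PS literally: 88 feet per second works out to exactly 60 mph, and the see-twice-your-stopping-distance rule above is easy to sketch in Python. The 0.7 dry-pavement friction value below is an assumption; a real car would measure available traction, as this comment suggests.

          ```python
          # Sketch of the rule above: never go faster than lets you stop
          # within half the distance you can see. mu = 0.7 is assumed.
          import math

          G, MU = 9.81, 0.7
          MPS_PER_MPH = 0.44704

          def max_safe_speed_mph(sight_distance_m):
              """Largest speed whose braking distance v^2/(2*mu*g)
              fits within half the visible distance ahead."""
              v = math.sqrt(2 * MU * G * sight_distance_m / 2)
              return v / MPS_PER_MPH

          print(f"See 100 m ahead -> hold under {max_safe_speed_mph(100):.0f} mph")

          # PS answer: 88 ft/s * 3600 / 5280 = 60 mph exactly.
          v = 88 * 0.3048                  # 26.8 m/s
          braking = v ** 2 / (2 * MU * G)  # ~52 m (~172 ft), braking alone
          print(f"88 ft/s = 60 mph; dry-road braking distance ~{braking:.0f} m")
          ```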

          1. Pushmi-Pullyu says:

            I absolutely agree that autonomous cars will be programmed to take weather and other driving conditions into account when regulating speed, much more than human drivers do. Every year, during the first snowstorm here in Kansas City, we see many drivers on the road continuing to drive as though the roads are clear and dry. Autonomous cars will of course be programmed to slow down and increase following distance under those conditions, which is what the human drivers should be doing.

            What I don’t agree with is the idea that autonomous vehicles won’t ever venture into a situation where they don’t have sufficient reaction time should another vehicle violate the rules of the road. For example, if you’re driving down any road with two-way traffic and another vehicle were to suddenly swerve into your path, would you be able to react in time to avoid an accident? In many cases, no.

            And although the computer may in theory be able to react instantly, the car it’s controlling is limited by inertia and by the ability of four rubber balloons to grip the road, each having only about the same surface area in contact with the road as the palm of your hand. This does present certain limitations on the ability of autonomous cars to avoid an accident. But it’s absurd to suggest that no autonomous car should ever venture out onto a road with two-way traffic and a speed limit above, say, 15 MPH. Limiting autonomous vehicles to driving only in perfectly safe conditions would make them unusable by the average person. There’s no point in equipping a car with an autonomous driving system so “timid” that humans will simply shut it off out of frustration and drive themselves.

        3. Steven says:

          ^ That.

          And that simplifies the programming. Do not allow the driver’s assistance device to engage in an environment where an obstacle may impinge upon minimal stopping distance for the given speed.

          I wouldn’t expect the driver’s assistance device to engage in Mid-Town Manhattan where pedestrians and bicycle riders have unpredictable travel patterns.

      3. JIMJFOX says:

        Braking power / stopping distances:

        Speed  | Thinking distance | Total stopping distance
        30 mph | 30 feet (9.1 m)   | 75 feet (23 m)

        For a self-driving car, delete the ‘thinking time’. Actual stopping distance: 23 m - 9.1 m = 13.9 m.

  5. Warren says:

    If the autonomous cars communicate with each other, they could travel slower, and still get you to your destination faster than we do now. I am sure drivers competing with each other to be at the front of the pack actually slow down the overall average speed.

    1. ModernMarvelFan says:

      What about the transition period before all the cars are self driving?

      1. Pushmi-Pullyu says:

        Then the human-driven cars will still be competing while the autonomous cars will be cooperating.

        We can hope at least some of the human drivers will notice the advantages of cooperative driving, and start imitating the driving patterns of autonomous cars. Of course, humans being the obstinate irrational beings we are, there will be some who will stubbornly cling to their self-defeating behaviors.

    2. Pushmi-Pullyu says:

      “I am sure drivers competing with each other to be at the front of the pack actually slow down the overall average speed.”

      That’s certainly true when traffic gets dense. Autonomous cars will be programmed to cooperate rather than compete, for much better traffic flow in dense traffic.

      The tradeoff will be, assuming “K L” is correct, slower travel when traffic is light. But maybe not. With faster “reflexes”, perhaps self-driving cars will actually travel faster. Speed limits may actually be increased for autonomous vehicles, if the accident rate drops drastically.

  6. W Leavitt says:

    This is a scenario that will never happen, as self-driving cars should never go faster than conditions allow. The distance needed to make a safe stop will always be available, or the car should not pass its driving test.

    1. Pushmi-Pullyu says:

      Autonomous cars will always slow to a crawl when approaching a blind corner? They’ll creep along at 15 MPH or less when cresting every hill? They’ll proceed at a timid 25 MPH or less going up or down any mountain, where the road constantly curves around bends so you can’t see ahead?

      No, no, and no. People are simply not going to put up with cars driving so slowly. We accept a certain amount of risk every time we use a car to travel somewhere, and that’s not going to suddenly change just because we start using robot drivers instead of human drivers.

      1. In such cases as those, just switch off the self-driving mode and carry on! Problem solved! Simple!

        Besides, many drivers would prefer to be driving themselves on such roads, even if they do drive over a cliff! (As one Tesla driver apparently did! He may have had a heart attack or other issue first, however, but I guess having his vehicle continuing along at some arbitrary slow speed is simply unacceptable to some impatient humans, thinking only about their needs!)

        1. Pushmi-Pullyu says:

          And that is exactly why autonomous cars will not be programmed to drive in such a timid fashion. Consumers won’t put up with it. As you say, many or most would simply shut off the self-driving feature and drive themselves if the car demonstrated such frustrating timidity to venturing out onto the roads.

          This situation is already occurring in every Tesla car with AutoSteer activated that’s driving on a two-lane road with a speed limit above… I dunno, let’s say 15 MPH. If a car were to suddenly swerve over from the other lane, would the self-driving car be able to react in time? Would it have room to swerve out of the way, even limited as it is by inertia in swerving? In many cases, no. On roads with two-way traffic, lanes moving in opposite directions are usually separated only by a double yellow line. Consider just how close that puts your car to one traveling in the opposite direction. We accept such danger because we’re used to it. And autonomous cars will have to be – in fact, already are – programmed to accept such danger and continue driving at the speed limit (or even a bit higher, depending on how the driver sets the speed).

      2. Nick says:

        I’ll happily “put up” with a self-driving car crawling along.

        Who cares how long it takes if the car is doing the driving?

        I want it to slow down enough so I can sleep without being disturbed by hard maneuvers.

        Sounds like an amazing future to me. 😀

  7. Someone out there says:

    I think the question is badly formed. The car isn’t going to “choose to kill” but instead fail to avoid killing.

  8. Kdawg says:

    Some self-driving cars like to kill humans 😀

    1. RC368@gmail.com says:

      There’s always good and evil. Which side are you on?

  9. Priusmaniac says:

    The prime directive of the car must always be to protect the driver. The second directive must be to avoid killing pedestrians. With that set of directives, the situation of a car killing the driver to avoid killing pedestrians can never occur. It is a very simplistic solution, but it nevertheless remains the most coherent. It is also the one that will not put a driver into a situation where his car suddenly changes lanes into a frontal collision with another vehicle to avoid a pedestrian suddenly crossing the street – because if the driver kills the other car’s driver, he will moreover be judged responsible for the killing. If he kills the crossing pedestrian, the responsibility will be shared, or even absent entirely if the crossing happened in a way that is recognized as completely unforeseeable. For self-driving software writers, this is also the only way to be safe from legal pursuit: no choice is being made, it is always “protect the driver,” so no argument can later be held against them. Of course, that is the extreme scenario of an unavoidable collision, which faster braking reactions should prevent as much as possible.
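    A minimal sketch, in Python, of the strict priority ordering described above, with hypothetical maneuver names; whether real systems should or will be built this way is exactly the debate.

    ```python
    # Sketch of a strict two-directive priority: (1) never endanger
    # the occupant, (2) among occupant-safe options, minimize
    # pedestrians at risk. All maneuver data here is hypothetical.

    def pick(maneuvers):
        safe = [m for m in maneuvers if not m["endangers_occupant"]]
        if not safe:
            # Fallback: braking in-lane is always assumed available.
            return {"name": "full_brake"}
        return min(safe, key=lambda m: m["pedestrians_at_risk"])

    options = [
        {"name": "swerve_into_oncoming", "endangers_occupant": True,  "pedestrians_at_risk": 0},
        {"name": "full_brake",           "endangers_occupant": False, "pedestrians_at_risk": 1},
        {"name": "swerve_to_shoulder",   "endangers_occupant": False, "pedestrians_at_risk": 0},
    ]
    print(pick(options)["name"])  # -> swerve_to_shoulder
    ```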

    There should also be further improvement in cars for those cases where the inertia doesn’t allow a stop. I am particularly interested in having full car airbags coming out of the front bumper in case of an unavoidable collision. A bumper should be able to provide enough place to store a cylindrical air bag with the radius of the car and a 15 m length. It would be a huge thing but nevertheless it could effectively safe from a head on collision with a three, a wall, a car or safe a pedestrian’s life.

  10. M Hovis says:

    I think it should be a future argument, and here is why.

    Tesla’s data already suggest that its Level 2 driver-assist tools have cut accidents in half. If fully autonomous driving reduces 30,000 deaths a year to 3,000, or 300, then it is a win.

    Are we not doing this now? We know the use of fossil fuels kills a certain number of people, but as a society we have decided to accept that number for the sake of progress, or for lack of a better answer. We did remove lead from gasoline, replacing it with ethanol because it is a carcinogen, but we still accept those dying from cancer, stroke, etc., brought on by the particulates produced in burning fossil fuels.

    So why would this scenario be any different? It deals specifically with improving transportation, and this time it saves many, many more lives. You can study the Kobayashi Maru forever; meanwhile, the ball moves forward.