Inevitably and unfortunately, when big innovations bring humans and machinery together, there will sometimes be fatalities.
This week saw the first known fatality involving a technology that will affect millions of us within a few years: automated driving.
Yesterday, the driver of a Tesla Model S (a car known for having one of the best safety ratings in the world) was killed while the car was operating in its Autopilot mode.
According to a blog post by Tesla:
“The vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”
Because the trailer rode so high, the Model S passed underneath it, and its windshield struck the trailer, killing the driver. According to the manufacturer, if the impact had been at the front or rear of the trailer, the car's safety systems would likely have saved the driver.
What I want to highlight is that we often talk about innovations only in the positive sense: providing value, embracing creativity and pushing technology forward. But the bigger the leap and the larger the impact, the more we also need to think about safety and the other negative consequences that can come with rapid change.
And I think that cars which drive themselves, and the Artificial Intelligence systems behind them, are going to become a very challenging topic of discussion and regulation in the coming years.
Should a car decide who should live or die?
At the moment, the system available to some Tesla drivers is officially still in “beta” (meaning it is not finished), and a driver needs to confirm that they remain ultimately responsible for the actions of the car before it will turn on. This means the driver can’t switch on the system and take a nap.
But most of the large automakers and even technology companies like Google are all busily developing their own self-driving technology.
And in almost all cases, the reason to invest so much in this technology is to improve the comfort, convenience and safety of the driver. Artificial intelligence systems never get tired, become angry, get distracted by a phone call or suffer a heart attack, all human failings which result in thousands of road deaths every year.
The moral question doesn’t relate to that, though. It comes into play when the artificial intelligence is sophisticated enough to recognise that an accident is imminent and that it will affect some combination of the driver and passengers, other drivers, and pedestrians. Going beyond cars, these rules are likely to apply to all “social robots”.
Ask yourself the following question: should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?
Many car makers are already taking these philosophical questions very seriously, working with ethicists to come up with solutions. The matter could be complicated further by different laws in different countries though.
Daniel Hirsch, an automotive expert at PA Consulting, asks:
“A child runs on the street and the car has only two options — killing the child or killing the old, cancer-suffering driver.”
The “correct” response to this situation in one country or culture might be different in another. It might even be illegal — both German and Swiss law say human lives cannot be weighed against one another.
And what about the position of big business, such as insurers? “There’s a significant number of these cases in which the insurance company would decide differently — for instance, to them a handicapped child is more expensive than a handicapped elderly person due to remaining lifespan,” says Mr Hirsch.
The difference is also going to be largely influenced by public opinion. A number of academics have been asking the public for their thoughts and have just published their findings in the journal Science. In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plough into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way, because it minimised the number of deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.
However, when people were asked whether they would buy a car controlled by such a moral algorithm, they were not as enthusiastic. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.
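To make the idea of a “moral algorithm” a little more concrete, here is a deliberately simplified, hypothetical sketch in Python of the two rules the survey contrasts: a utilitarian rule that minimises total expected deaths, and a self-protective rule that only minimises risk to the occupants. Every name and number in it is invented purely for illustration; real autonomous-driving systems are vastly more complex, and as the research shows, there is no agreement yet on which rule, if either, they should follow.

```python
# Hypothetical illustration only: two crude decision rules for an imminent accident.
# All names and numbers are invented for this example.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_occupant_deaths: float    # estimated risk to people inside the car
    expected_pedestrian_deaths: float  # estimated risk to people outside the car

def utilitarian_choice(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the manoeuvre with the lowest total expected deaths,
    weighing everyone equally (the rule most survey respondents endorsed)."""
    return min(options, key=lambda m: m.expected_occupant_deaths
                                      + m.expected_pedestrian_deaths)

def self_protective_choice(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the manoeuvre that is safest for the occupants only
    (the kind of car people said they would actually buy)."""
    return min(options, key=lambda m: m.expected_occupant_deaths)

options = [
    Manoeuvre("stay in lane", expected_occupant_deaths=0.0, expected_pedestrian_deaths=0.9),
    Manoeuvre("swerve off road", expected_occupant_deaths=0.5, expected_pedestrian_deaths=0.0),
]

print(utilitarian_choice(options).name)      # "swerve off road"
print(self_protective_choice(options).name)  # "stay in lane"
```

Even this toy example shows why the survey finding is so awkward for car makers: in exactly the scenario that matters, the rule people endorse in the abstract and the rule they say they would buy choose opposite actions.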
The question it all comes back to is: who would ultimately be responsible for any deaths caused by these cars? Would it be the driver who decided to buy the car? The car manufacturer? The technology company which holds the patents on the artificial intelligence software underpinning these decisions? Or even the victim? Figuring it out will be a hugely complicated regulatory process.
For now, one of the simplest proposed solutions is unfortunately also one of the most gruesome. Chris Gerdes, who runs Stanford University’s Center for Automotive Research and studies questions like this, thinks that many car manufacturers will find ways for an autonomous car to quickly hand control back to a human driver, even if that happens at the very moment the car faces a moral choice it cannot make, and the driver is not ready to take control back.
Whatever we think of this technology, it is going to begin affecting more and more cities in the coming years. And whether you are in a self-driving car or on the pavement near one, we need to be having these discussions now.
Do you like insights into innovation like this?
Then sign up for your FREE account from Idea to Value to not only get great pieces of insight like this every week, but also free training on improving your creativity and company innovation capabilities from some of the world’s leading innovation experts.
What is your view on self-driving cars, and should they be able to make decisions? Let us know in the comments, and don’t forget to share and follow us on Facebook and Twitter.
Nick Skillicorn
I think the discussion should focus on the fact that, as with so many problems in life, the rights of one party conflict with the rights of another!
First, it is not the car that decides, but the people who invested in and developed it.
Second, the balancing act, as I see it, is this: the people inside the car have all of its safety apparatus to protect them. The people outside the car have NONE! So, who has a better chance of surviving an accident?
For me, the “car” SHOULD choose to protect the people outside the automobile, even if this action puts the passengers at risk!