Every time we get behind the wheel of a car, we put our lives and the lives of others at risk. Self-driving cars are designed to reduce those risks by letting tech control our vehicles.
Accident rates for autonomous cars have been much lower than for human drivers. Google’s self-driving cars have been involved in only 13 collisions over 1.8 million miles of driving.
See Big Think: Google’s Self-Driving Cars Are Ridiculously Safe
As human drivers, we can make moral choices to avoid accidents and reduce casualties and injuries. To avoid hitting a child who runs into the street, we may swerve the car, perhaps injuring ourselves and our passengers.
But what moral choices would self-driving cars make?
Researchers at the Massachusetts Institute of Technology (MIT) have developed the Moral Machine to help us explore the choices we think self-driving cars should make. MIT says that Moral Machine is “a platform for gathering a human perspective on moral decisions made by machine intelligence such as self-driving cars.”
You can use Moral Machine to be the judge and help self-driving cars decide what to do in different driving scenarios, then see how your choices compare to those others have made.
Your choices can help MIT researchers analyze the decisions self-driving cars should make. You can also choose to participate without having your choices included in their analysis.
How Moral Machine Works
This video introduces Moral Machine:
The Moral Machine website lets you participate, with choices of:
- Start Judging
- See Scenarios
- Create Your Own Scenarios
With Start Judging, you are shown side-by-side traffic scenarios and select the outcome you would choose. In each scenario, the brakes of the self-driving car have failed, forcing it either to continue straight or to swerve into a different lane. In some cases, passengers are in the vehicle; in others, the car is empty.
By clicking on the Show Descriptions buttons under the images, you are shown text descriptions of what is going on in the scenario, including what has gone wrong, information about the people involved, which laws (if any) are being broken, and the consequences of that choice.
Descriptions give information about the people and animals in the scenario, such as female doctor, elderly man, pregnant woman, homeless man, cat or dog. You are shown the possible outcomes, such as the death or injury of a person or pet.
For example, in the scenario below the vehicle is experiencing brake failure and the car must decide whether to drive straight and kill pedestrians, including a female executive, dog, female athlete, homeless person and elderly man who are crossing against the light; or swerve and kill pedestrians including a female executive and a dog who are crossing with the light. No passengers are in the self-driving car in this scenario.
Click on the choice you would make in that situation. Your choice will be highlighted and you will proceed to the next scenario.
At the end of the session, you’re shown your results, based on the choices you made. You’re shown which character you were most likely to save and which character you were most likely to kill. You’re also shown how much each factor matters to you, such as whether a pedestrian was obeying the law.
After you have finished, you are offered the chance to play again. Each time you are shown different scenarios.
You can also create your own scenarios, choosing people and pets, whether they are passengers or pedestrians, whether pedestrians are breaking the law and the outcome (death vs. injury). Your scenarios may be added to the MIT research database.
You can browse scenarios others have created. These scenarios don’t let you register a choice, but a discussion section appears below each one, where you can add your thoughts and read what others have said.
Moral Machine – Choices for Our Future
Moral Machine is a fascinating platform for opening discussion about the moral implications of self-driving cars. Tech can go wrong and people can go wrong. In some scenarios, people are endangering themselves and others by disobeying the law and crossing against a red light. Should that behavior factor into the self-driving car’s decision about where to steer? What other moral considerations should be programmed into self-driving cars?
In designing self-driving cars, programmers may rely on mathematical calculations to minimize fatalities, programming vehicles to choose the action that injures or kills the fewest people. A car might, for example, avoid striking pedestrians at all costs, instead putting its own passengers at risk because they are protected by seat belts, air bags and the car itself.
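As a rough illustration only, and not any manufacturer's actual logic, a casualty-minimizing rule can be sketched in a few lines of Python. The Outcome class and choose_action function below are hypothetical names, and the casualty counts are taken from the example scenario described earlier in this post.

```python
# A hypothetical sketch of a casualty-minimizing rule, not any real vehicle's software.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str        # e.g. "drive straight" or "swerve"
    pedestrian_deaths: int  # expected pedestrian fatalities for this choice
    passenger_deaths: int   # expected passenger fatalities for this choice

def total_deaths(outcome: Outcome) -> int:
    """Expected fatalities, counting pedestrians and passengers equally."""
    return outcome.pedestrian_deaths + outcome.passenger_deaths

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the option with the fewest expected fatalities."""
    return min(outcomes, key=total_deaths)

# The scenario from the post: brakes have failed, the car is empty,
# and both paths have pedestrians in the crosswalk.
options = [
    Outcome("drive straight", pedestrian_deaths=5, passenger_deaths=0),
    Outcome("swerve", pedestrian_deaths=2, passenger_deaths=0),
]
print(choose_action(options).description)  # -> "swerve"
```

A rule like this counts only lives. It says nothing about whether the pedestrians were crossing legally or who they are, which is exactly the kind of factor Moral Machine asks us to weigh.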
But humans may not make the same decision. We may not want to risk the lives of our passengers, even at the cost of injuring pedestrians. Would we be willing to turn over those moral decisions to self-driving cars?
Moral Machine guides us to think about these issues, perhaps even too much, since extraneous information is included in the scenarios. We are asked to judge whether to collide with a pedestrian who is a homeless person versus one who is an athlete. In real life we wouldn’t know the social status of a pedestrian, so that information wouldn’t factor into our decision. And perhaps it shouldn’t?
But Moral Machine can start a dialogue about the implications of self-driving cars and whether we are willing to turn moral decisions over to tech, even if lives are saved overall.
Your Thoughts
Have you ever thought about the moral implications of self-driving cars? Have you tried Moral Machine? Were you surprised at your results?
Share your thoughts in the Comments section below!
______________
* Autonomous Car image (edited) by Norbert Aepli, Switzerland, via Wikimedia and Creative Commons
** Computer Driver image (edited) courtesy of Jean Browman via Flickr and Creative Commons
Harleena Singh says
Hi Carolyn,
Happy Monday & Happy October as well 🙂
Interesting read indeed! Honestly speaking, I did read about the self-driven cars, though they have yet to start in our country, but I had no idea about the moral implications of self-driven cars, nor had I ever heard of Moral Machine, let alone tried it! I guess they do have their drawbacks along with their positives, just as everything else has – we just need to make our choices, isn’t it?
Thanks for sharing. Have a nice week ahead 🙂
Carolyn Nicander Mohr says
Hi Harleena, Yes, I have read much about self-driving cars being the future of transportation, with some predictions saying that the majority of people will be riding in self-driving cars by 2025.
While evidence shows that self-driving cars have fewer collisions, we do need to stop and think about the consequences of allowing machines to do our driving for us.
You’re right, we do need to decide whether we want to be safer as a whole by using self-driving cars or whether we want to retain control over our driving. The future will be very interesting!
Jen says
This is a very interesting read. Although in real life, we will never know the social standing of a pedestrian. Accidents, when they happen, happen in a split second, and our actions are limited by our reflexes and intuition. I think the machine may make the more logical decision, saving the greatest number of lives regardless of the social standing of the people involved. Because in reality, a human will not be able to react or think fast enough to make a decision that will save the most lives, let alone think about the social standing of the people or do a quick profile scan of those around them.
Carolyn Nicander Mohr says
Hi Jen, Exactly! We wouldn’t know the social status of a pedestrian, and maybe that shouldn’t even be a factor. (That issue is probably discussed at length in university philosophy classes.) We may not even be able to see whether a woman is pregnant, if it’s early in her pregnancy.
But if the car were moving slowly when the brakes failed, or we had a significant distance to travel, we might have time to choose where to steer, enabling us to make a decision that may involve a moral choice.
We might also try other methods to stop or slow the car, such as downshifting or using the parking brake, and we could sound the horn to warn pedestrians to get out of the way.
The Moral Machine helps us see that tech isn’t just programming to make devices operate; it also involves important choices that can affect lives. We need to consider the future implications of tech before we adopt it fully.
David says
Of course the driver of the self-driving car would not make the same choices as me. No more so than the driver of an airplane I’m in, a train I’m in, a bus I’m in, or a taxi or limo I’m in.
When you hire a driver in whatever form, you give up both the work of driving and the responsibility for making driving decisions. This is not even a discussion we need to be having.
Carolyn Nicander Mohr says
Hi David, Good point, we do hand over our decisions to a bus driver, airline pilot, taxi driver or train engineer every time we take those forms of transportation.
But the point of self-driving cars is that a human is not making real-time decisions about what to do when things go wrong. These decisions are programmed into the software that operates the vehicle. Some of these self-driving cars do not even have steering wheels or brake pedals, meaning that a human cannot make these decisions as events happen.
It’s best to consider these decisions carefully and thoughtfully before self-driving vehicles are on the road, so they can be programmed with full consideration of the consequences.
I respectfully disagree with you about the need for discussion. Tech is advancing rapidly, faster than society is able to process the ramifications of these advancements. We need discussion to help us digest how these changes will affect our daily lives. When we have a chance to give input, such as with Moral Machine or in discussions online, we should embrace the opportunity to express our opinions.
David says
My comment about the discussions refers to the discussions that I have seen in the media: people all want self-driving cars for others because they will make the roads safer, but they don’t want to buy cars that might not make the same decisions they would make (saving the greatest number of lives rather than saving their own passengers).
Surveys have shown that most drivers believe they are above average, which is mathematically impossible, of course. When faced with a sudden event on the road, most drivers will never make an informed decision; they will just react.
We give up control in every other vehicle to a driver only slightly more qualified to make sudden, life-and-death decisions than we are. The drivers have had training, but they still react at human speed. And to the extent that they can make super-fast calculations, they will act to save the maximum number of lives.
So why would anybody worry about handing over control to a machine that would make the same decision as the airplane or bus driver, especially if the machine driver can actually make those super-fast calculations?
Carolyn Nicander Mohr says
Good point, David. Many of us do think that we drive better than average, but one day on the roads will convince us that many people need remedial driving lessons.
But the point of Moral Machine isn’t about the level of the vehicle’s driving skills, but the moral decisions that must be made when things go wrong. In the scenarios, the brakes on the vehicle have failed and people are walking across the road in the path of the car. In some cases the people are crossing against the light. In others they are crossing with the light.
There are many scenarios in Moral Machine, but would a self-driving car make the same moral choices we would? In these scenarios we might have time to react as drivers; no information is given about how far the vehicle travels before it reaches the crosswalk. And perhaps we would use the parking brake or downshift to avoid an accident.
Moral Machine truly is a fascinating exercise. If you haven’t tried it yet, I suggest you give it a go.
Jamie Hickey says
I enjoyed your article. I tried the Moral Machine and did not like making a choice as to who I was going to kill. I agree with some of the other comments — when behind the wheel you react as quickly as possible to cause the least damage/fatalities, and who would know if you were hitting a male or a female, let alone a professional vs. a homeless person. You react and hope for the best. I don’t see myself giving up total control of my car any time soon.
Carolyn Nicander Mohr says
Hi Jamie, Excellent point. Moral Machine really does make us realize the potential consequences of our driving. The process is uncomfortable, as we think about these moral choices.
But machines don’t feel discomfort; they execute the commands they’re programmed to perform. With self-driving cars, programmers make those decisions ahead of time. Moral Machine allows us to have input into the decisions self-driving cars will be programmed to make.