How Social Cues Impact Human Decision-Making in Emergencies

Man at the wheel of a car. Photo by why kei on Unsplash.

A study showed that when participants in a simulated autonomous vehicle crash were told that others had chosen to crash into a wall to save pedestrians, their own willingness to do so rose by two-thirds.

As autonomous vehicles become more commonplace and the need to program them for safety emerges, a better understanding of how humans react in such situations is needed. Study author Jonathan Gratch, principal investigator for the project and a computer scientist at the USC Institute for Creative Technologies, said that current models assume humans in life-or-death situations think differently than they actually do. There are no moral absolutes; rather, “it is more nuanced.”

Seeking to understand how humans make decisions in life-or-death situations, and how to apply those insights to the programming of autonomous vehicles and robots, the researchers presented participants with a modified ‘trolley problem’.

The trolley problem is a classic hypothetical scenario psychologists use to investigate human decision-making. Essentially, it involves the decision to divert a tram to hit one person or to leave it on its track and hit five, and it has a number of variations. In one medical variation of the trolley problem, one person could be killed and their organs harvested to save five terminally ill patients — a choice that is overwhelmingly rejected.  

In three of the four simulations presented to them, participants had to choose whether to tell their autonomous vehicle to hit a wall, risking harm to themselves, or to hit five pedestrians. The higher the likelihood of injury to the pedestrians, the more likely participants were to choose hitting the wall and risking harm to themselves. The authors showed that, in doing so, people weigh the risk of injury to themselves against the potential for injury to others.
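The study reports behavioral results rather than an algorithm, but the balancing idea can be sketched as a simple expected-harm comparison. The snippet below is purely illustrative, not the authors' model; the function name, the self-preservation weight, and the probabilities are all hypothetical.

```python
# Illustrative sketch only (not the study's actual model): pick the action
# with the lower expected harm, weighing injury to self against injury to
# pedestrians. All names and numbers here are hypothetical.

def choose_action(p_self_injury: float, p_pedestrian_injury: float,
                  n_pedestrians: int = 5, self_weight: float = 1.0) -> str:
    """Return 'hit_wall' or 'hit_pedestrians' by comparing expected harm.

    self_weight > 1 models a bias toward self-preservation; the study's
    social-cue effect could be read as peers effectively lowering it.
    """
    expected_self_harm = self_weight * p_self_injury            # harm if we hit the wall
    expected_other_harm = n_pedestrians * p_pedestrian_injury   # harm if we do not
    return "hit_wall" if expected_self_harm < expected_other_harm else "hit_pedestrians"

# As pedestrian injury risk rises, the rule flips toward hitting the wall,
# mirroring the pattern reported in the study.
print(choose_action(p_self_injury=0.5, p_pedestrian_injury=0.05))  # hit_pedestrians
print(choose_action(p_self_injury=0.5, p_pedestrian_injury=0.8))   # hit_wall
```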

In the fourth scenario, a social element was added: participants were told that their peers had chosen to save the pedestrians. In this case, the proportion of participants electing to save the pedestrians went up from 30% to 50%.

However, Gratch noted that the effect works in reverse as well: “Technically there are two forces at work. When people realize their peers don’t care, this pulls people down to selfishness. When they realize they care, this pulls them up.”

The researchers showed that using the trolley problem as a basis for programming decisions is insufficient, as it fails to capture the complexity of human decision-making. They also concluded that transparency in how autonomous machines are programmed is important for the public, as is allowing human operators to assume control in an emergency.

Source: News-Medical.Net

Journal information: de Melo, C. M., et al. (2021). Risk of Injury in Moral Dilemmas With Autonomous Vehicles. Frontiers in Robotics and AI. https://doi.org/10.3389/frobt.2020.572529