
Imagine you're trying to make a left turn onto a busy road. Car after car rolls past, keeping you trapped as your frustration rises. Finally, a generous driver decelerates enough to create a gap. A check of the traffic from the opposite direction, a quick bit of acceleration, and you've successfully merged into traffic.

This same scene plays out across the world countless times a day. And it's a situation where inferring both the physics and the motives of your fellow drivers is difficult, as evidenced by the fact that the United States sees 1.4 million accidents each year from drivers in the process of turning. Now imagine throwing autonomous vehicles into the mix. These are typically limited to evaluating only the physics, and they make conservative decisions in situations where information is ambiguous.

Now, a group of computer scientists has figured out how to improve autonomous vehicle (AV) performance in these circumstances. The scientists have essentially given their AVs a limited theory of mind, allowing the vehicles to better interpret what the behaviors of nearby human drivers are telling them.

Mind the theory

Theory of mind comes so easily to us that it's difficult to recognize how rare it is outside of our species. We're easily able to recognize that our fellow humans have minds like our own, and we use that recognition to infer things like the state of their knowledge and their likely motivations. These inferences are essential to most of our social activities, driving included. While a friendly wave can make for an unambiguous signal that your fellow driver is offering you space in their lane, we can often make inferences based simply on the behavior of their car.

And, critically, autonomous vehicles aren't especially good at this. In many cases, their own behavior doesn't send signals back to other drivers. A study of accidents involving AVs in California indicated that over half of them involved the AV being rear-ended because a human driver couldn't figure out what in the world it was doing. (Volvo, among others, is working to change that.)

It's unrealistic to think that we'll give AVs a full-blown theory of mind any time soon. AIs are simply not that advanced, and it would be excessive for cars, which only have to deal with a limited range of human behaviors. But a group of researchers at MIT and Delft University of Technology has decided that putting an extremely limited theory of mind in place for certain driving decisions, including turns and merges, should be possible.

The idea behind the researchers' work, described in a new paper in PNAS, involves a concept called social value orientation, which is a way of measuring how selfish or community-oriented an individual's actions are. While there are undoubtedly detailed surveys that can provide a meticulous description of a person's social value orientation, autonomous vehicles generally won't have time to give their fellow drivers surveys.

So the researchers distilled social value orientation into four categories: altruists, who try to maximize the enjoyment of their fellow drivers; prosocial drivers, who try to take actions that allow all other drivers to benefit (which may occasionally involve selfishly flooring it); individualists, who maximize their own driving experience; and competitive drivers, who only care about having a better driving experience than those around them.
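In the social value orientation literature, this trade-off is commonly formalized as an angle that weights a driver's own reward against other drivers' rewards, with the categories corresponding to ranges of that angle. Here is a minimal sketch of that convention in Python; the function names and the category thresholds are illustrative assumptions, not the paper's exact values:

```python
import math
from enum import Enum

class SVOCategory(Enum):
    ALTRUIST = "altruist"            # maximizes other drivers' reward
    PROSOCIAL = "prosocial"          # weights own and others' reward together
    INDIVIDUALIST = "individualist"  # maximizes own reward
    COMPETITIVE = "competitive"      # wants to do better than others

def svo_utility(own_reward: float, others_reward: float, phi: float) -> float:
    """Utility of an action under SVO angle phi (radians).

    phi = 0 is a pure individualist, phi = pi/4 is prosocial,
    phi = pi/2 is a pure altruist, and phi < 0 is competitive
    (others' reward counts negatively).
    """
    return math.cos(phi) * own_reward + math.sin(phi) * others_reward

def categorize(phi: float) -> SVOCategory:
    # Illustrative boundaries only; the paper defines its own.
    if phi > math.radians(67.5):
        return SVOCategory.ALTRUIST
    if phi > math.radians(22.5):
        return SVOCategory.PROSOCIAL
    if phi > math.radians(-22.5):
        return SVOCategory.INDIVIDUALIST
    return SVOCategory.COMPETITIVE
```

Note how the individualist is just the special case where the angle is zero, so the weighting on everyone else's reward vanishes.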

Value-oriented

The researchers developed a formula that would let them calculate the expected driving trajectory for each of these categories given the starting position of other cars. The autonomous vehicle was programmed to compare the trajectories of actual drivers to the calculated version and use that to determine which of the four categories the drivers were likely to be in. Given that classification, the vehicle could then project what their future actions would be. As the researchers wrote, "we extend the ability of AVs' reasoning by incorporating estimates of the other drivers' personality and driving style from social cues."
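The paper defines its own formula for this comparison, but the shape of the step is easy to sketch: score each category's predicted trajectory against the observed one and pick the closest match. The `classify_driver` function and the mean-squared-error metric below are our assumptions for illustration, not the researchers' implementation:

```python
import numpy as np

def classify_driver(observed: np.ndarray,
                    predictions: dict) -> str:
    """Return the SVO category whose predicted trajectory best matches
    the observed one. Trajectories are (T, 2) arrays of x/y positions;
    mean squared position error is an assumed metric, not the paper's.
    """
    def mse(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.mean(np.sum((a - b) ** 2, axis=1)))
    return min(predictions, key=lambda cat: mse(observed, predictions[cat]))

# Usage with made-up trajectories: each category's prediction would come
# from the researchers' driving model, which is a stand-in here.
observed = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.2]])
predictions = {
    "altruist":      np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]]),
    "prosocial":     np.array([[0.0, 0.0], [0.9, 0.1], [1.8, 0.2]]),
    "individualist": np.array([[0.0, 0.0], [1.1, 0.1], [2.2, 0.2]]),
    "competitive":   np.array([[0.0, 0.0], [1.5, 0.1], [3.0, 0.3]]),
}
print(classify_driver(observed, predictions))  # -> "individualist"
```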

This is substantially different from some game-theory work that's been done in the area. That work has assumed that every driver is always maximizing their own gain; if altruism emerges, it's only incidental to this maximization. This new work, in contrast, bakes altruistic behavior directly into its model of how other drivers act.