
Autonomous cars need to think more like humans, less like machines

Perceptive Automata is trying to create an AI that reflects human intuition about pedestrian behavior.


On a recent day in San Francisco, Sam Anthony watched a cyclist pull up to a stop at a traffic light slightly past the white line at the intersection. From the other direction, a self-driving car had a green light, but didn’t move. Even though it was obvious to human drivers that the person on the bike didn’t plan to keep going–his foot was down, and he was resting on his handlebars–the car couldn’t tell.


This inability of machines to understand and anticipate human action is the problem that Perceptive Automata, the startup Anthony cofounded, is attempting to solve. Right now, autonomous cars use a combination of cameras, lidar (infrared laser pulses), and radar to detect people and other vehicles on the road. But they struggle to predict behavior.

“Today’s systems are really good at knowing the geometry or the physics of the world around them, but they’re not good at the psychology of the world around them,” says Sid Misra, CEO of Perceptive Automata.

Human drivers make continual judgments about other humans on the road. “About 250 milliseconds after seeing someone, you’ve made all of these inferences about their state of mind, their intention, their awareness,” says Anthony. “Those inferences are something that humans are incredibly good at, and self-driving cars to this point have had zero ability to do.”

The startup, which launched out of work at a Harvard lab, built a model that tries to mimic human intuition. The founders took footage at street corners and asked groups of people what they saw in hundreds of thousands of clips. The judgments weren't based on single cues but on a constellation of features suggesting whether someone is paying attention and whether they're about to move or stay put. If someone has set a bag down and is reaching to pick it up, for example, or is tightly clutching a coffee cup, or there's a little tension in their shoulders, they're probably getting ready to cross the street.

The AI model, trained on the data from those human judgments, aims to think the same way a human would. “We’ve developed extensive real-world test sets capturing situations that are both ambiguous and unambiguous, and the goal for our model development is to make judgments that are indistinguishable from human judgments on that entire set,” says Anthony. The model isn’t yet perfect, but can already provide information that other systems can’t. For a situation like the San Francisco cyclist stopped at an intersection, the current tech converges with human judgments 95% of the time.
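The key idea in that approach, training against groups of human annotators rather than a single hard label, can be sketched in a few lines. This is a hypothetical toy illustration, not Perceptive Automata's actual model: the feature names, vote shares, and the simple logistic classifier are all assumptions made up for the example. The point is that when each clip's label is the *fraction* of annotators who said "this person intends to cross," the model learns to reproduce human uncertainty, not just a yes/no answer.

```python
import numpy as np

# Made-up toy data: each clip summarized by two invented features
# ("attention toward traffic", "body leaning into the street"),
# labeled with the share of human annotators who said "will cross".
X = np.array([
    [0.9, 0.8],   # attentive and leaning in -> most say "will cross"
    [0.8, 0.1],   # attentive but stationary -> annotators are split
    [0.1, 0.0],   # distracted and still     -> most say "won't cross"
])
p_human = np.array([0.95, 0.50, 0.05])  # soft labels: vote shares, not 0/1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic model fit by gradient descent on cross-entropy against the
# soft labels, so its output tracks annotator uncertainty.
w, b = np.zeros(2), 0.0
for _ in range(20000):
    p_model = sigmoid(X @ w + b)
    grad = p_model - p_human           # gradient of soft-label cross-entropy
    w -= 1.0 * (X.T @ grad) / len(X)
    b -= 1.0 * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))  # converges toward [0.95, 0.5, 0.05]
```

Note that the ambiguous middle clip stays near 0.5: a model trained on hard labels would be forced to call it one way or the other, while a model trained on vote distributions can report "humans themselves are unsure," which is exactly the signal a planner needs.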

In theory, self-driving cars promise to eliminate the human errors that lead to most traffic deaths (in the U.S. alone, there were more than 40,000 traffic deaths in 2017; globally, there are more than a million road deaths each year). But if the cars can't drive predictably, they can't be widely used. Right now, if an autonomous car can't tell whether someone is stepping into a taxi or starting to cross the street, it may come to a sudden halt. It's not uncommon for self-driving cars to be rear-ended.


“If you reduce your speed insufficiently for a pedestrian that wants to cross the road, you run the risk of not being able to stop in time,” says Anthony. “On the other hand, if you come to an emergency stop for every pedestrian in the road, even if they obviously have no intention of crossing, then your vehicle will be infuriating for other drivers, nauseating for passengers, and dangerous, because its behavior will be unpredictable for other road users.”
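The trade-off Anthony describes follows from basic stopping-distance physics: total stopping distance is reaction distance plus braking distance, d = v·t_react + v²/(2a). A quick calculation with illustrative numbers (a 0.5-second system reaction time and 7 m/s² of braking are assumptions for the example, not figures from the company) shows how steeply the required distance grows with speed:

```python
# Stopping distance = reaction distance + braking distance:
#   d = v * t_react + v**2 / (2 * a)
# Assumed illustrative values: 0.5 s reaction time, 7 m/s^2 deceleration.
def stopping_distance_m(speed_kmh, t_react_s=0.5, decel_ms2=7.0):
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v * t_react_s + v ** 2 / (2 * decel_ms2)

for speed in (30, 50):
    print(f"{speed} km/h -> {stopping_distance_m(speed):.1f} m")
# 30 km/h -> 9.1 m
# 50 km/h -> 20.7 m
```

Because the braking term is quadratic in speed, a car that only modestly reduces speed for an ambiguous pedestrian more than doubles the distance it needs to stop, which is why "slow down a little, just in case" is not a safe middle ground.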

In the Uber crash that tragically killed a pedestrian in March, parts of the car’s software were reportedly disabled, likely because the car drove too erratically and slowly with them on. Perceptive wants to help cars drive smoothly, without any compromise on safety (and without suggesting that humans need to change their behavior to accommodate driverless cars). “We believe–and we believe society is coming to agree–that the only way for self-driving cars to be accepted is to start from a premise of causing zero deaths or serious injuries,” says Anthony.

The company’s model focuses on pedestrians and cyclists now, but the team is expanding the AI to understand other drivers, along with people on scooters and motorcycles. The team will also develop models for different locations, since signals can vary by culture. The current model is in testing with autonomous car companies, and it recently became part of a platform offered by Renovo, a mobility software company deploying its technology at large scale.

Ultimately, self-driving cars could signal to humans that they understand what a person intends to do–perhaps along the lines of a prototype with cartoon-like eyes that Land Rover recently unveiled.

“From our perspective, that signaling has to be part of a communication process, and you can’t have a communication process if you don’t start with the car being able to understand the signals that are coming from people and pedestrians,” says Anthony. “A car that is just blaring, ‘I’m going to do this, I’m going to do that’–that’s not useful. But a car that can see that someone wants to cross and say ‘Hey, go ahead’–that becomes very, very useful.”


About the author

Adele Peters is a staff writer at Fast Company who focuses on solutions to some of the world's largest problems, from climate change to homelessness. Previously, she worked with GOOD, BioLite, and the Sustainable Products and Solutions program at UC Berkeley.
