MIT’s whiz kids have been busy with a tricky problem: how to build a semi-smart car that can take control when the situation ahead looks too dangerous to be left to a human driver’s fallible instincts. It’s a slightly different way of thinking about automated driving, and one that may be more immediately useful in the years when smart cars share the roads with older “dumb” cars, before robot drivers fully take over.
Automated driving is definitely coming one way or another, and we already have cars that can park themselves under a limited set of circumstances. Ultimately we may end up with totally automated driving systems and, as an almost impossible ideal, a 100% reduction in crashes. Until then, though, automated cars will share the road with human-driven ones. MIT thinks one excellent solution to this, which is also potentially a little simpler than a fully automated system, is an “intelligent transportation system” that models how other traffic is behaving in real time, and can thus predict an imminent accident, warn the driver to take evasive action, and, in an absolute emergency, force control decisions on the car’s systems.
According to work by a visiting PhD student and two MIT professors, the trick is to build automated driving systems that don’t assume every other vehicle is out to get you (remember the age-old advice your parents may have given you when you were learning to drive: “assume everyone else is crazy”). Instead, they’ve built an algorithm that predicts how cars will accelerate and decelerate at intersections or corners, so the system can steer its own vehicle out of any zone where it predicts the two vehicles could collide while maneuvering. It uses a game-theory-like decision system, drawing on data from in-car sensors and from sensors in roadside and traffic-light units, elements of a future intelligent driving network. A robot car it’s not; you couldn’t sit and text while someone else drove. But if the end goal of driverless cars is to save lives, as Google’s Sebastian Thrun claims, then this might be just as good.
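To get a feel for how that kind of prediction might work, here’s a minimal sketch in Python. This is purely our own illustration, not MIT’s actual algorithm: it assumes each car holds a constant acceleration over a short horizon, samples both predicted paths along a shared lane axis, and flags a conflict if the gap ever drops below a safety margin. All function names and thresholds are hypothetical.

```python
def predict_positions(pos, speed, accel, horizon=3.0, step=0.1):
    """Sample (time, position) along one axis, assuming constant acceleration."""
    samples = []
    t = 0.0
    while t <= horizon:
        # Standard kinematics: x(t) = x0 + v*t + 0.5*a*t^2
        samples.append((t, pos + speed * t + 0.5 * accel * t * t))
        t += step
    return samples

def collision_predicted(car_a, car_b, safety_gap=2.0):
    """car_* are (position, speed, acceleration) tuples on the same lane axis.

    Returns the time of the first predicted conflict, or None if the two
    predicted paths never come within safety_gap of each other.
    """
    path_a = predict_positions(*car_a)
    path_b = predict_positions(*car_b)
    for (t, xa), (_, xb) in zip(path_a, path_b):
        if abs(xa - xb) < safety_gap:
            return t
    return None

# A following car at 20 m/s closing on a braking lead car 30 m ahead:
t_conflict = collision_predicted(car_a=(0.0, 20.0, 0.0),
                                 car_b=(30.0, 10.0, -2.0))
```

In a real system the inputs would come from the in-car and roadside sensors the article describes, and the prediction would cover 2-D paths and uncertainty rather than a single lane axis.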
They’ve tested the idea on small model cars, some driven by a computer and some by a human. It worked, although, in the best traditions of science, there were a few crashes, which let the team refine their model. These crashes (in 3% of trials) were caused by delays in acquiring sensor data, a limitation that faithfully mirrors real-world driving, which is subject to all sorts of unexpected complications.
They’re now moving on to tests in full-size cars, and are working to fold fallible human reaction times into the system so the car can decide when it’s safe to issue a warning (which it mustn’t do too often, or drivers will dismiss it as an unreliable system that cried wolf) and when it absolutely has to wrest control from the driver. Which means it may not be long before, in the fractions of a second before an accident, your smart car kindly but firmly lets you know you’re not doing a great job of driving, by suddenly steering itself and stamping on the brakes to save your life.
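That warn-versus-intervene decision can be sketched very simply. The following Python snippet is our own hypothetical illustration of the idea, not the team’s system: it compares the predicted time to a collision against an assumed human reaction time plus a margin for actually braking or steering. Both thresholds are made-up values for illustration.

```python
HUMAN_REACTION_S = 1.5  # assumed driver perception-plus-reaction time (illustrative)
ACTION_MARGIN_S = 0.5   # assumed extra time needed to actually brake or steer

def choose_response(time_to_collision):
    """Return 'none', 'warn', or 'intervene' for a predicted conflict time.

    time_to_collision is seconds until the predicted collision, or None
    if no collision is predicted.
    """
    if time_to_collision is None:
        return "none"        # nothing ahead looks dangerous
    if time_to_collision > HUMAN_REACTION_S + ACTION_MARGIN_S:
        return "warn"        # the driver still has time to react
    return "intervene"       # too late for a human: the car acts itself
```

Keeping the “warn” band wide and the “intervene” band narrow is exactly the cry-wolf trade-off the article describes: warn too eagerly and drivers tune the system out; intervene too eagerly and the car takes over when it didn’t need to.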
[Image: Flickr user ahmedrebea]