No one can be everywhere at once. That includes human rights groups and government monitors, which already grapple with scarce resources.
A new algorithm developed for USAID and the nonprofit group Humanity United can now mine a huge database of news reports to predict where such groups should focus their attention, down to the state, city, or regional level. Developed by Xiaoshi Li of Beijing, a data scientist who took the top $12,000 prize in the organizations’ Tech Challenge for Atrocity Prevention, the model takes 23 factors into account to make its predictions.
It doesn’t take an expert to predict that places like Darfur or Syria are at risk of atrocities. But the algorithm could be most useful in less obvious cases. “What was incredibly exciting about the winner was the ability to predict where mass atrocities would occur where they haven’t occurred in the past,” says Michael Kleinman, investments director at Humanity United. (Overall, the winning algorithm predicted events 62% more accurately than a “dummy” comparison model that assumed the status quo would persist. But in cases where there had been no atrocity in the previous six months, it was 112% more accurate.)
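The “dummy” comparison described above can be sketched in a few lines. This is an illustrative assumption, not the actual challenge code: the article only says the baseline assumed the status quo would persist, so here the baseline predicts an atrocity next month only if one occurred in the previous six months.

```python
# Hedged sketch of a "status quo" baseline model, as described in the article.
# The toy data, six-month window, and accuracy metric are all illustrative
# assumptions; the real challenge models and scoring are not public here.

def dummy_predict(history):
    """Predict an event next month only if one occurred in the last 6 months."""
    return 1 if any(history[-6:]) else 0

def accuracy(preds, actuals):
    """Fraction of months where the prediction matched what happened."""
    return sum(p == a for p, a in zip(preds, actuals)) / len(actuals)

# Toy monthly series per region: 1 = an atrocity event recorded that month.
regions = {
    "region_a": [0, 0, 0, 0, 0, 0, 1, 1, 1],  # flares up after a calm stretch
    "region_b": [1, 1, 1, 1, 1, 1, 1, 1, 1],  # ongoing conflict
}

for name, series in regions.items():
    preds = [dummy_predict(series[:t]) for t in range(6, len(series))]
    actuals = series[6:]
    print(name, round(accuracy(preds, actuals), 2))
```

Note how the baseline does fine on the region already in conflict but misses the first month of the flare-up entirely; that gap is exactly where the article says the winning model showed its largest advantage.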
Take the region around Tripoli in Libya in 2011, where previously everything had been calm. The winning algorithm begins spotting unrest and predicts a 5%-per-month risk for the region starting in early March 2011 (see the red line in the chart below). The risk proves real six months later, when rebel troops entered Tripoli amid heavy fighting between August 21st and August 24th and Gaddafi’s forces collapsed. By comparison, the dummy model (in blue below) showed no risk until after everything was over.
What makes this kind of work possible is an unprecedented geo-tagged database: a comprehensive list of every political event from 1979 to the present. GDELT, the Global Database of Events, Language, and Tone, was set up earlier this year by the University of Illinois and contains more than 250 million events, from riots to protests to elections, classified by “tone” and circumstance into more than 300 categories. Automated software scans news articles and updates the database every night.
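To make the database concrete, here is a minimal sketch of what querying GDELT-style event records might look like. The field names and sample records are simplified assumptions; real GDELT rows use the CAMEO event taxonomy and dozens of additional columns.

```python
# Hedged sketch: filtering GDELT-style geo-tagged event records.
# Fields and sample data are illustrative assumptions, not real GDELT rows.
from dataclasses import dataclass

@dataclass
class Event:
    date: str        # YYYY-MM-DD
    country: str     # country code
    event_code: str  # CAMEO-style category code (illustrative)
    tone: float      # average tone of coverage; negative = hostile

events = [
    Event("2011-02-17", "LY", "145", -7.2),  # riot-type event in Libya
    Event("2011-02-20", "LY", "193", -9.5),  # armed-conflict-type event
    Event("2011-02-18", "EG", "014", 1.3),   # neutral statement elsewhere
]

# A monitor might ask: which hostile events were recorded in Libya?
libya_unrest = [e for e in events if e.country == "LY" and e.tone < 0]
print(len(libya_unrest))  # 2
```

A nightly-updated table of this shape is what lets a prediction model aggregate counts and tone by region and month, then score each region’s forward risk.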
The atrocity prediction model is open source, so any organization can use it, improve on it, or adapt it for its own purposes. Kleinman says it’s no “silver bullet” but a decision-making tool that could help organizations deploy their resources to prevent atrocities, such as ethnic cleansing or mass rape, to maximum effect.
The next step for USAID and Humanity United, a group founded by philanthropist Pam Omidyar, is now to raise awareness about the tool. “A warning that sort of falls on deaf ears isn’t much of a warning at all. And that can be a long process: sensitizing people to what these quantitative assessments of risk really mean,” says Kleinman. “Our hope is that by drawing public attention to a potential crisis, that puts pressure on the government involved.”