
MIND AND MACHINE

Watch this drone use AI to spot violence in crowds from the sky

Researchers say the technology can spot stabbings, shootings, and brawls, but civil libertarians have warned that software like it is error-prone and could lead to mass surveillance.

By Steven Melendez | 2 minute read

The right machine learning algorithms can let aerial surveillance systems spot when people are being violent on the ground, according to researchers in the U.K. and India.

The researchers trained a deep learning neural network on what they call the Aerial Violent Individual dataset, in which each of 2,000 labeled drone images includes between two and 10 people, some of them punching, kicking, shooting, stabbing, or strangling someone. The system is more than 88% accurate at identifying the violent people in the images and between 82% and 92% accurate at identifying who’s engaged in which violent activities, according to a paper slated to be presented in July at the Computer Vision and Pattern Recognition conference in Salt Lake City.
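The paper describes detecting the people in each aerial frame and classifying each individual’s activity. The snippet below is a rough illustration only, not the authors’ architecture: a toy per-person action classifier run over detected crops. The class names, network layers, and input sizes are all assumptions for the sake of the example.

```python
# Minimal sketch (not the authors' model): classify a cropped person image
# into one of the activity labels described in the paper.
import torch
import torch.nn as nn

ACTIONS = ["benign", "punching", "kicking", "shooting", "stabbing", "strangling"]

class PersonActionClassifier(nn.Module):
    def __init__(self, num_actions=len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # pool to one feature vector per crop
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, person_crops):          # (N, 3, H, W) crops of detected people
        x = self.features(person_crops).flatten(1)
        return self.head(x)                   # per-person action logits

# Usage: a detector would crop every person in a frame; each crop is scored independently.
model = PersonActionClassifier()
crops = torch.randn(4, 3, 64, 64)             # 4 hypothetical person crops
probs = model(crops).softmax(dim=1)
for i, p in enumerate(probs):
    print(f"person {i}: {ACTIONS[p.argmax().item()]} ({p.max().item():.2f})")
```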

In an email, University of Cambridge researcher Amarjot Singh suggests the system could be used to automatically spot outbreaks of violence at outdoor events such as marathons and music festivals. The system—for now limited to a low-flying consumer Parrot AR Drone—hasn’t been tested in real-world settings or with large crowds yet, but the researchers say they plan to test it around festivals and national borders in India. Eventually, they say, they may attempt to commercialize the software.

To conduct the live video analysis, the researchers used Amazon Web Services and two Nvidia Tesla GPUs, after training a neural network using a single Tesla GPU on a local computer. To mitigate privacy and legal issues, the cloud-based software is designed to delete each frame it receives from the drone after the image is processed.
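As a hypothetical sketch of that kind of loop (not the researchers’ actual code), each frame is analyzed and then immediately discarded, so only the derived detections are retained; `receive_frame_from_drone` and `detect_violent_individuals` below are stand-ins for the drone feed and the cloud-hosted model.

```python
# Minimal sketch of the frame-by-frame pipeline described above.
# Raw frames are dropped after inference; only alerts/metadata are kept.

def process_stream(receive_frame_from_drone, detect_violent_individuals):
    while True:
        frame = receive_frame_from_drone()              # decoded video frame (e.g. a numpy array)
        if frame is None:                               # stream ended
            break
        detections = detect_violent_individuals(frame)  # GPU inference in the cloud
        report(detections)                              # keep only the derived results
        del frame                                       # frame is discarded; nothing written to disk

def report(detections):
    for d in detections:
        print(d)
```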

There are, however, lingering privacy concerns about how this and other AI-based technologies could be used. Civil libertarians have warned that when applied to photos and video, AI technology is often inaccurate and could enable unwanted mass surveillance. Deploying such technologies in warfare raises bigger questions, given that the software could inform the decisions of a human (or potentially a robot) to fire a weapon, for instance.

https://twitter.com/mer__edith/status/1004339248089231360

The system is reminiscent of the Pentagon’s controversial Project Maven, which aims to automatically analyze military drone footage to spot features of interest. Google said it would end its involvement in the project after a number of employees quit and others circulated a petition calling on the company to abandon military contracts. Other tech companies, including Amazon, have been closemouthed about how they are contributing to the effort.


Related: Scientists are building a detector for conversations likely to go bad


Many companies and researchers are developing ways to automate the real-time analysis of torrents of video footage. Google and Facebook, as the Register notes, have already patented techniques for identifying human poses in images. Companies like Axon, formerly known as Taser, have worked on technology to identify people and activities in police body camera footage. And recently Amazon came under fire for selling facial recognition services to police departments—part of a fast-growing market for police surveillance technologies that are governed by few laws, if any.



ABOUT THE AUTHOR

Steven Melendez is an independent journalist living in New Orleans.

