
Facebook Is Using AI To Make Language Translation Much Faster

Work done by its artificial intelligence research team shows its new method could be nine times faster and more efficient than the current state of the art.

[Source photo: courtesy of Facebook]

By Daniel Terdiman | 2 minute read

Facebook, whose oft-stated mission is to connect the world, took another step in that direction today with a new tool that could transform how its 1.3 billion daily users experience the social network, making it easier for them to share content with friends and family.

The company’s artificial intelligence research team (FAIR) announced this morning the completion of a yearlong project aimed at boosting language translation efficiency. The new method, which relies on what are known as convolutional neural networks, or CNNs, has “achieved state-of-the-art accuracy at nine times the speed of” current systems, Facebook wrote in a blog post. It’s a vital development for the social network: there are thousands of languages in use, and the company doesn’t want its users to worry that something they post will be ignored by others simply because they can’t understand it.

CNNs, first developed by FAIR lead Yann LeCun, are considered the building blocks for scalable automated natural language understanding and image recognition tools, and even voice recognition and visual search systems, all of which are immensely valuable to Facebook. Yet language translation has largely been the domain of what are known as recurrent neural networks (RNNs), which have tended to be the better choice for the task thanks to their high accuracy, Facebook wrote in its blog post.

[Animation: courtesy of Facebook]

But RNNs also have a shortcoming when it comes to language translation: they handle one word at a time before trying to predict the corresponding word in the target language. “This is a less natural fit to the highly parallel GPU hardware that powers modern machine learning,” Facebook wrote, “because the computation cannot be fully parallelized since each word must wait until the network is done with the previous word.”
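To make that concrete, here is a minimal sketch in PyTorch, with made-up toy sizes rather than Facebook's actual model, showing why recurrent processing is inherently sequential: each step's result depends on the step before it, so the work can't be spread across a GPU's parallel cores.

```python
# A minimal sketch (hypothetical toy sizes) of sequential RNN processing:
# each step's hidden state needs the previous step's result before it can run.
import torch
import torch.nn as nn

embed_dim, hidden_dim, seq_len = 32, 64, 10      # illustrative sizes only
cell = nn.GRUCell(embed_dim, hidden_dim)         # one recurrent step
words = torch.randn(seq_len, 1, embed_dim)       # one 10-word sentence as vectors

h = torch.zeros(1, hidden_dim)
for t in range(seq_len):
    # word t cannot be processed until word t-1 is finished
    h = cell(words[t], h)
```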

CNNs, on the other hand, tackle all the words at the same time, which makes them far more efficient on parallel hardware, and they process information hierarchically, a structure that Facebook says captures the complex relationships in data better than RNNs do.
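A rough companion sketch, again with hypothetical toy sizes rather than the published architecture, shows the convolutional alternative: a single convolution call covers every word position at once, and stacking layers widens the context each position can see, which is the hierarchy Facebook describes.

```python
# A minimal sketch (hypothetical toy sizes) of convolutional processing:
# one call covers every word position, so the GPU computes them in parallel.
import torch
import torch.nn as nn

embed_dim, hidden_dim, seq_len = 32, 64, 10
sentence = torch.randn(1, embed_dim, seq_len)    # (batch, channels, words)

layer1 = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)
layer2 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1)

# All positions are processed together; each added layer lets a position
# "see" a wider stretch of the sentence.
features = layer2(layer1(sentence))
```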

Historically, RNNs have done a better job at language translation than CNNs, but FAIR saw potential in the way the CNN architecture handles data, and in the end Facebook concluded that LeCun’s creation would scale better for translation, and could handle more of the roughly 6,500 languages spoken on Earth.

That’s why Facebook has settled on CNNs as the basis for its translation efforts going forward, a step that will likely affect the way people across the world communicate on Facebook itself and, potentially, on its sister services Messenger, WhatsApp, and Instagram.

And befitting Facebook’s penchant for sharing its AI research, the company said it is now open-sourcing the translation work, making it available to others who might use it for their own translation systems, for text summarization, and for other tasks.



ABOUT THE AUTHOR

Daniel Terdiman is a San Francisco-based technology journalist with nearly 20 years of experience. A veteran of CNET and VentureBeat, Daniel has also written for Wired, The New York Times, Time, and many other publications.

