
At F8, Facebook missed a chance to reassure us about its future

Mark Zuckerberg addresses the throng at F8. [Screenshot: Facebook]

BY Harry McCracken | 1 minute read

“It’s not enough to build powerful tools. We have to make sure they’re used for good. And we will.”

That was Mark Zuckerberg teeing off the day-one keynote yesterday at Facebook’s F8 conference, echoing a sentiment he’s expressed repeatedly (on Facebook, before Congress) in the wake of the Cambridge Analytica mess. For those words to mean anything, they need to apply not just to the tools that got abused last time around—involving profile information, Likes, and other core functionality of the Facebook service itself—but to everything the company does.

F8’s second-day keynote is traditionally a bit of a science fair. It’s devoted to research, technology, and the future, and as usual, this year’s edition was a lot of fun. Just a few highlights:

  • By feeding its machine-vision technology billions of photos, Facebook has made it uncannily accurate, letting it identify a picture of a pie not just as “food” but “pie,” for example.
  • The company is training computers to detect humans and their precise contours and poses in video, then re-create them in 3D.
  • It’s also getting really good at analyzing large sets of 2D photos and reassembling the scenes they contain into 3D worlds, and can even render virtual mirrors that reflect their surroundings.
  • Facebook researchers are working on technology to translate between any two of the world’s 6,000 languages—no training required.

As dazzling as these examples and others were, I was struck by the fact that the keynote didn’t touch on the matter of how Facebook will prevent bad guys from abusing features it may implement based on its new and future breakthroughs in AI and other areas. (Researcher Isabel Kloumann did preside over an excellent section on AI and ethics, but it focused on removing bias, not preventing misuse.)


It would be unrealistic to expect Facebook to have all the answers about making its cutting-edge research safe for humanity. But it would have been in the company’s interest to make clear that it’s asking the right questions now, rather than merely scrambling to correct mistakes it’s already made.



ABOUT THE AUTHOR

Harry McCracken is the global technology editor for Fast Company, based in San Francisco. In past lives, he was editor at large for Time magazine, founder and editor of Technologizer, and editor of PC World.