The dating world has changed, and as more people meet via dating apps and online services, more predators are using the sites, too. Back in 2011, Match.com settled a lawsuit with a woman who claimed she was sexually assaulted by a fellow member of the site. While Match posts safety tips for online dating, critics say dating services can do more to keep their users safe.
Now, Match Group is taking a step toward doing just that.
The company behind popular matchmaking services like Tinder, OkCupid, and Match.com is forming a new board with a mission of preventing sexual assault, Axios reports. To give the new Match Group Advisory Council some teeth, the company has recruited six members, including Tarana Burke, who founded the #MeToo movement and serves as the senior director of Girls for Gender Equity.
The council, which will convene four times a year, will advise Match Group on improving safety across its platforms, study how safety issues play out on online dating services, and may recommend new safety features, such as a possible RAINN hotline.
We reached out to Match for comment and will update if we hear back.
A cynic might point out that Match made the move in the wake of legislation like SESTA/FOSTA, which makes online platforms liable if they knowingly facilitate sex trafficking. Either way, it’s good when companies start to think about their customers as more than just dollar signs.
After earning its wings from Alphabet, Loon is setting sail for Kenya.
The company’s high-flying balloons, which beam internet service down to underserved areas, are heading to the African nation by 2019, according to Reuters. The new Google sibling will deploy its internet-relaying balloons in partnership with local provider Telkom Kenya.
Kenya is a good choice for Loon’s service, as the government has prioritized getting the country online and making it part of a “knowledge-based economy by the year 2030.” In 2013, Kenya launched its National Broadband Strategy to extend fiber optic cables across the country, and according to a 2017 report from a content delivery network, its internet speeds are already faster than those in the U.S. Kenya’s authorities are reportedly hoping that the technology can help the country achieve full internet coverage, even for citizens who live far from cities.
It’s the first step in what Google X’s Astro Teller wrote in a Medium post was Loon’s mission to “work with mobile network operators globally to bring internet access to unconnected and underconnected people around the world.”
Chance The Rapper dropped some major news in his new song “I Might Need Security.”
“I got a hit-list so long I don’t know how to finish, I bought the Chicagoist just to run you racist bitches out of business.”
And he wasn’t just flexing as some rappers are wont to do–he really did buy the Chicago-based media company.
“I’m extremely excited to be continuing the work of the Chicagoist, an integral local platform for Chicago news, events and entertainment,” said Chance The Rapper in a statement. “WNYC’s commitment to finding homes for the ‘ist’ brands, including Chicagoist, was an essential part of continuing the legacy and integrity of the site. I look forward to re-launching it and bringing the people of Chicago an independent media outlet focused on amplifying diverse voices and content.”
Chance The Rapper buying the Chicagoist is just another notch on his activist belt. The Chicago native has donated more than $3 million to the Chicago public school system, and, on a smaller yet still effective scale, has a habit of renting out movie theaters for people to support black-made films, including Get Out and Marshall.
“We are delighted that the Chicagoist assets are finding a new home in the hands of a proud Chicagoan,” said Laura Walker, president and CEO of New York Public Radio, in a statement. “WNYC has a strong commitment to local journalism and building community, and we are pleased that these assets will be used to support local coverage in the great city of Chicago.”
Fuchsia, a scratch-made operating system that some Google employees started building in 2016, is picking up steam within the company. Bloomberg reports that more than 100 engineers are now working on Fuchsia, and CEO Sundar Pichai has shown internal support for the project. If all goes to plan, the operating system could launch on connected devices such as smart speakers within a few years, and could replace Android within five years.
Why bother? As I wrote in 2016, Fuchsia is an attempt to create a truly modern operating system without the baggage of Linux, on which Android is based. On small-scale connected devices (like thermostats, connected cameras, and so on), it would require less code and would, therefore, be less prone to security vulnerabilities. On phones and computers, it could allow for faster updates and may avoid intellectual property disputes, like the one that’s dragged on for years between Google and Oracle over Android.
Still, Bloomberg cautions that Google’s leadership hasn’t committed to a roadmap for Fuchsia, and its engineers have reportedly clashed with Google’s ad team over privacy features, which might allow users to curb the data collection that Google’s business depends on. One source described Fuchsia as merely a way to hold the attention of senior engineers, who might otherwise defect to other companies for new technical challenges. And even if Google does bet on Fuchsia, replacing the existing Android ecosystem will be a slog.
Hide and seek could take on new meaning if schools take RealNetworks up on this offer.
The company is offering K-12 learning institutions free facial recognition software that can be downloaded via its website, Technology Review reports. The software is called SAFR, and according to a press release, it uses IP-based cameras and other hardware to “recognize staff, students, and visitors in real time to help improve school safety.” It also promises to “streamlin[e] entry, record keeping, campus monitoring, and guest check-in.”
The tech is currently being tested at a school in Seattle (specifically, the school that the founder’s kids attend) where kids can unlock a gate by smiling at a surveillance camera—which may sound chilling to anyone who has read Nineteen Eighty-Four. It will reportedly be piloted in the state of Wyoming later this year.
In the absence of reasonable gun control laws, the program is meant to improve safety in schools by allowing for real-time monitoring. However, facial recognition tech in general—and for kids in school in particular—is a growing area of concern for privacy advocates and parents alike. After it was reported that Western New York’s Lockport School District would be introducing “the invasive and error-prone technology,” the New York Civil Liberties Union sent a letter to the New York State Education Department urging it to consider students’ and teachers’ privacy.
“Students should think of schools as a safe place to learn,” the ACLU wrote. “They should not worry that their every move is being monitored or that their pictures could end up in a law or immigration enforcement database simply because they came to class.”
It’s not just privacy rights advocates: Last week, Microsoft revealed in a blog post that it is asking Congress to regulate AI-powered face recognition software.
For months, Comcast and Disney have been embroiled in a bidding war over Twenty-First Century Fox. The two had competing visions of what they’d do with the company. Most recently, Disney bid $71 billion, and all eyes were on Comcast to see what it would do.
It turns out, $71 billion was just a tad too rich for the media conglomerate. In a statement, the company said: “Comcast does not intend to pursue further the acquisition of the Twenty-First Century Fox assets and, instead, will focus on our recommended offer for Sky.”
While this bidding war is over, Disney is also looking to buy Sky, so the battle will continue. Comcast has currently bid $34 billion for the British media company. We’ll see if Disney counters.
In another statement, Comcast chairman and CEO Brian Roberts said, “I’d like to congratulate Bob Iger and the team at Disney and commend the Murdoch family and Fox for creating such a desirable and respected company.”
“We stand here and it feels like we’re finally winning,” Tiffany Thomas Lopez said on stage at ESPN’s ESPY awards on Wednesday night.
Lopez is one of the so-called “Sister Survivors,” the more than 150 women who were sexually abused by disgraced USA Gymnastics and Michigan State team doctor Larry Nassar. They were awarded the Arthur Ashe Courage Award for their “strength and resolve” and for bringing “the darkness of sexual abuse into the light.”
Some 140 survivors of Nassar’s crimes came together on stage to accept the award, but survivor Sarah Klein said those present represented “hundreds more.” She added: “Make no mistake, we are here on this stage to present an image for the world to see: a portrait of survival, a new vision of courage.”
In January, Nassar was sentenced to 40 to 175 years in prison after the court heard seven days of statements from women who said he sexually abused them during his long tenure as team doctor, in what may be the biggest case of sexual abuse in the history of American sports.
Olympic gold medalist and survivor Aly Raisman said, “To all the survivors out there, don’t let anyone rewrite your story.”
InterContinental Hotels has teamed up with Baidu to create Smart Rooms fueled by artificial intelligence. The rooms don’t require an app or a special button. Instead, guests at InterContinental Beijing Sanlitun and InterContinental Guangzhou Exhibition Centre can do things like tell their room they are “going to sleep,” and the room will recognize the phrase and know what to do with the information.
While it can’t tuck you into bed yet, it will shut the curtains and turn off the lights in the room.
Thanks to voice-control technology, guests can simply tell the room to dim the lights, warm things up a little, or soften the music, and can order up some champagne and oysters, keeping their hands free for whatever they want.
There are only two hotels equipped with the AI Smart Rooms for now, but InterContinental Hotels plans to roll out the smart service to a total of 100 AI-powered suites across China within the year.
InterContinental isn’t the only brand rushing to embrace technology in the hopes of wowing business travelers and wooing millennials. Marriott is piloting a new facial recognition check-in program and high-tech showers, while Hilton is taking a phone-based approach to smart rooms.
Epic Games tweeted that the servers are back online.
Servers for the popular Fortnite video game have been down since 4 a.m. ET, as Epic Games conducts scheduled maintenance and rolls out new content at the same time, an unusual combination.
Such disruptions usually only last a few hours, but as the game remained offline at 6:30 a.m., fans began sounding off on social media, with hundreds of new tweets pouring in per minute. Many wanted to know what was taking so long, while others expected to be compensated with free V-Bucks, the in-game currency.
It’s unclear exactly when the game will return, but you can check Epic’s public status page for regular updates. Good luck!
Japanese giant SoftBank and Chinese ride-hailing giant Didi are teaming up to bring a ride-hailing service to Japan, reports Bloomberg. The new service will be called Didi Mobility Japan and will start trials this year in Osaka, followed by trials in Tokyo, Kyoto, Fukuoka, and Okinawa. Though ride-hailing services are booming in other parts of the world, they have been slow to catch on in Japan, where it is illegal for private-car owners to use their own vehicles to pick up and transport passengers. As a result, ride-hailing companies like Uber operate there essentially as taxi and car-dispatch services.
The restriction on true ride-hailing services in Japan is something SoftBank CEO Masayoshi Son called “stupid”: “In Japan, ride-hailing is prohibited by law. It’s incredible that our national government is denying the future that is inevitable. Is there a country that is as stupid as that?”
SoftBank and Didi are hoping the Japanese government changes its tune–especially since many of Japan’s 16 million foreign tourists are already used to ride-hailing services in other countries. Until that happens, SoftBank and Didi will operate Didi Mobility Japan as another car-dispatch service in the country, hoping that the upcoming 2020 Olympics in Tokyo puts pressure on the government to allow true ride-hailing services in Japan.
Forty-nine years ago yesterday, man touched down on the moon for the first time. Two of Apollo 11’s three-man crew, Neil Armstrong and Buzz Aldrin, landed on the surface of the moon while Michael Collins remained behind. We all know how history went, but what if things had turned out differently? What if Armstrong and Aldrin couldn’t get back to Collins and were left stranded on the moon? The White House had penned a speech titled “In event of moon disaster” should the worst have happened, reports CNBC. Thankfully, this speech was one Nixon never needed to give.
IN EVENT OF MOON DISASTER:
Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace.
These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery. But they also know that there is hope for mankind in their sacrifice.
These two men are laying down their lives in mankind’s most noble goal: the search for truth and understanding.
They will be mourned by their families and friends; they will be mourned by their nation; they will be mourned by the people of the world; they will be mourned by a Mother Earth that dared send two of her sons into the unknown.
In their exploration, they stirred the people of the world to feel as one; in their sacrifice, they bind more tightly the brotherhood of man.
In ancient days, men looked at stars and saw their heroes in the constellations. In modern times, we do much the same, but our heroes are epic men of flesh and blood.
Others will follow, and surely find their way home. Man’s search will not be denied. But these men were the first, and they will remain the foremost in our hearts.
For every human being who looks up at the moon in the nights to come will know that there is some corner of another world that is forever mankind.
Bike-sharing company Ofo is said to have laid off roughly 70% of its 100-person U.S. workforce, reports Forbes. In addition, Ofo is reported to be shuttering its locations in multiple cities across the country–though it’s unknown just which cities will be affected. As of June, Ofo operated in 30 U.S. cities. In a statement, Ofo said:
“As we continue to bring bikeshare to communities across the globe, ofo has begun to reevaluate markets that present obstacles to new, green transit solutions, and prioritize growth in viable markets that support alternative transportation and allow us to continue to serve our customers.”
The downsizing comes as a big surprise to many, especially since Ofo had previously announced plans to expand to 100 U.S. cities by the end of the year.
Tom Gruber was one of Siri’s original founders before Apple bought the technology in 2010. He was most recently the head of Siri’s Advanced Development group at Apple before stepping down, reports The Information. According to sources close to Gruber, he is retiring and plans to pursue other interests, including photography and ocean conservation. Apple has also lost its head of search, Vipul Ved Prakash, who joined the company in 2013 after Apple acquired the search engine and analytics company Topsy. Their resignations come a week after Apple announced it had a new AI chief–John Giannandrea, Google’s former artificial intelligence research and search head.
An undercover investigation has shed new light on the often hidden process by which Facebook removes hateful or violent content from its platform. In some cases, according to the report, it takes a lot for the company’s moderators to remove even the most toxic posts or pages–especially if they are popular.
The investigation aired on Tuesday by Britain’s Channel 4 centers on a Dublin-based content moderation firm called CPL Resources, which Facebook has used as its main U.K. content moderation center since 2010. An investigative reporter got a job there, revealing how CPL’s content moderators decide to remove content reported by users to be hateful or harmful.
This graphic video depicting a child abuser beating a young boy was left on Facebook for several years, despite requests to have it taken down. #Dispatches went undercover to investigate why the social media network is leaving extreme content on its site.
Perhaps the most telling are the revelations about how Facebook polices racist or hateful political speech on the platform. This, after all, is the stuff that was used at massive scale to influence both the 2016 presidential election and the U.K.’s Brexit vote.
Normally, if a given page posts five pieces of content that violate Facebook’s rules in a 90-day period, that page is removed, a policy described in documents recently seen by Motherboard. YouTube, by comparison, allows user pages only three strikes in 90 days before deletion.
However, if the Facebook page happens to be a big traffic generator, moderators use a different procedure. CPL is required to put these pages in a queue so that Facebook itself can decide whether or not to ban them. The investigation found that pages belonging to far-right groups with large numbers of followers were allowed to post higher-than-normal numbers of hateful posts, and were moderated in the same way as pages belonging to governments and news organizations.
One post contained a meme suggesting a girl whose “first crush is a little negro boy” should have her head held under water. Despite numerous complaints, the post was left on the site.
One CPL moderator told the undercover reporter that the far-right group Britain First’s pages were left up despite repeatedly featuring content that breached Facebook’s guidelines because “they have a lot of followers so they’re generating a lot of revenue for Facebook.” Facebook confirmed to the producers that it does have special procedures for popular and high-profile pages, including Britain First’s.
CPL trainers instructed moderators to ignore hate speech toward ethnic and religious immigrants, and to ignore racist content. “[I]f you start censoring too much then people lose interest in the platform . . . It’s all about making money at the end of the day,” one CPL moderator told the undercover reporter.
On Wednesday, Denis Naughten, Ireland’s Communications Minister, said he had requested a meeting with Facebook management over the “serious questions” raised by the exposé, and that company officials would meet with him on Thursday in New York, where he is attending a UN meeting.
“Clearly Facebook has failed to meet the standards the public rightly expects of it,” he said in a statement.
Dispatches reveals the racist meme that Facebook moderators used as an example of acceptable content to leave on their platform.
Facebook’s complex moderation system–peopled by thousands of employees and contractors behind closed doors in offices around the world–has become an increasing focus of European reporters amid growing scrutiny by regulators. A series in the Guardian last year exposed a trove of the company’s content policies, which some moderators cited for their “inconsistency and peculiar nature.”
“The crack cocaine of their product”
One of Facebook’s earliest investors, Roger McNamee, told Channel 4 that Facebook’s business model relies on extreme content.
“From Facebook’s point of view this is, this is just essentially, you know, the crack cocaine of their product, right? It’s the really extreme, really dangerous form of content that attracts the most highly engaged people on the platform. Facebook understood that it was desirable to have people spend more time on site if you’re going to have an advertising-based business, you need them to see the ads so you want them to spend more time on the site. Facebook has learned that the people on the extremes are the really valuable ones because one person on either extreme can often provoke 50 or 100 other people and so they want as much extreme content as they can get.”
(McNamee was a mentor to CEO Mark Zuckerberg, and recruited Sheryl Sandberg to the company from Google to develop Facebook’s massive advertising business.)
This is what makes the Channel 4 exposé so remarkable. It suggests that Facebook not only hosted lots of racially and socially charged political content during events like Brexit and the 2016 presidential election, but that its leadership was aware of the share-ability of that content, and of the ad impressions it meant.
A Facebook representative took issue with McNamee’s assertion.
“Shocking content does not make us more money, that’s just a misunderstanding of how the system works,” he told Channel 4. “People come to Facebook for a safe secure experience to share content with their family and friends.” The spokesperson offered no numbers to reinforce his claim.
The Silicon Valley giant responded to the investigation in a blog post on Tuesday, saying it was retraining its moderation trainers and fixing other “mistakes.” In an interview with Channel 4, Facebook vice president of global policy Richard Allan described efforts the company was taking and apologized for the “weaknesses” the broadcaster had identified in the platform’s moderation system.
Separately on Wednesday, Facebook addressed growing criticism about the role of its platform in inciting deadly mob violence in some countries, telling reporters that it would begin to remove misinformation from Facebook that leads to physical harm.
“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline. We have a broader responsibility to not just reduce that type of content but remove it,” Tessa Lyons, a Facebook product manager, told the New York Times. The new policy does not apply to Instagram or WhatsApp, which has also been implicated in spreading dangerous rumors.
In an interview with Recode published on Wednesday, CEO Mark Zuckerberg offered a head-scratching explanation for why Facebook should permit certain content. “I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times,” he told Recode‘s Kara Swisher.
In April an undercover video aired by Channel 4 helped expose the sometimes incendiary methods by which Cambridge Analytica tried to influence voters, including by using Facebook. The broadcaster’s new revelations help shed more light on the platform’s role in that equation, and reinforce the suspicion that when it comes to toxic but popular content, Facebook prefers to look the other way. Deleting social media content, be it fake news or hate speech or other terrible things, is a messy business. But from Facebook’s point of view, deleting content can look like bad business, too.
On Monday Federal Communications Commission chairman Ajit Pai said he had “serious concerns” about Sinclair Broadcast Group’s plans to acquire Tribune Media. The four-person commission today voted unanimously to send the $3.9 billion acquisition proposal to an administrative judge to decide its merits.
Maryland-based Sinclair specializes in owning and managing local broadcast news affiliates. The company’s management has been outspoken in its support of Donald Trump and his right-wing agenda, and has sought to use its network to spread those messages. In one instance earlier this year, the company forced local news anchors to issue a warning about mainstream media spreading fake news.
The acquisition of Tribune Media would give Sinclair even more control of local news markets. The combined assets of the companies would reach 7 in 10 U.S. households, including those in major markets like New York, Chicago, and Los Angeles.
Sinclair previously said it would sell 23 of its TV stations to satisfy government rules, but because of loopholes and communications law nuances, some of the stations would remain effectively under the control of Sinclair.
In an amendment to its plan today, Sinclair said it would sell stations in Dallas and Houston, in the hopes of giving its Tribune deal a better chance with regulators. It also promised that Tribune’s WGN in Chicago would be sold outright to Sinclair to make the station’s ownership more transparent.
Ultimately, however, the company’s amended proposal appeared to have little impact on the FCC’s decision.
Butterfingers rejoice: The next smartphone you buy might have a screen that’s meaningfully more likely to survive a tumble without shattering. Corning, whose Gorilla Glass has shipped on more than 6 billion devices since its 2007 introduction, held an event at its Silicon Valley office this morning to announce Gorilla Glass 6. The company says the material is twice as robust as 2016’s Gorilla Glass 5, and able to withstand 15 drops from one meter, or around three feet, on average. It’s due to arrive on new phones by the end of the year.
According to Corning research, damage due to drops is a primary concern of smartphone owners, so the company focused on improvements in that area for Gorilla Glass 6. The new material offers about the same resistance to scratches as Gorilla Glass 5.
Corning’s event included demos of the gadgetry the company uses to test Gorilla Glass, much of which involves dropping or slapping dummy phones against 120- and 180-grit sandpaper, which helps the company approximate contact with surfaces such as rough asphalt in a consistent manner. “Nobody’s broken more phones or glass than Corning,” said John Bayne, Gorilla Glass’s general manager.
Bayne acknowledged that lab-test results don’t convey the range of real-world possibilities when a phone suffers a fall. Some folks may crack their phone in minor mishaps on the day they buy it, he said, while others could knock it from a building’s second story without shattering the glass. (Tell me about it: A factory-fresh Samsung Galaxy S8 Plus with Gorilla Glass 5 fell out of my shirt pocket onto a linoleum floor when I bent over, and it cracked.) But Corning’s data shows that smartphone owners typically drop their phone seven times a year, well under the 15 drops that Gorilla Glass 6 is rated to endure on average.
The Corning event wasn’t just about durability. Many new phones now ship with glass backs—which allows for wireless charging and is helpful for cellular reception—so the company also showed off new options for spiffing up phone backsides. Phone makers can use Gorilla Glass and inkjet printing to achieve surprisingly convincing simulations of materials such as wood and snakeskin, not just visually, but also down to the texture.
At Fast Company, we’ve been tracking the growth of the natural beauty industry, as brands like Beautycounter and Juice Beauty snag major funding to develop clean alternatives to skincare and cosmetic products. But startups are now popping up overseas, eager for a share of the American clean beauty market.
Today, Australian brand Crop launches with a range of high-quality products that are certified organic and non-toxic. The brand is releasing everything from lipstick and eyeshadow to face masks and cleansers.
Crop was founded by Charlie Denton, whose family has been in the beauty business for 35 years, creating products for other brands. As he set out to launch his own brand, he worked with suppliers all over the world–including factories that make products for luxury beauty brands–to create brand-new formulas using the latest natural beauty technology. His goal was to create products that actually perform, without harmful ingredients.
All of Crop’s products are certified by the European Cosmetic Organic Standard, which establishes minimum common requirements about what constitutes natural beauty products, and ensures that both the production and manufacturing processes are environmentally sound, and safe for human health.
China is a hugely important market for Apple; the country is seen as an engine for the future sales growth of iPhones and related services. One of those key services is iCloud storage, and the Chinese government informed Apple earlier this year that in order to sell the service in China the user content would have to be hosted by Chinese companies.
So Apple made a deal with the Chinese government-controlled hosting company Guizhou-Cloud Big Data to provide the data center space. Apple said it built in safeguards that would ensure the security and privacy of the data.
Today China Telecom, a state-controlled utility, put out a press release saying it is providing hosting space for the iCloud data while the Guizhou-Cloud data centers are being built. This has some on Chinese social media worrying aloud that China Telecom might snoop on their content.
An Apple spokesperson assured Fast Company that the China Telecom hosting arrangement is only temporary.
Apple fought the Chinese government’s hosting requirement but ultimately failed. Here’s the company’s statement from February:
“Our choice was to offer iCloud under the new laws or discontinue offering the service. We elected to continue offering iCloud as we felt that discontinuing the service would result in a bad user experience and less data security and privacy for our Chinese customers.”
Also in February, Amnesty International released this statement about the forfeiture of the iCloud data to Guizhou-Cloud: “By handing over its China iCloud service to a local company without sufficient safeguards, the Chinese authorities now have potentially unfettered access to all Apple’s Chinese customers’ iCloud data.”
The press release from China Telecom, in some ways, couldn’t come at a worse time. The U.S. trade war with China has heated up. Steep tariffs are in place, and more could be coming. One of the key arguments for tariffs on Chinese goods by the Trump administration is a belief that the Chinese government is actively working through Chinese companies to steal intellectual property from U.S. companies.
Correction: An earlier version of this story implied that China Telecom was the permanent host of the iCloud data, and that Apple no longer holds encryption keys for the affected content. Apple says China Telecom’s hosting service is being used only as a stopgap measure, and that Apple still holds the encryption keys.
A group of top companies and researchers in the artificial intelligence field, including Alphabet’s DeepMind, Clearpath Robotics/Otto Motors, Tesla CEO Elon Musk, and University of California at Berkeley professor Stuart Russell, have signed a pledge not to participate in or support the development of lethal autonomous weapons, colloquially called killer robots.
“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others–or nobody–will be culpable,” they write in the pledge. “There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”
More than 170 organizations and 2,464 individuals have signed the pledge, according to the Future of Life Institute, which organized the campaign.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” said Future of Life Institute President Max Tegmark. “AI has huge potential to help the world–if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”
This isn’t the first effort by many of the signatories to stand against lethal AI: Musk previously cofounded OpenAI, a nonprofit seeking to research safe general AI, and Google recently announced a set of AI principles, where it pledged not to develop AI weapons. That announcement came after Google was revealed to be participating in the Defense Department’s controversial and secretive Project Maven, which involves automated processing of drone footage with machine learning algorithms.
The United Nations has also held talks on banning or restricting lethal autonomous weapons, which could pick targets and fire at them with minimal human intervention.
Recode just published a doozy of an interview. Kara Swisher, its cofounder and current editor at large, sat down with Mark Zuckerberg for its Recode Decode podcast. Among other things, Swisher brought up the recent controversy over Facebook’s refusal to ban the conspiracy theory site Infowars from its platform. Facebook’s rationale has been that it doesn’t want to ban dissenting views, and that if it cracked down on Infowars, it would be taking a partisan stance.
But Infowars isn’t merely partisan. It willfully misleads readers. For instance, it has peddled claims that the Sandy Hook shooting was fake, that the government controls the weather, and that Hillary Clinton and her fellow Democratic Party leaders ran a clandestine child sex ring in a pizza parlor’s basement. These claims are all plainly false—even dangerous—and yet Infowars has continued to spread them. So were Facebook to ban the site, it would essentially be fulfilling its pledge to crack down on disinformation.
But in this new interview, Zuckerberg not only doubled down on the choice to leave Infowars on the site. He took it a step further. Here’s the full quote (emphasis added by me):
Let’s take this a little closer to home. So I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down because I think there are things that different people get wrong—I don’t think that they’re intentionally getting it wrong. It’s hard to impugn intent and to understand the intent. I just think as important as some of those examples are, I think the reality is also that I get things wrong when I speak publicly. I’m sure you do. I’m sure a lot of leaders and public leaders who we respect do, too. I just don’t think that it is the right thing to say we are going to take someone off the platform if they get things wrong, even multiple times.
Let’s unpack this a little. Essentially Zuckerberg is claiming that every disagreement boils down to a difference of opinion. In his example–that some people believe that the slaughtering of around 6 million Jews and millions of others didn’t happen–Zuckerberg says this view is okay if the people saying it are not “intentionally getting it wrong.”
In other words, if someone espouses a view that’s both dangerous and factually incorrect, it’s totally okay as long as they believe what they are saying. The implication is that facts don’t matter because everything is up for interpretation.
Holocaust denial is one of the more powerful tools that white supremacists use to prove their ideology’s worth. Zuckerberg giving such sentiments a pass on his platform paves the way for all kinds of hate speech to run rampant (which it surely already does).
Ultimately, Zuckerberg’s thoughts on this matter shouldn’t come as a shock. Facebook isn’t a platform for the free exchange of ideas–it’s a business whose purpose is to serve ads to as many people as possible. What he wants is to make money, and for that, Facebook needs engagement. By making room for Holocaust deniers and Infowars fans, Zuckerberg is simply making sure he has more eyeballs to which he can serve ads.
The debate over Infowars gets at Facebook’s broader stance on fringe content. The platform is dependent on engagement, which is why hate, outrage, and fear-mongering are so integral. They are its lifeblood. Whether he admits it or not, Zuckerberg knows that Facebook needs this content for revenue generation much more than it needs fact-checking or hand-wave-y pledges to crack down on misinformation.
Perhaps this quote isn’t exactly what Zuckerberg intended to say. But even if he walks back his claims, we finally got a raw look at how the Facebook founder views the biggest problem plaguing his company.