Today’s discussion: Do robot editors lack the “serendipity factor” that human editors add? What kinds of publications need a human at the helm? Are readers being done a disservice, or is this an improvement in the public’s access to information?
That guarantees its content will be the brain-cell-killing kind of stuff that publishers call “content,” instead of real stories and, you know, real journalism. An algorithm that uses social media metrics to curate content is the equivalent of giving everyone in your audience equal editorial control–even that one reader who gets a kick out of Honey Boo Boo. Think I’m being cynical? Check out this infographic comparing how much real news (i.e., Syria) was shared on social media versus competing non-news like Miley Cyrus. —Michael Grothaus
Nearly every major media company is already using algorithms to highlight high-quality or trending content.
The only difference here is that the Guardian is firing up the (literal) presses because “reading from the screen is fucking awful.”
But that’s not quite true. Reading a long-form news story on a mobile-enabled website is often frustrating–but that’s because of the user experience, not the screen itself. You have to be online; you have to have time to read at the moment you discover the article; and you have to plow ahead in the face of distracting ads, extras, and navigation elements. That’s why I rely on Pocket, which is similar to Instapaper, but superior: It enables me to save long-form content for offline reading, when I have the time, and it presents the text in a clutter-free interface reminiscent of iBooks and other apps that need to display words by the thousand.
The Guardian is solving the wrong problem. Sure, use an algorithm to identify the stories I should read during a moment of quiet with my coffee. But don’t kill trees in order to address a user experience issue. —Ainsley O’Connell
There’s scientific evidence that supports the claim: Scientists say that the human brain interprets groups of paragraphs as a physical landscape, one that engages the parts of the brain that make mental maps. You are built to read words in 3-D space. That’s not to mention that the feel of paper has a poetic quality to it. Last year in Scientific American, Ferris Jabr wrote, “Turning the pages of a paper book is like leaving one footprint after another on the trail–there’s a rhythm to it and a visible record of how far one has traveled.”
The Long Good Read newspaper went live last fall at the #GuardianCoffee café, in London’s hipster Shoreditch neighborhood, with the understanding that you just want to take your eyes away from a screen sometimes. Ainsley hit it on the head when she said that some digital magazine apps do provide a good user experience for reading those long pieces. But I’ll diverge and say that paper is neither a waste of trees nor a cop-out on user experience. It simply gives you a chance to take your eyes off the screen now and then. —Tina Amirtha
Those people don’t just have “editor” in their job title. The Guardian hasn’t published the full details of the robot’s algorithm, but the main selection parameters beyond length appear to be social media shares and comments.
NewsWhip tracks the world’s most shared news and the company recently published a selection of “people-powered front pages.” It renders what the front pages of newspapers around the world would look like if organized by reader popularity. Among other gems, it revealed that on the day the analysis was done, the readers of both the U.K.’s Daily Mail and the Guardian–two publications with very different readerships–chose the same top story.
“Our people powered front pages, and all of NewsWhip, is driven algorithmically,” NewsWhip’s cofounder Paul Quigley told me recently. “But what we’re aggregating and processing is 100% human decisions and signals of quality: decisions to share, tweet, and comment on stories.”
I asked Quigley what a truly automated news machine would look like. “I think the perfect human-less process for distributing and filtering stories to the people who want them has a few constituent parts,” he explained. “First, amazing entity extraction and semantic analysis: knowing what a story is about and how it relates to other stories; second, quality assessment: Is it of interest generally? To leaders in your niche? To 35-year-old Manchester United fans who live in London? Who’s been sharing it and reading it and arguing about it? Third, Machine Learning: Is it of interest to you based on your history? Based on the news machine’s map of what you’ve engaged with before, will you want to read this story?”
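Quigley’s three-stage pipeline–semantic analysis, quality assessment, and machine-learned personalization–could be sketched, very loosely, as follows. Everything here is a hypothetical illustration: the field names, weights, and helper functions are stand-ins, not anything NewsWhip or the Guardian has described.

```python
# Toy sketch of the three-stage filtering pipeline Quigley describes.
# All names, weights, and signals below are hypothetical illustrations.

def extract_topics(story):
    """Stage 1 stand-in: a real system would do entity extraction and
    semantic analysis; here stories arrive pre-tagged with topics."""
    return set(story.get("topics", []))

def quality_score(story):
    """Stage 2: aggregate human signals -- shares, comments, length."""
    return (story.get("shares", 0)
            + 2 * story.get("comments", 0)  # comments weighted above shares
            + (50 if story.get("words", 0) > 1500 else 0))  # long-form bonus

def personal_score(story, reader_history):
    """Stage 3 stand-in for machine learning: count how many of the
    story's topics the reader has engaged with before."""
    return len(extract_topics(story) & reader_history)

def rank_stories(stories, reader_history):
    """Rank by general quality, boosted by per-reader topical relevance."""
    return sorted(
        stories,
        key=lambda s: quality_score(s) * (1 + personal_score(s, reader_history)),
        reverse=True,
    )

stories = [
    {"title": "Syria update", "topics": ["syria", "politics"],
     "shares": 120, "comments": 40, "words": 2400},
    {"title": "Celebrity news", "topics": ["celebrity"],
     "shares": 300, "comments": 10, "words": 300},
]
ranked = rank_stories(stories, reader_history={"politics", "syria"})
```

For a reader whose history skews toward politics, the personalization stage lifts the long Syria piece above the more widely shared celebrity item–which is exactly the kind of judgment a purely share-driven front page cannot make.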
Quigley, however, still has a soft spot for the old-fashioned, fleshy, human editor. “At a personal level I still love a good curator like The Browser,” he says, “which in a sense was an inspiration for NewsWhip.” —Ciara Byrne
Sports briefs, crime blotters, obituaries, weather–this is the bread and butter of “cyber stringers.” And it’s getting more and more common.
A couple of key companies behind what’s called auto-writing are Narrative Science, founded as a partnership between Northwestern University’s schools of engineering and journalism, and StatSheet. Each uses algorithms to sift through data sets and craft original stories and graphics. Machines are editing, too; Wikipedia has used robot editors for years. And it’s not just news. Machines pen books and even poems. And there are applications stretching far beyond the word, into the worlds of science and manufacturing.
Reactions to the Guardian‘s move (which began in December) are mixed because people worry that robots lack the institutional knowledge, savvy, wit, and human emotion required to write. The bottom line is, well, the bottom line: If the Guardian and similar organizations can save money by using machines as scribes, the practice will continue. And chances are no one will even notice. That is a disservice to the men and women reporting and writing the news, but likely just another casualty in the push toward digital media and improved revenues. —Adam Popescu
We should be exposed to all types of stories. And human editors can make sure that happens, although they don’t always follow through. It’s not quite the same as following partisan politicians on your personal Twitter feed, but there’s a filter-bubble issue here: Just because something is popular on social media doesn’t mean it’s newsworthy, educational, or important–or accurate. Small factions could also rally behind a story and skew a topic’s apparent relevance. —Bryan Lufkin