Hello and welcome to Modern CEO! I’m Stephanie Mehta, CEO and chief content officer of Mansueto Ventures. Each week this newsletter explores inclusive approaches to leadership drawn from conversations with executives and entrepreneurs, and from the pages of Inc. and Fast Company. Sign up to get it yourself every Monday morning.
Reddit, the digital community platform, prides itself on corporate transparency. While the company lets members assign themselves usernames that can mask their real identities, Reddit’s biannual transparency report highlights the categories of content it has published, and what’s been removed in an effort to keep the platform “safe, healthy, and real.” It recently launched an online “transparency center” that houses information on safety, security, and policies.
A similar ethos applies to artificial intelligence. Reddit’s chief technology officer says he approves of AI-created content in its forums—as long as the bots are forthright about being inanimate. “Reddit is full of self-identified bots that try to be helpful,” says Chris Slowe, who spoke with me last month at a forum organized by Fast Company and Deloitte Cyber & Risk Strategies. (An example of a useful bot might be one that stabilizes and crops video submissions to a forum.) “We actually think that that’s a good use case for AI,” he adds.
Chatting with chatbots
Slowe’s comments on useful bots reminded me of Deepak Chopra’s work with the Never Alone initiative, an alliance supporting mental health. The group has developed an emotional well-being chatbot called PIWI (pronounced “pee-wee”), which stands for people interacting with intent. The bot is programmed to respond to people in crisis, and high-risk individuals are immediately transferred to a live counselor. Chopra says young people actually prefer chatting with the bot because they don’t feel judged.
The idea that people feel more comfortable revealing certain information to robots is well documented. We often prefer machines to humans when divulging sensitive information such as net worth or illness.
Reddit’s Slowe cites ELIZA, the 1960s therapy “chatterbot,” as another example of computers gaining human trust, but Reddit can be an all-too-human counterpoint to seemingly neutral bots. The company notes that people come to the platform seeking feedback from strangers. “I feel like on Reddit we have people who actually want to be judged,” Slowe says. “We have an entire community—Am I the Asshole?—and it is literally people asking [to be critiqued].”
Bots still need bosses
He’s half-joking, but he makes an interesting point. Substitute “judgment” for “judged” and the shortcomings of AI come into focus. While many AI proponents believe the technology will eventually match human capabilities, even the most sophisticated AI programs aren’t ready to “make unsupervised decisions,” as Harvard Business Review notes. At best, AI today is a tool that many companies and leaders are already using to help assess situations.
In the meantime, if you’re just looking for some good old-fashioned denigration, may I suggest a certain provocatively named Reddit forum?
Bots—or not?
Are you using AI in your organization? Let me know how you’re doing so and the benefits and challenges you’ve found. And if you’re facing other challenges, I’d like to hear that, too. Drop me a note at stephaniemehta@mansueto.com.
Read, listen, and watch: Bots gone mild
Duolingo’s chatbot won’t judge your lousy French. Read more
3 time-saving AI tools for founders. Read more
What’s a human’s place in an AI-driven workplace? Listen here
We Are the Nerds: The Birth and Tumultuous Life of Reddit, the Internet’s Culture Laboratory. Order here
Recognize your brand’s excellence by applying to this year’s Brands That Matter Awards before the early-rate deadline, May 3.