
Meta will reportedly release AI chatbots with personalities in hopes of luring young users. But a new report says treating AI like humans has downsides.

Should we fear the rise of ‘anthropomorphic’ AI?

[Source photo: Rawpixel]

By Mark Sullivan | 4 minute read

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

Meta to use AI bot characters to attract younger users

In 2021, Meta CEO Mark Zuckerberg, painfully aware of Facebook’s inability to attract young users, directed teams across his company to double down on efforts to attract a Gen Z crowd and focus less on the platform’s existing, significantly older active user base. AI, it turns out, plays a big part in that initiative. The Wall Street Journal reported this week that Meta will soon announce a line of AI chatbots with defined “personas” across its social apps. A bot called “Bob the Robot” can help you write code but is also a self-described “sassmaster general,” according to the Journal. Another bot, “Alvin the Alien,” is very curious about users’ lives and habits. Meta has also worked on letting celebrities and creators interact with fans and followers via their own persona-driven bots.

Imbuing a chatbot with a sassy persona is nothing new. Microsoft did it long ago with Tay. More recently, Snap did it with My AI, which was built on top of OpenAI’s GPT large language models. Character.AI lets users create persona bots or engage with chatbots based on famous people.

Should human beings interact with bots as they would other humans? A new report from the advocacy group Public Citizen offers an emphatic no. “The human mind is naturally inclined to infer that something that can talk must be human and is ill-equipped to cope with machines that emulate unique human qualities like emotions and opinions,” Public Citizen research director Rick Claypool writes in the report. “Such systems can manipulate users in commercial transactions and isolate users by taking on social roles ordinarily filled by real people.” 

Despite any such trepidations, the days of bland, all-purpose LLM chatbots are probably numbered. The basic, generalized knowledge that chatbots gain from training will be table stakes, and a bot will be distinguished by the knowledge, skills, and “personality” built on top. Let’s hope that the bots of the future won’t pose as humans or use their surprising command of the language to persuade or manipulate.

ChatGPT can now comprehend images and sounds, not just text

Not only are AI chatbots gaining personas, but, far more importantly, they’re gaining new senses. The first large language models that powered chatbots were trained only on text and had no way to comprehend sounds or images. This week, OpenAI announced it has given the AI model underneath ChatGPT the ability to process both aural and visual data. A ChatGPT user, for example, can show the bot an image and then enter into a verbal conversation about it with the bot. 

It’s important to note that ChatGPT isn’t just classifying images or converting the spoken word to text. The bot is comprehending the meaning of images and sounds. In another use case, ChatGPT is given an image of the contents of a refrigerator, then suggests a number of dinner ideas using the available ingredients, and provides the recipes. Maybe ChatGPT truly has gained “eyes and ears,” as OpenAI says. 

Note that OpenAI suggests using the newly multimodal ChatGPT to “request a bedtime story for your family, or settle a dinner table debate.” This sure sounds a lot like the kinds of things Amazon wants us to do with Alexa. But Alexa is not based on an LLM, much less one that processes both aural and visual data. Right now if you wanted to play a song for ChatGPT you’d need to manually input the audio into your chat. But put ChatGPT in a free-standing device with sensitive ambient mics and suddenly you’ve got an AI-powered smart speaker that goes far beyond what the Alexas and Siris of the world are capable of—at least, in their current forms.

Fast Company’s AI 20: The other man behind Microsoft’s $10 billion OpenAI deal 

Fast Company writers have been hard at work this summer assembling the inaugural AI 20 list, spotlighting the most influential people building, designing, regulating, and litigating AI. We’ve been profiling the winners throughout September, and now the feature is nearing its end, but we saved some of the best stuff for last. We recently profiled Microsoft’s Kevin Scott, who played a central role in bringing about his company’s $10 billion investment deal with OpenAI. Here’s an excerpt about Scott’s first serious look at the AI startup:

[H]e was soon impressed by how clearly Altman and company could see the future implications of their research. “It wasn’t just ungrounded hypothesizing about the future,” [Scott] says. “Something was really happening.” OpenAI could already calculate the eye-popping performance levels that could be reached by scaling up the size of its models and the underlying compute power. Scott could see the potential functionality that such performance increases could enable in Microsoft products. “That’s where we got very deliberate and serious about trying to figure out how to do this partnership,” he says.

You can read the whole profile here.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
