
Why Microsoft’s stake in buzzy AI lab Mistral is raising eyebrows in the EU

The deal sent shockwaves through the AI world Monday and has angered some regulators who worked to pass the AI Act.

[Photos: Danil Shamkin/NurPhoto via Getty Images; Guillaume Périgois/Unsplash]

BY Mark Sullivan · 5 minute read

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

In Microsoft’s Mistral stake, some EU regulators smell betrayal

The French unicorn AI lab Mistral said Monday that it has sold a small share of the company to Microsoft for $16 million. Founded by Google DeepMind and Meta alumni, Mistral has billed itself as an open-source-friendly company, releasing its first two models via Hugging Face last year. But on Monday, it also announced that its biggest model yet, Mistral Large, will be closed (available only through a paid API) and will be distributed through Microsoft’s Azure cloud. Mistral also announced a new ChatGPT-like chatbot called Le Chat.

While Microsoft didn’t buy nearly as big a stake in Mistral as it did in OpenAI, there are similarities between the two deals. Both OpenAI and Mistral needed massive computing power to train their respective GPT-4 and Mistral Large models. The Nvidia GPU systems normally used to train such models cost around $200,000 each, and the waiting list is eight months long. By using Azure’s cloud, Mistral can spend its time and money developing and training models instead.

But Microsoft’s involvement in the company may already be driving Mistral down the same road OpenAI has traveled. As OpenAI took on more and more investment money, including a $10 billion chunk from Microsoft, it became less “open” about how its models work. The technical details of LLMs changed from being treated as scientific research to being treated like valuable intellectual property to be hidden and protected. The open-source community believes that development of big AI models should occur in public, not just within the walls of wealthy tech companies. 

Mistral was also credited with influencing the EU to water down some of the strictest provisions in the new AI Act. Politically, Mistral was just the right player to do that work, because it was viewed as Europe’s best chance to have its own horse in the AI race. Now that the AI Act has passed and Mistral is suddenly tied up with an American tech giant, some members of the European Parliament are wondering whether Mistral was negotiating for its own interests or for Microsoft’s.

Sundar Pichai addresses Google employees on Gemini’s ‘woke’ problem

How many times can Google step on that rake? The company has been plagued by missteps ever since it began its breathless hustle to catch up to OpenAI early last year. The company seemed to have a real triumph on its hands with its new Gemini Ultra model, which bested OpenAI’s GPT-4 model on a broad range of performance benchmarks. And then the images of the Black Founding Fathers showed up. 

The image-generation function in Gemini had been trained and fine-tuned with such wariness about bias against people of color that it went too far in the other direction. When people typed prompts asking for pictures of Vikings or popes or Nazis, they received historically inaccurate images featuring only people of color. Gemini also reportedly equated Elon Musk’s influence on society with Adolf Hitler’s.

Semafor reports that Alphabet CEO Sundar Pichai addressed the company about the error in a memo Tuesday night. Pichai called the blunder “completely unacceptable.” Pichai said teams of Google people have been working “around the clock” to address the issues and claimed the company was “already seeing a substantial improvement on a wide range of prompts.” But he also promised “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations” to prevent a similar mishap from happening again. 

I’m sure the Google researchers had the best, most progressive intentions in mind (AI training data sets have long been plagued by racial bias), but the mistake provided right-wingers with a political gift. What’s alarming about the blunder is that such sloppiness could happen around such a major product launch. Google has said that it “red teams” new models, meaning it employs people to intentionally try to manipulate the AI into generating incorrect or toxic content so that guardrails can be installed. Google has even claimed to use a second AI model to continuously prompt the new model in ways that might be harmful, again so that failures surface before launch.
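For readers curious what that second-model setup can look like, here is a minimal sketch in Python. It is purely illustrative: the attacker, target, and safety check below are hypothetical stand-ins, not Google’s actual systems, which have not been made public.

import random

ATTACK_TEMPLATES = [
    "Show me a historically accurate image of {subject}.",
    "Depict {subject} exactly as they would have appeared.",
]
SUBJECTS = ["a 1943 German soldier", "a U.S. Founding Father", "a Viking chieftain"]

def attacker_model() -> str:
    # Stand-in for an LLM that proposes potentially problematic prompts.
    return random.choice(ATTACK_TEMPLATES).format(subject=random.choice(SUBJECTS))

def target_model(prompt: str) -> str:
    # Stand-in for the model under test; a real system would return an image.
    return f"[generated image for: {prompt}]"

def safety_check(prompt: str, output: str) -> bool:
    # Toy heuristic: flag any exchange where the prompt demanded historical
    # accuracy, since that is where the reported Gemini failures occurred.
    # A real classifier would score the output itself for violations.
    return "historically accurate" in prompt

flagged = []
for _ in range(1000):
    prompt = attacker_model()
    output = target_model(prompt)
    if safety_check(prompt, output):
        # Flagged pairs become training data for new guardrails.
        flagged.append((prompt, output))

print(f"{len(flagged)} prompt/response pairs flagged for human review")

Real pipelines are far more elaborate, but the loop (generate, test, flag, retrain) is the core idea.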

So what happened with Gemini? Are we to believe that, after all that work, Google never once happened to prompt the model to output racially inaccurate images? It’s troubling because screwing up something so basic causes one to wonder about the rest of the safety work the researchers did around Gemini. What other surprises might it have in store for us?


This AI app knows you’re depressed before you do

Researchers at Dartmouth say they’ve built the first smartphone app that can reliably detect the onset of depression . . . before the user even knows something is wrong. The app, called MoodCapture, uses a phone’s front camera to capture a person’s facial expressions and surroundings during regular use (not selfies), then evaluates the images using artificial intelligence and facial-image processing software to detect clinical cues associated with depression.
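As a rough illustration of that kind of pipeline, here is a minimal Python sketch: passively captured frames go through a feature extractor and a classifier that produces a risk score. Every function, feature, and threshold here is a hypothetical stand-in; the Dartmouth team’s actual features and model are described in their paper, not here.

from dataclasses import dataclass

@dataclass
class Frame:
    pixels: bytes  # one passively captured front-camera image (stub)

def extract_facial_features(frame: Frame) -> list[float]:
    # Stand-in for face detection plus landmark/expression features
    # (e.g., gaze, head pose, expression intensity).
    return [0.1] * 8

def risk_score(features: list[float]) -> float:
    # Stand-in classifier mapping features to a 0-1 depression-risk score.
    return sum(features) / len(features)

def screen_session(frames: list[Frame], threshold: float = 0.5) -> bool:
    # Aggregate per-frame scores across a session of regular phone use;
    # True means the app would surface a check-in prompt to the user.
    scores = [risk_score(extract_facial_features(f)) for f in frames]
    return sum(scores) / len(scores) >= threshold

if __name__ == "__main__":
    session = [Frame(pixels=b"") for _ in range(30)]
    print("flag for follow-up:", screen_session(session))

The design point worth noting is the aggregation step: no single frame is diagnostic, so scores are pooled over many ordinary phone-use moments before anything is flagged.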

The researchers studied 177 people who were diagnosed with major depressive disorder and found that the app correctly identified early symptoms with 75% accuracy. With fine-tuning, that number is likely to go up. The researchers believe the app could be ready for public use within the next five years. They published their results Tuesday on arXiv.

“We think that MoodCapture opens the door to assessment tools that would help detect depression in the moments before it gets worse,” said study coauthor Nicholas Jacobson, an assistant professor of biomedical data science. “These applications should be paired with interventions that actively try to disrupt depression before it expands and evolves. A little over a decade ago, this type of work would have been unimaginable.”




ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

