

The EU’s top AI regulator explains why we need risk-based rules to build trust in AI systems

European Commissioner Margrethe Vestager addresses the audience during a press conference at the EU headquarters in Brussels on January 8, 2024. [Photo: KENZO TRIBOUILLARD/AFP via Getty Images]

BY Mark Sullivan | 8 minute read

In December, the three lawmaking institutions of the European Union came to a hard-fought agreement on the world’s first comprehensive set of rules on the development and use of artificial intelligence. The so-called AI Act uses a “risk-based” approach, applying a light touch to more benign systems (recommendation engines, for example), imposing stricter transparency rules on higher-risk systems (like those deciding loan qualification), and outright banning others (such as indiscriminate biometric surveillance in public spaces). As has been the case with other EU regulations, it’s possible that the AI Act foreshadows actions eventually adopted by U.S. lawmakers.

If there’s a face associated with the AI Act, it’s that of European Commission Executive VP Margrethe Vestager, who leads the EU’s agenda on tech and digital issues. Vestager has coordinated the EU’s work on the AI Act since it was originally introduced in 2021. Fast Company spoke to Vestager during her recent trip to the United States to visit regulators in D.C. and tech companies in Silicon Valley. This interview has been edited for clarity and brevity.

The EU is typically ahead of the U.S. on tech regulation, and AI seems like no exception. Many of the lawmakers I’ve spoken with acknowledge that they missed the boat on regulating social media and don’t want to repeat that mistake with AI. How do you interact with people in Washington, D.C., and what types of questions are they asking you? 

The thing that shines through a lot is an awareness that we want to get this right. When we started the Trade and Technology Council and I first got to know Gina Raimondo and Antony Blinken, one of the first things that we discussed was artificial intelligence. And we were very quick to agree on a common approach. That it wasn’t about the technology as such, it was about use cases. And it should be risk-based.

In Europe, obviously we’ve learned a lot about how to regulate sectors over the years. And we were very respectful of the fact that we would not be able to foresee what would happen in six months, or in two years, or in six years, but also having learned from the first big chapter of digitization how fast market dynamics change. That you cannot leave this untouched because once you get there, the effects are entrenched, and it becomes so much more difficult to get a handle on it.  

So, very early on we had this common approach based on use cases and on risks. We’ve been working with the stakeholder community to use different [regulatory] tools to be able to assess [whether] this technology in these use cases is fit to be marketed in the U.S. and in Europe. So when we started the AI Act, you know, our U.S. counterparts, they would know everything about it. And we of course stayed loyal to our initial approach.

Have you received criticism from people in the U.S. about how the AI Act will affect U.S. companies?

Where there’s been a conflict previously, it has not at all been the same when it came to AI. When we had the first Google cases, there was a bit of unease. You know, we had letters from members of Congress [saying], “What is it that you’re doing? You should not regulate the U.S. companies.” And we have, of course, taken this very seriously because we are not law enforcers against where a [company’s] headquarters is geographically placed. We are enforcers against the behavior in our marketplace. So we’ve taken that very seriously all the time, but when it came to AI, I think it has been a completely different discussion. 

Is that because there’s a greater sense that AI regulation is, in a sense, a borderless issue?  

I think the risks are so much more obvious here. The risks are greater. And because in the U.S., you’ve had societal movements that have made it very clear that some groups have been discriminated against to a very large degree. I think it’s the combination of societal movements and a technology that would pose a risk that such biases could become even more ingrained in your systems. 

How does the AI Act deal with bias, specifically biases in the huge training data sets that all of these companies use? Is there language about transparency around what is in those data sets and where they came from?

Yes, we worked with the metaphor of the pyramid. We think that at the bottom of the pyramid you have recommender systems—things where it’s easy for the consumer to see: “This is something that an algorithm has found for me. If I don’t want it, I can do something else.” Completely no touch [no regulation]. At the second layer, you get to customer service, where you will have more and more bots coming in. It will be increasingly difficult to distinguish if this is a human being. So there’s an obligation to declare that this is not a human that you’re talking with. But other than that, hands off.

Then you get to the use cases where, for instance, can you get an insurance policy? Will you be accepted to the university? Can you get a mortgage here? You need to [show] that the data that your algorithm has trained on actually reflects what it’s doing, that this will work without bias for these specific situations. And then the top of the pyramid, which are the prohibited use cases [such as] state surveillance point systems, or AI embedded in toys that could be used to make children do things that they would otherwise not do, or blanket surveillance by biometric means in public spaces.  


I’ve seen studies showing that AI used even by law enforcement in this country has over-indexed on minorities, or generated more false positives for minorities. The technology hasn’t been very reliable.

It’s been interesting to see that in some jurisdictions where police have started using AI, they have left it again, exactly because of too many false positives. Of course we have followed that very closely. The thing is that technology will improve, and it’s mathematically impossible for a technology not to have bias. But the biases should be what they’re supposed to be. For instance, if you have a system for accepting people to university, the system should select people who fulfill the requirements to be accepted to the university. The problem we are trying to prevent is that all our human biases become so ingrained in the systems that they will be impossible to root out. Because even if AI did not exist, we would have work ahead of us to get rid of the biases that exist in our society. The problem of AI is that if we are not very precise in what we do, unsolved biases will be entrenched and ingrained into how things are done. That’s why the timing of [the AI Act] is essential.

On the subject of copyright there’s a big court case coming up, with the New York Times v. OpenAI and Microsoft. The case may influence whether AI companies should be able to scrape data, including copyrighted content, from the web to train their models. There just isn’t much case law on this yet. How is the EU thinking about this issue?

We don’t have case law, either. And this was, for obvious reasons, very important for Parliament. We had not addressed it in the [2021] proposal of the AI Act because ChatGPT was not a thing yet. We had loads of artificial intelligence, but not the large language models. And because of that, the AI Act does not modify European copyright law. It stands as it is.  

So when you train a model, you have the obligation to put in place a policy to respect the Union’s copyright law, and you have to draw up and make publicly available a sufficiently detailed summary of the content used for training general-purpose AI.

It will be very interesting to watch because copyright law is not that old in Europe. It has been reassessed relatively recently, but it was not thought of for these [AI training] purposes.

Emmanuel Macron made a comment about the AI Act, saying that if you pass such strict rules in the EU, you will disadvantage EU AI companies versus U.S. companies. He said it might reduce investment in European AI companies. How do you respond to that?  

It’s been a central discussion because of this necessary balance between developing technology and creating trust in technology because people see that risks are being mitigated . . . Because all that we have done over the last years—we call it our digital decade—is based on a fundamental insight that for the use of technology to be pervasive, you need to trust it. To trust that technology is being used for the benefit of people. Because if you trust that things can be well done, then you’re also not afraid of new things.

I can see how having the government out there helping build trust might ultimately be a good thing for the market, especially when we’re talking about something as powerful as AI. 

I have learned in these years, working more closely with U.S. colleagues, that there is such a difference in thinking about governance and legislation, and in how we see market dynamics and the role of the state.

There are some very powerful people in Silicon Valley who truly believe that the federal government isn’t competent to regulate the tech industry, and it should leave tech companies to regulate themselves. 

But that’s the beauty of democracy. That you have people who have the honor to represent people, and how they choose to regulate doesn’t come from being tech savvy or having done business themselves, but from representing people who want a society where it makes sense to live. 


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

