
Act now on AI before it’s too late, says UNESCO’s AI lead

As a Global Forum on the Ethics of AI opens today in Slovenia, Gabriela Ramos sits down with ‘Fast Company’ to talk about the need to consider AI’s impacts beyond those discussed by first-world countries.

Gabriela Ramos [Photo: Marc Piasecki/Getty Images for Laureus]

By Chris Stokel-Walker · 6 minute read

Artificial intelligence has the potential to change the world. By many measures, it already has. But while policymakers and political leaders worry about the impact of generative AI tools such as ChatGPT on their population, and the leaders of the companies behind the tech worry their innovation is about to be trampled by regulation, some are concerned about the deeper issues.

Starting today, delegates are gathering in Slovenia at the second Global Forum on the Ethics of AI, organized by UNESCO, the United Nations’ educational, scientific, and cultural arm. The meeting is aimed at broadening the conversation around AI risks and the need to consider AI’s impacts beyond those discussed by first-world countries and business leaders.

Ahead of the conference, Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, spoke with Fast Company. The interview has been edited for length and clarity. 

Why is it important that UNESCO brings together all these people from different parts of the world to discuss the ethics of AI?

This was part of the implementation plan of the global standard that we have: The Recommendation on the Ethics of AI, that was adopted in 2021. When we adopted the Recommendation, we saw there were already 100 principles. We wanted to move away from principles, and do something much more practical. We created the Readiness Assessment Methodology (RAM) to measure where countries were in terms of their commitments. Part of it was to have a global forum every year to take stock of where we are.

At the end of 2021, people were very entrenched in the narrative that it’s better not to regulate because you’ll break it: “Governments never know anything, and the private sector knows better.” But this was before generative AI.

Because of the capacities of generative AI, you have seen how the changing narrative has really moved into: “Yes, we need to do something.”

The forum is not going to be looking at the technologies. We are looking at where the real policy benchmarks are. What are countries doing? Looking at the Recommendation we set in 2021 and seeing how much countries are delivering on outcomes. It’s the most global event. I have Switzerland and Japan calling me. I have South Sudan calling me. It’s great that we have all these people coming together.

Why are so many countries sending people here? What is it about the generative AI moment?

Everyone is very worried. One thing is the question of existential risks. That sounds ominous. But we know that the concerns we have with traditional AI can be magnified by ChatGPT and generative AI. The question of discrimination. The question of a lack of accountability. Lack of transparency. The question of property rights. The question of mass disinformation at scale. Attacks on women . . . all these things can be magnified.

I think people are concerned. And I think we are coming to a moment where the question is not just: “Let’s understand the risks.” That’s important, and we’ll continue to do that. But we also need to start learning more about, for example, how we develop liability frameworks. How exactly do you ensure that it’s not only geeks who are developing the tools that are going to be assessing the impacts?

Countries want to learn from each other. Ethics have become very important. Now there’s not a single conversation I go to that is not at some point referring to ethics—which was not the case one year ago.

You mentioned a whole load of ethical questions. And yet you mentioned this conversation around existential risks. Do you think conversations around the former have been overlooked as we’ve seen the eye-catching existential risk conversation taking place?


I think in the public discourse, yes, because of the visibility of people like [Geoffrey] Hinton or Yoshua Bengio, who were the ones that sounded the alarm bell. Everybody got a little bit more worried about that. But if you talk to policymakers, they are looking at the whole package.

It’s not that they don’t care about the fact that these technologies are being used by hundreds of decision-makers to determine whether you have access to a loan or to the financial sector. It’s not only about facial recognition. It’s a question of the fact that you can be discriminated against. You might not even know that you were denied a loan for renting a house because the machine used whatever definitions that excluded you.

These things are as important because they’re existential for people. If your daughter doesn’t get into university because, at some point, somebody thought your neighborhood is not worth taking into account, [that’s a problem]. We’re looking at the whole spectrum.

When I was working for another organization, I did a review of the telecoms sector in Mexico, and it was fascinating because there was this big monopoly. Then came the reform to introduce competition, and those decisions are not easy. I’m saying it because it’s very similar. It seemed that it worked. But who’s going to implement that? There was no strong regulatory office on telecoms. We had to create it. This is the focus of our work at UNESCO: the very nitty-gritty, learning from each other in terms of institutions, governance, laws, regulations, and incentives.

What’s the risk if you don’t go into the nitty-gritty?

A continuation of unsustainable trends we already have with us. You will have half the world not connected to the internet, and then you will have this super-high concentration of power and technology in a few hands, countries, and companies producing suboptimal outcomes. Besides being very unfair, it’s not sustainable. We’re seeing it now, as we speak. Who’s going to be watching out for elections and misinformation that can be created? Who’s in charge? What’s at stake here is the rule of law—and that’s too big.

Tech companies have previously said they can regulate themselves. Do you think they can with AI?

Let me just ask you something: Which sector has ever regulated itself? Give me a break. Of course, I’m a policy person, but the decision not to regulate these markets is a government decision. Having responsible business conduct, of course, is something that we all praise. But in the end, the ecosystem, the way that companies behave, and the incentives are set up by the government. And if governments decide they want free flow, what happens is you get these very skewed outcomes. We need to be super careful about how we do it, to establish a very good balance between responsibility and accountability, and innovation and creativity.

You’ve got 50 countries putting together their RAMs in 2024. How important is it that this is done this year, given we’re more than a year into the generative AI revolution, and companies have had more than 12 months’ head start?

I think it should have been done yesterday. But we are proud that we were working with Colombia in 2022 because they already had a national AI strategy. With Senegal, the same: They had a national strategy, and we’re reviewing it. Morocco is producing, as we speak, a white paper drawing from the recommendations of the RAM to get very concrete on what they’re going to do to implement it.

I think this is the way to do it. Countries are at very, very different positions in terms of this technological revolution. But it’s also true that it’s not great that the Global South is only a witness to, or user of, technologies developed in one place. You also need to have them developing their own technologies.

Because in the end, if you only have the U.S. and China producing 80% of all the developments, you’re missing a lot of the cultural diversity of the world. So yes: As usual, governments and institutions are always lagging behind market developments. But when we catch up, it works.


ABOUT THE AUTHOR

Chris Stokel-Walker is a freelance journalist and Fast Company contributor. He is the author of YouTubers: How YouTube Shook up TV and Created a New Generation of Stars, and TikTok Boom: China's Dynamite App and the Superpower Race for Social Media.

