Gabriela Ramos: “The question is not whether AI should be regulated, but how”

Gabriela Ramos arrived at UNESCO, the United Nations Educational, Scientific and Cultural Organization, in 2020 with a mission: to implement a kind of global declaration on artificial intelligence (AI). The document, eventually called the Recommendation on the Ethics of Artificial Intelligence, was adopted in 2021 and has been signed by 193 countries, though only 24 are implementing it. Non-binding in nature, it provides guidance for action on issues such as data management, mass surveillance techniques, the exploitation of cognitive biases, and the control of neurotechnology. It has won the approval of, among others, the European Commission, Japan, and companies such as Microsoft and Telefónica.

UNESCO’s initiative ran the risk of being irrelevant. Then ChatGPT came along. The concerns raised by the tool led to an open letter three weeks ago, signed by thousands of AI experts, calling for a halt to the development of this technology. That wake-up call from the technology’s own creators has renewed interest in the UN agency’s work. “There has been exponential growth in consultations from countries wanting to meet with us. We’ve had advanced conversations with 18 countries. On request, we’re developing a measure of the ethical impact of AI. We’re helping to make diagnoses, evaluate government teams, and think about what kind of body should oversee and develop these regulations,” explains Ramos, who was born in Michoacán 59 years ago, via videoconference.

The Mexican has had an extensive career as an international civil servant, primarily at the Organization for Economic Co-operation and Development (OECD) and the G20. Since 2020 she has been Deputy Director-General of UNESCO. “All the uncertainty surrounding ChatGPT, its impact and its development, is helping us raise awareness of a key issue. That is the only silver lining in all of this.”


Question. What do you think of ChatGPT and the boom in generative AI?

Answer. ChatGPT confirms what we have been saying: there is exponential growth in these technologies. Before, we were focused on understanding machine learning algorithms, on being rigorous in their definitions, and on ensuring the quality of the data they used. The large language models that chatbots rely on make it harder to understand how they work. I think that is the main issue. It is a pity, because we are looking at an amazing technology, but it suffers from the same problems as less massive AI: when it hits the market, it isn’t always safe, trustworthy, or transparent. All of these developments are taking place in a general regulatory vacuum. Europe is providing guidance. President Biden himself has now called a consultation on whether these developments should be vetted before they hit the market. China has established rules for those who wish to release products based on this technology.

Question. Is it necessary to regulate artificial intelligence?

Answer. We need a framework that allows us to assess, in advance, the ethical impact on freedoms, rights, and outcomes, all before a product even reaches the market. There must be procedures that ensure these developments are fully tested and that we at least understand their impact. But we continue to live in an upside-down world: first you release them, and then you ask what their consequences are. It seems absurd to me to argue that we don’t need rules. All markets are regulated. Imagine if pharmaceutical companies could market any drug without any kind of examination, or if you could open a restaurant and serve food of whatever quality you wanted. The question is not whether there will be regulation, but what kind.


Question. Thousands of experts signed a letter three weeks ago calling for a halt to generative AI research. Do you agree with it?

Answer. What that letter tells us is that we do not feel capable of dealing with these systems. I think the message makes sense. Everyone focused on the pause, but what it also demands is that there be no further developments before we have strong regulatory frameworks. UNESCO has been working on this for the past two years, since its 193 member states agreed on a recommendation on AI ethics. The relevant question here is whether governments have the powers, institutions, and laws to moderate and manage AI. The letter has meant that many more people are now informed about this topic. The fact that the very people who developed this technology say a pause is needed means they do not trust their own ability to control it. I don’t think a moratorium is a realistic option. What we can do is accelerate regulation. And there I agree: we need AI governance mechanisms.


Question. What does the UNESCO Recommendation on the Ethics of Artificial Intelligence propose?

Answer. We say that these technologies must support human rights, that they must contribute to the fight against climate change, and that they must deliver results that are fair and equitable. They must be transparent, and there must be accountability. 60% of these technologies are developed by US entities, and another 20% by Chinese companies. That concentration leads to a lack of diversity, to discriminatory results, and to biases. The entire business model must change.


Question. Is UNESCO’s approach for AI to be regulated by each country, or by a supranational body?

Answer. Our recommendation is not binding, but it has been signed by 193 countries. Ultimately, governments have to define their own regulatory frameworks. What we are doing now at UNESCO, building on the standards and best practices we have already defined, is thinking about the institutions and regulations that will help countries converge. The United States, which is considering returning to UNESCO, has said that our discussion about what kind of international rules should govern AI is important. When someone sees their basic rights attacked, when someone is discriminated against and denied a job because their data wasn’t in the databases, when facial recognition technology doesn’t detect you because you’re a person of color or a woman, then, no matter how many multilateral agreements exist, governments have a responsibility to act.

Gabriela Ramos, Deputy Director-General of UNESCO. Kristel Alex

Question. Is it realistic to push for international rules to govern a technology like artificial intelligence?

Answer. Millions of AI-powered decisions are made without any transparency. If you are discriminated against, you don’t even know whether it was a person or an algorithm. It is up to us to provide the context, and then countries will move forward in their decision-making. In twenty years of experience in multilateral organizations, I have learned that progress is made with concrete evidence: by anticipating the consequences of certain developments and by showing that countries with good regulations are not lagging behind in the technology race.

Question. The nuclear non-proliferation treaties of the Cold War made sense because the United States and the Soviet Union were both involved. What happens in the case of artificial intelligence if a key player is left out?

Answer. When I arrived at UNESCO three years ago, many people said to me: what good is an ethical framework for AI if the United States, the main developer, is not a member of UNESCO? The recommendation was signed by 193 countries, including China. The United States will take note, because what we are doing is not imposing a single model but raising awareness. We want to lay out a roadmap on how to understand AI, how to deal with these developments, how to prevent and identify negative impacts, and how to strengthen regulations and institutions.

Question. Geopolitics plays a major role in the development of artificial intelligence.

Answer. Yes, we are in the middle of a technology race. What kind of technology will be adopted is being decided right now. All countries are buying AI packages to manage education, health, or security. How do you make sure they understand what they are buying? Those who produce these technologies, and who have an interest in gaining more users, pay attention to what is happening with regulation. China is part of the UNESCO consensus. Will it comply with the agreement? Well, it signed it, didn’t it?

Question. In barely six months, ChatGPT has placed generative AI among the great topics of the day. How much time do we have to develop proper governance mechanisms for this technology?

Answer. We’re already on it. The EU has gone quite far with its directives and its risk-based approach. It is a different approach from UNESCO’s, but very complementary for analyzing which kinds of developments carry the greatest risks. I would say that if the EU directives were already fully in force, ChatGPT would not have entered the market. Why? Because it would have the characteristics of high-risk developments that require special attention from regulators. What happened with ChatGPT has given a sense of urgency to what we were already doing. The UNESCO recommendation was developed between 2020 and 2021; the European Union directives, between 2020 and 2022. We are doing well.

You can follow EL PAÍS Technology on Facebook and Twitter, or sign up here to receive our weekly newsletter.
