Autonomous and nuclear weapons, robotics and artificial intelligence: how to regulate technologies with unforeseen consequences for humanity


Two visitors to the Tekniska Museum in Stockholm view “The Impossible Statue”, a sculpture created with artificial intelligence, on June 8. Jonathan Nakstrand (AFP)

This is not the first time that humanity has faced a technological development with unforeseen consequences for its existence. The writer Isaac Asimov had already set out, in “Runaround”, a story published in 1942, three laws to protect people from robots, and his rules are still used as a reference. The International Atomic Energy Agency (IAEA) was created in 1957 “in response to the deep fears and expectations inspired by the discoveries and varied uses of nuclear technology,” according to the organization itself. International humanitarian law (commonly known as the law of war) has spent years seeking to effectively regulate lethal autonomous weapon systems, which can attack without human intervention. Europe is now beginning to tackle the world’s first regulation of artificial intelligence (AI), a technological development capable of accelerating progress in essential areas such as health or energy, but also of threatening democracies, increasing discrimination or breaking down all boundaries of privacy. “Sowing baseless panic does not help; on the contrary. Artificial intelligence is going to keep operating, and we must improve it and prevent the harm it can cause,” argues Cecilia Danesi, a lawyer and science communicator specializing in artificial intelligence and digital rights, a professor at several international universities and the author of The Empire of Algorithms (just published by Galerna).

The first thing to understand is what an algorithm, the basis of artificial intelligence, actually is. Danesi, a researcher at the Institute for European Studies and Human Rights, describes it in her book, an essential compendium for understanding the scenario facing humanity, as “a systematic set of steps that can be used to make calculations, solve problems and arrive at decisions.” The algorithm, then, is not the calculation but the method. And that method can just as well identify cancer in images with precision, discover a new molecule with pharmacological uses, make an industrial process more efficient or develop a new treatment as, conversely, generate discrimination, disinformation, a humiliating image or an unfair decision.
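As a minimal illustration of that distinction between method and calculation, here is a classic algorithm, Euclid's method for the greatest common divisor, sketched in Python. The example is ours, not taken from Danesi's book: the fixed set of steps is the algorithm; each run on a particular pair of numbers is just one calculation.

```python
# Euclid's algorithm: a systematic set of steps, not any single calculation.
# Illustrative example; it does not appear in Danesi's book.

def gcd(a: int, b: int) -> int:
    """Greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b  # repeat the same step until the remainder is 0
    return a

print(gcd(48, 18))  # 6: the decision the method arrives at
```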


Sam Altman, the head of OpenAI; Geoffrey Hinton, a Turing Award winner; the AI researcher Yoshua Bengio; and Elon Musk, among others, have called for regulation and urgent action to address the “existential risks” that AI poses to humanity. These include the amplification of disinformation (such as the flood of false and malicious content on social platforms), biases that reinforce inequality (such as the Chinese social credit system or systems that automatically single people out as potential threats because of their race) and the breaking of all boundaries of privacy in order to collect the data that feeds the algorithms, data that then remains hidden.

The European Union has begun negotiating what is expected, if the deadlines are met, to be the world’s first artificial intelligence law. It could be approved during the Spanish presidency of the European Union, and it aims to prohibit uses deemed an “unacceptable risk” (indiscriminate facial recognition or the manipulation of people’s behaviour), to regulate the technology’s use in sectors such as health and education, to establish sanctions and to prevent the sale of systems that do not comply with the legislation.

UNESCO has developed an ethical framework, but it is voluntary, and that very nature is its main weakness. China and Russia, two countries that use this technology for mass surveillance of their populations, have signed up to its principles.

“There are fundamental rights involved, and it is an issue that we must occupy and preoccupy ourselves with, yes, but in a balanced way,” Danesi pleads. It is a standard similar to the one put forward by Juhan Lepassaar, executive director of the European Union Agency for Cybersecurity (ENISA): “If we want to secure AI systems and also ensure privacy, we need to look at how these systems work. ENISA is studying the technical complexity of AI to better mitigate the cybersecurity risks. We also need to find the right balance between security and system performance.”


One of the risks identified so far is the replacement of people by AI-powered machines. On this point, the researcher Cecilia Danesi asserts: “Machines will replace us, and they are already doing so. In many tasks they replace us, enhance our work or complement us. The question is what we want to replace, where, and what requirements those machines must meet in order to make certain decisions. First we have to identify a problem or a need that justifies their use, or not.”

In robotics, Asimov had already anticipated this problem and established three principles: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
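Read as a specification, the three laws form a strict priority ordering. The sketch below, our own illustration in Python with entirely hypothetical predicates, encodes that ordering; as Danesi notes next, the hard part in practice is deciding when a predicate like “harms a human” is actually true.

```python
from dataclasses import dataclass

# A toy encoding of Asimov's three laws as a strict priority ordering.
# Every field is a hypothetical abstraction: real systems cannot
# evaluate a predicate like "harms a human" so cleanly.

@dataclass
class Action:
    harms_human: bool      # would the action (or the inaction it implies) harm someone?
    obeys_order: bool      # does it follow an order given by a human?
    preserves_robot: bool  # does it keep the robot intact?

def choose(candidates: list[Action]) -> Action:
    """Pick an action by the laws' priority: First, then Second, then Third."""
    safe = [a for a in candidates if not a.harms_human]               # First Law screens everything
    obedient = [a for a in safe if a.obeys_order] or safe             # Second Law, unless it breaks the First
    prudent = [a for a in obedient if a.preserves_robot] or obedient  # Third Law, lowest priority
    return prudent[0]  # assumes at least one action satisfies the First Law

# Disobeying an order wins over obeying one that would harm a person:
print(choose([Action(True, True, True), Action(False, False, True)]))
```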

Permanent and preventive supervision

“Sounds good. Done: AI can never harm a human being. Wonderful. The problem is that in practice it is not that clear-cut,” Danesi explains. The researcher recalls “a case in which two machines were programmed to improve their negotiating, and the system realized that the best way was to create another, more efficient language. The people who had designed the program could not understand that language and disconnected the machines. The system was operating within its parameters, but AI can go beyond what was imagined.” In this case, the machines did not harm their programmers, but they did exclude them from the solution and its consequences.

The key, for Danesi, is “permanent supervision and auditing of these high-risk systems, the ones that can have a significant impact on human rights or on security issues. They must be assessed and audited to verify that they do not violate rights and that they are not biased. And this must be done on an ongoing basis, because systems can acquire biases as they continue to learn. Preventive measures must be taken to avert harm and to create ethical systems that respect human rights.”
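As a minimal sketch of what one step of such an audit could look like, the Python snippet below measures a common bias indicator, the demographic parity gap, for a binary classifier. The data, the names and the 0.1 tolerance are hypothetical illustrations, not requirements taken from any regulation or from Danesi's proposal.

```python
# A minimal, illustrative bias audit for a binary classifier.
# `predictions` are the model's 0/1 outputs and `groups` gives each
# person's protected group. All names and the 0.1 threshold are
# hypothetical, chosen only for illustration.

def positive_rate(predictions, groups, group):
    """Share of people in `group` that the model classifies as positive."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(predictions, groups)
if gap > 0.1:  # hypothetical tolerance set by the auditor
    print(f"Audit flag: demographic parity gap of {gap:.2f}")
```

An ongoing audit, in Danesi's sense, would rerun checks like this every time the system learns from new data, since a gap that was acceptable at deployment can widen later.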


Another major risk of the uncontrolled use of AI is its military application, an aspect that the proposed EU regulation excludes in its first draft. “It is one of the most dangerous uses of artificial intelligence. Laws often prohibit something that, later, in practice, continues to operate, and that is where it can cause the greatest harm to people,” the researcher laments.

“Should we fear machines? The answer is no. What we must fear, where appropriate, is people and the use they make of technology,” Danesi argues in her book The Empire of Algorithms.

Respecting citizens’ data

Manuel R. Torres, professor of political science at Pablo de Olavide University and a member of the advisory board of the Elcano Royal Institute, puts it in similar terms: “The problem is the proliferation of this technology and how to prevent it from falling into the wrong hands. What is released into the world is knowledge, and anyone can take advantage of it.”

Torres adds a further problem to this technological landscape and to the proposed European regulation, which he defends for its regulatory force: “The conflict lies in how this technology is developed in other parts of the world that have no scruples or restrictions of any kind regarding respect for the privacy of the citizens who feed all of this with their data.”

The political scientist offers China as an example: “Not only is it in that technological race, but it has no problem making massive use of the data its citizens leave behind in order to feed and improve those systems. If that happens globally, it would also be dangerous.”

Torres concludes: “We find ourselves in a field where there are few precedents to draw on to know how we need to address the problem and where, in addition, there is a problem of understanding the implications of this technology. Many lawmakers are not aware of these developments.”

You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.
