We need to harmonize the regulation of artificial intelligence

Alphabet and Microsoft are two of the companies betting most heavily on artificial intelligence. Dado Ruvić (Reuters)

We are at a historic moment: with the advent of artificial intelligence, there is a general consensus on the need to regulate it. But how do we do it right? Over the past decade, the debate over the governance of this technology has gained momentum, and policy proposals have multiplied. A recent analysis from Stanford University put numbers to the regulatory fever: from 2016 to 2022, the world went from one law on the matter to 37.

The presence of AI in legislative proceedings across 81 countries has increased nearly sevenfold. Spain leads the list with 273 mentions, followed by Canada (211), the United Kingdom (146) and the United States (138). And one of the initiatives with the greatest potential impact is yet to come: the European AI Act. What is clear is that there is growing interest in the social impact of these new applications and a public demand for limits to guide their development. The race for regulation is on, and the world is looking to the superpowers for guidance, because fragmented rules governing the most relevant technology of this century could have unintended consequences for almost everything, including international trade and the competitiveness of markets.

With the advent of ChatGPT and the first steps toward artificial general intelligence (AGI), the technical discussion has focused on how to ensure that machines do not gain control. In the jargon of the field, the alignment problem refers to the gap between what we want systems to do and what they actually do. This concern is what has led scientists such as Geoffrey Hinton to warn that if there is any way to control AI, we should figure it out before it is too late, and OpenAI researchers to share their worries about technical development that is not compatible with human interests and moral principles.


Some of those developing this technology are asking for it to be paused, though curiously only for six months, while other prominent researchers such as Timnit Gebru contend that what is needed is regulation that promotes transparency rather than a moratorium. Google's CEO and Bill Gates consider a pause an impractical proposal and prefer to tackle head-on the real problems this progress presents. They believe we are facing the "most important advance" since the creation of computers and mobile phones.

Although there is no universal definition, artificial general intelligence is understood as a computational system capable of performing any human task and generating new knowledge; it might be more apt to call it GodAI. Nearly 40% of experts believe it could lead to a nuclear-scale catastrophe, which is why even the most liberal business leaders are calling for regulation. However, rather than obsessing over rules to contain something that has already spilled over, we should open up a global conversation, not only between governments, that allows us to review the incentives shaping technological development itself and agree on minimum guidelines for the years to come. A tangle of regulations that is difficult to implement and comply with, combined with complete deglobalization, does not appear to be the best solution.

To get an idea of the diversity of regulators' reactions to an application like ChatGPT, we need only look at the moves of recent weeks. China has introduced a set of rules for services based on generative AI. Beijing's intention, as reported by Reuters, is for companies to conduct security assessments before bringing their products to market. Its guidelines place responsibility on service providers, who must ensure the legitimacy of the data used to train their technology, as well as implement safeguards against discrimination both when building algorithms and when using the information collected.


Italy decided to ban ChatGPT, a decision later reversed after the company made privacy adjustments. Along these lines, the European Data Protection Board (EDPB) has set up a working group of all national data protection authorities and announced new investigations into the most talked-about company of the moment: OpenAI. But is there really only one provider of these remarkable language models? In contrast, countries such as the United Kingdom and India have chosen to avoid strict regulation at this stage of discovery. They appear to be betting on letting the technology develop freely in order to unleash a technological offering that can boost their economies.

Given the diversity of strategies across countries, each with its own national and corporate interests, the idea of a unified regulatory framework for AI seems more like a dream than a feasible option. What alternatives remain? One frequent proposal in international forums is to create a neutral international agency for artificial intelligence (IAI), with the guidance and participation of governments, large technology companies, non-profit organizations, academics and civil society. While international governance builds the dialogue needed to reach consensus on how to move forward at this technological inflection point, it may make sense to update existing regulations and, in new regulatory developments, to collaborate across countries so that approaches and requirements align. That is the only way to make compliance feasible.

You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.
