The race to make money with AI, unleashed by big tech, is now being followed by another race to regulate these tools, driven above all by the opacity surrounding the consequences of their use and the origin of their data. Two weeks ago, Italy decided to ban ChatGPT for breaching data protection regulations and lacking filters for minors. Today, the governments of the two superpowers, the United States and China, announced steps toward regulating these artificial intelligence programs. The Joe Biden administration opened a 60-day period to collect ideas on how to legislate against the unwanted effects of these programs, which pose risks in areas as diverse as privacy, misinformation, and the labor market. For its part, Beijing has announced a regulatory proposal that requires security and legality from providers of such apps.
The US Department of Commerce has issued a formal request for public comment on accountability measures, as reported by The Wall Street Journal, including whether new, potentially dangerous AI models should undergo a certification process prior to launch. “It’s amazing to see what these tools can do even at their initial stage,” Alan Davidson, director of the National Telecommunications and Information Administration, told the US newspaper. “We know we need to put some safeguards in place to make sure it’s used responsibly,” adds Davidson, who leads the initiative.
China’s Cyberspace Administration on Tuesday unveiled draft measures to regulate generative artificial intelligence services and said it wants companies to provide security assessments to authorities before releasing their products to the public, according to Reuters. The rules drafted by this regulator indicate that service providers will be responsible for the legality of the data used to train their generative AI products and that measures must be taken to avoid discrimination when designing algorithms and training that data.
In addition, this body states that China supports innovation in this technology, but the resulting content must be in line with the country’s core socialist values. The announcement comes after a slew of Chinese tech giants, including Baidu, SenseTime, and Alibaba, showed off their new applications, which range from chatbots to image generators. They thus join companies like Microsoft and Google, which already want to integrate these tools into their services.
Doubts in Europe
The announcements come as several European governments consider how to mitigate the risks of this emerging technology, which has exploded among consumers in recent months following the launch of ChatGPT, from OpenAI, a company initially backed by Elon Musk and now supported by a $10 billion investment from Microsoft. Brussels wants AI-generated content to carry a specific warning, as declared by the European Commissioner for the Internal Market, Thierry Breton: “In everything generated by AI, whether it is text — everyone now knows ChatGPT — or images, there will be an obligation to disclose that it was created by artificial intelligence.”
After the block announced by Italy’s personal data protection authority, France, Ireland and Germany acknowledged contacts to analyze whether they would follow suit. Privacy regulators in France and Ireland have contacted their counterparts in Italy for more information on the reasons for the ban, while Germany’s data protection commissioner confirmed to the newspaper Handelsblatt that Germany could follow Italy’s lead and block ChatGPT due to data security risks.
Now, the National Commission on Informatics and Liberty (France’s privacy watchdog) has announced that it is investigating several complaints about ChatGPT. Meanwhile, Spain’s data protection agency, like its Italian counterpart, has asked for the potential regulation of generative AI systems to be discussed at Thursday’s meeting of the European Data Protection Board, the body in which the data protection authorities of the member states coordinate.
The debate about the dangerous capabilities of these tools goes beyond the legislative sphere, as demonstrated weeks ago by more than a thousand specialists who demanded that the development of these programs be paused for six months. Their letter warned that “AI labs have entered an uncontrolled race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control.”