Europe wants to impose more obligations on generative AI such as ChatGPT | Technology



No Big Brother, no Minority Report. And, if possible, a world less Orwellian than the one we already have. The European Parliament on Thursday took a first step, a giant leap in fact, toward making Europe the first region in the world to regulate the largely unknown possibilities and risks of artificial intelligence (AI) and its more advanced versions, both those already known and those yet to be developed. Two years after the Commission presented its proposal to regulate AI, MEPs want generative models, at that time unknown to the general public and capable of producing both images (remember the famous fake picture of the Pope in a striking white puffer jacket) and text, most notably the popular ChatGPT, to be obliged to comply with additional transparency measures, above all making it clear that their content has been created with artificial intelligence.

Concerned about the potential negative effects this technology could have on citizens' rights and freedoms in the hands of unscrupulous companies or governments, MEPs have also expanded the restrictions contained in the European Commission's original proposal: they propose banning "intrusive and discriminatory uses of artificial intelligence", especially remote biometric identification systems used in public spaces in real time or after the fact, with very few exceptions on security grounds. One of the co-rapporteurs of the legislative proposal, the Italian MEP Brando Benifei, summed up the aim as giving Europe legislation with an "ethical and human-centric" approach in order to "win citizens' trust" in systems that affect, and will continue to affect, their lives, without at the same time hindering the advance of new technologies.

The text was approved on Thursday by a large majority – 84 votes in favour, 7 against and 12 abstentions – at a joint session of the Civil Liberties and Internal Market committees, and must now be ratified by the European Parliament's plenary, probably at the session in mid-June. From that moment, negotiations with the Council (that is, the member states), which already set out its position in December, and with the Commission – the so-called trilogues – can begin in order to agree on a final text. The commissioner behind the legislative proposal, Thierry Breton, has expressed the hope that the so-called Artificial Intelligence Act will enter into force across the 27 member states by 2025 at the latest.


“We are on the verge of achieving a legislative milestone in the digital landscape, not only for Europe but for the whole world,” Benifei said of the legislative text, which has attracted more than 3,000 amendments in the little over a year it has been worked on in the European Parliament. The Spanish chair of the Civil Liberties Committee, Juan Fernando López Aguilar, asserted that artificial intelligence is something that “will accompany us in some way for the rest of our lives.”

Organizations campaigning against AI’s capacity for mass biometric surveillance have particularly welcomed the amendments that were adopted. These expand the list of banned uses of AI systems already contained in the Commission’s original proposal, which vetoed so-called social scoring systems. Going forward, the ban would also cover remote biometric identification systems in public spaces, both in real time (such as facial recognition) and after the fact, with the sole exception of their use by the authorities to investigate serious crimes, and only with judicial authorization.

MEPs also add to the list of prohibitions biometric categorization systems that use “sensitive characteristics” such as gender, race, ethnicity, religion or political orientation, except for “therapeutic” uses. Predictive policing systems that assess the risk of a person or group of people committing an offence or crime (based on profiling those people, their location or their criminal record) would also be banned, as would emotion recognition systems, whether used by police and border authorities or in workplaces and schools. Finally, the indiscriminate scraping of biometric data from social networks or security camera footage to build or expand facial recognition databases would likewise be prohibited, one of the main demands of civil rights organizations worried about the Big Brother potential of these new technologies.


“From a biometric surveillance point of view, we are quite happy with the text,” says Ella Jakubowska, an analyst at the European digital rights organization EDRi. Her organization has been very active in Brussels in support of a restrictive stance on these tools.

“Facial recognition cameras in the streets or live analysis of surveillance footage will have to stop,” Jakubowska explains. Nor will it be allowed to search for people from photographs, closing the door to Clearview-type applications. “This is very good news,” the British analyst says.

Police will still be able to use these systems, but with new limitations: always retrospectively (not in real time), in the context of specific crimes, to search for specific people and only with a judicial warrant. “There is still a long way to go until the final text is approved, and it may change. But today we must celebrate this important victory for human rights,” says Jakubowska.

Regulation by use

To ensure that a regulation like this one, which governs technologies in constant transformation, does not become obsolete even before it enters into force, the AI regulation is based not on specific technologies but on their uses, creating risk categories that range from “unacceptable” (banned outright) to lower levels that are permitted, albeit subject to strict controls so that they do not affect citizens’ rights and freedoms. In their proposal, MEPs expand the classification of “high-risk” areas, which are permitted but subject to strict obligations before a product in this category can be placed on the market. They thus include AI systems that may harm people’s health and safety, as well as their fundamental rights or the environment. Also classified as “high risk” are AI systems that can “influence voters in political campaigns”, as well as the recommendation systems used by social media platforms.


So that the European rules can be applied to new technologies such as generative systems like ChatGPT, which would fall into the “high risk” category, MEPs also add new definitions. On the one hand, they include the concept of a “foundation model”, or large model, to cover generative artificial intelligence capable of creating new, original audio, text or image content by learning from other data. They also add a definition of “general-purpose AI systems”, described as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed”, a category that would include those foundation models.

The providers of these foundation models must, according to the MEPs’ proposal, guarantee “robust protection” of citizens’ fundamental rights, for which they will have to “assess and mitigate risks, comply with European design, information and environmental requirements” and register in the EU database.

Generative models such as ChatGPT would additionally have to meet “transparency” requirements: they would have to make it clear that content has been generated by an AI system. They must also design their models so as to prevent them from generating illegal content and, in the interests of copyright protection, publish the data used to train these systems, so that an author who believes their rights have been infringed by the use of their material to train the algorithms can turn to the legal channels that exist in the European Union to claim compensation.



