Deepfakes: Millions of Views Preying on Women and Threatening Security and Democracy | Technology

“This is what pain looks like.” That is how the creator QTCinderella begins a video denouncing that she is the victim of a hyper-realistic, AI-generated pornographic sequence posted online. The images are fake; the damage they cause is not. The 29-year-old internet celebrity joins a long list of people affected by this kind of creation, known as a deepfake. AI applications make deepfakes ever easier to produce and ever harder to identify as fake, while regulation and oversight lag behind the development of the technology. The amount of such content online doubles every six months and accumulates more than 134 million views. More than 90% of cases are non-consensual pornography. An investigation by Northwestern University and the Brookings Institution (both in the US) warns of their potential security risks. Other studies warn of the dangers of interference and manipulation in democratic political processes.

Sonia Velázquez, a university student from Seville, was curious about how deepfake videos work. She asked her boyfriend, a graphic design student, to show her, and he agreed to try it with her own image. “It started as a game, but it isn’t one. I saw myself as vulnerable. I knew it wasn’t me, but I imagined all my friends sharing that video, joking about it, and who knows what else. It felt dirty, and even though we deleted it immediately, I can’t stop thinking about how easy it is and the damage it can cause,” she says.

“Seeing yourself naked against your will, spread across the internet, feels like a violation. It shouldn’t be part of my job to have to pay money to get these things taken down, or to be harassed. The constant exploitation and objectification of women exhausts me,” said QTCinderella after the deepfake video that victimized her was circulated. Before her, the British poet Helen Mort was targeted with similar images built from photos taken from her social media accounts. “It makes you feel powerless, like you’re being punished for being a woman with a public voice,” she says. “Anyone, from any walk of life, can be a target for this, and people don’t seem to care,” lamented another victim of the hyper-realistic videos, who asked to remain anonymous to avoid web searches, although she believes she managed to delete every trace.


Hoaxes are as old as humankind. Fake photos are not new either, but they exploded at the end of the last century with easy, popular image-editing tools. Video manipulation is more recent: the first public complaint dates from the end of 2017, against a Reddit user who used it to undress celebrities. Since then the phenomenon has not stopped growing, and it has expanded into ultra-realistic audio.

“Deepfake techniques present significant ethical challenges. They are developing rapidly and becoming cheaper and more accessible by the day. The ability to produce realistic-looking and -sounding video or audio files of people doing or saying things they never did or said offers unprecedented opportunities for deception. Politicians, citizens, institutions, and companies can no longer ignore the need to establish a set of strict rules to limit them,” sums up Lorenzo Dami, professor at the University of Florence and author of a published study on the subject.

Non-consensual pornography

Deepfake videos primarily affect women. According to Sensity AI, a research company that tracks these videos online, 90% to 95% of them are non-consensual pornography, and nine out of ten feature women. “This is a problem of sexual violence,” Adam Dodge, founder of EndTAB, a nonprofit that educates about technology-enabled abuse, told MIT Technology Review.

The European Institute for Gender Equality takes the same view, including these creations in its report on online violence against women as a form of sexual assault.

Regulation lags far behind the technological advances that make deepfakes possible. The European Commission’s regulation on artificial intelligence (the AI Act) is still a proposal, and in Spain the creation of a government agency to oversee artificial intelligence is still pending. “Once a technology is developed, it cannot be stopped. We can regulate it and mitigate its effects, but we are late,” warns Felipe Gómez-Pallete, president of Calidad y Cultura Democráticas.


Some companies have moved ahead to avoid becoming part of the criminal business. Dall-e, a digital image-creation application, warns that it has “limited the ability to generate violent, hateful, or adult images” and has developed techniques to “prevent photorealistic generation of the faces of real individuals, including public figures.” Other popular AI applications for audiovisual creation or video synthesis do the same, and a group of ten companies has signed a catalog of guidelines on how to build, create, and share AI-generated content responsibly. But many others circulate as simple mobile apps or roam freely across the web, including some expelled from their original servers, such as the one popularized under the slogan “Undress your friend,” which resort to other open-source platforms or messaging services.

The problem is complex because freedom of expression and creation goes hand in hand with the protection of privacy and moral integrity. “The law does not regulate technology, but rather what can be done with it,” explains Borja Adsuara, university professor and expert in digital law, privacy, and data protection. “Only when a technology admits nothing but bad uses can it be banned. But the only limit on freedom of expression and information is the law. A technology should not be banned because it can be dangerous; what must be prosecuted is its abuse,” he adds.

A frame from a QTCinderella video in which the creator denounces being a victim of a fake porn video. (QTCinderella on Twitch.tv)

The virtual resurrection of Lola Flores

In this sense, the Italian professor Lorenzo Dami distinguishes between positive and negative uses of the technology. Among the former he highlights audiovisual production, better human-machine interaction, creative expression (including satire), medical applications, culture, and education. An example is the virtual resurrection of Lola Flores for an advertising campaign, carried out with the consent of her descendants.

On the other side of the scale are hyper-realistic creations used for sexual extortion, insults, revenge porn, bullying, harassment, fraud, defamation and distortion of reality, reputational damage, and attacks of an economic nature (manipulating markets), a judicial nature (falsifying evidence), or against democracy and national security.

On this last point, Venkatramanan Siva Subrahmanian, professor of cybersecurity and author of Deepfakes and International Conflicts, warns: “The ease with which they can be developed, as well as their rapid spread, point toward a world in which all state and non-state actors will have the ability to deploy ultra-realistic audiovisual creations in their security and intelligence operations.”

In this sense, Adsuara argues that more “dangerous” than fake pornography, despite the latter’s far greater prevalence, is the potential damage to democratic systems: there may be no time to deny a fake or, even if it is denied, its virality cannot be stopped. “The problem with deepfakes, as with hoaxes, is not only that they are well made enough to seem credible, but that people want to believe them because they match their ideological bias, and they share them without hesitation, because they like them and want them to be true.”


Current regulation, according to the lawyer, focuses on the result of the action and the intent of the offender. “If the scene never existed, because it is fake, you are not revealing a secret. It should be treated as a case of libel, or as a crime against moral integrity, when it is published with the intent of publicly humiliating another person.”

“One solution could be to apply the legal figure already used in cases involving minors, that of pseudo child pornography,” adds Adsuara. “That would allow crimes against privacy to cover not only real videos but also realistic ones that contain intimate images resembling those of a person.”

Technological detection systems can also be applied, although this is becoming harder because AI is also developing techniques to evade them.

Another approach is to require that any content of this kind be explicitly identified as fake. “That is included in the charter of digital rights, but the AI regulation has not yet been approved,” the lawyer explains.

This kind of labeling is already common in Spain in fake pornography, precisely to avoid legal problems. “But these women,” warns Adsuara, “still have to put up with their image being placed in a pornographic context, which interferes with their right to their own image, even if the video is not real, because it is realistic.”

Despite the obvious harm of these videos, complaints in Spain are few compared with those registered in the United States, the United Kingdom, or South Korea, even though their prevalence is relatively similar. The Spanish digital-law expert believes this is because their falseness is assumed to be obvious, and because a complaint sometimes only amplifies the attacker, who is seeking exactly that. “In addition, this society is so sick that we do not consider it wrong and, instead of defending the victim, she is humiliated.”

Josep Coll, director of RepScan, a company dedicated to removing harmful information from the internet, also confirms the scarcity of complaints from public figures affected by fake videos. He points out, however, that his firm handles many extortion attempts involving them. He recounts the case of a businessman targeted with a fake video that included photographs of a country he had recently visited, in order to sow suspicion among those in his circle who knew the trip had taken place. Revenge is another common motive: “We get cases like these every day,” he comments. And he adds: “In extortion cases they are looking for a hot-headed reaction, so that the victim pays to have it withdrawn, even knowing it is not true.”

