
Why did we believe the image of the Pope in the white coat? | Technology


Image generated by generative artificial intelligence of Pope Francis. It is not a real picture. Pablo Xavier via Midjourney

“You can no longer believe in anything that comes from a digital device. Distrust everything,” says journalist Robert Scoble in a tweet accompanying an alleged photo of Elon Musk holding the hand of US politician Alexandria Ocasio-Cortez, better known as AOC. It is one of many warning messages seen in recent days, which have been full of fakes on the Internet. Images of Donald Trump’s arrest and supposed escape from prison, and of Pope Francis in a stylish Balenciaga puffer coat: these fake, hyperreal images circulating on social networks over the past week show that information is entering a new era.

Generative artificial intelligence (AI) is already developed enough to make us fall, at least visually, for deceptions that require neither sophisticated techniques nor users who are experts in digital manipulation. From now on, avoiding being fooled by what we see on screens will be increasingly difficult, according to experts. Blockchain is presented as a complex, long-term solution, but for the moment the only remedy is the time-honored one: check the sources, pay attention to the details, and be suspicious of everything.

This isn’t the first time images generated by the Midjourney AI tool have gone viral, but none have spread as widely as the Pope in Balenciaga. Firstly, because it looks far more plausible than other supposed photos of people with an artificial appearance, as if taken from a video game or heavily filtered. Also because the context surrounding the Pope suggests he might plausibly dress that way.


However, when you zoom in, you can notice the size of the ear, the severed hand that does not quite grasp the coffee cup, the distortion of the glasses, or the crucifix without a carved figure of Christ and missing part of the chain that holds it. These details reveal that it is not a real photograph but a telltale flaw of the AI: a tool that knows the surface of reality well, but not how physical objects interact or all the properties of the human body. At least for now.

However, these are details that go unnoticed, especially if you scroll quickly on a mobile screen. In some photos of Trump’s fake arrest, which circulated on social networks a few days ago, the evidence of forgery was clearer. The first clue, again, is context: the arrest of the former US president would have been covered by the mainstream media, and it wasn’t. Second, as some artists have noted, generative tools struggle to render details of the human body, especially the hands. In some photos of Trump, body proportions appear distorted or even melted, with a blurry look. In other cases, garbled text can be seen.

But these tools keep improving, and it is only a matter of time before they stop failing at these details and start creating fake images that look all too real, according to professionals. The new version of Midjourney is already capable of generating realistic human hands, removing what until now was the easiest way to spot an artificial image.


It is the speed at which these improvements are happening that is of concern. Elena Verdú, a member of the Artificial Intelligence and Robotics Research Group at the International University of La Rioja, explained to Maldita.es that a single month is enough for some of these tips for identifying fake images to become outdated. There is collective concern about the power these tools hold over society. So much so that AI experts demanded in an open letter on Wednesday a six-month pause in the “out-of-control race” of systems like ChatGPT, which also rely on generative AI.

In his tweet, journalist Robert Scoble lists other ways to recognize deepfakes. The first is to check multiple reputable sources and verify their credibility. Also, assess the context. “Use critical thinking: analyze the information you receive and consider the context, consistency, and logic behind it,” he suggests.

When content goes viral, it is also helpful to check what can be found about it online. Google, for example, provides a reverse image search tool where you can upload an image and check where it has actually been shared and what people are saying about it. If a photo supposedly taken by a photojournalist is first published on the Internet by an unknown account, that is a strong reason for suspicion.

On Twitter, there is also an option to flag content as false, although many people must do so before a warning appears on the post. According to Time magazine, there are programs on the market that claim to be able to detect deepfakes. However, few or no reliable free tools are available to the end user. One detector, hosted on the Hugging Face AI platform, was able to say “with 69% certainty” that the image of the Pope was generated by AI. But when presented with the alleged image of Musk and AOC, the tool wrongly concluded, with 54% certainty, that the image was real.


Other experts point to how blockchain could help combat the problem. Best known as the technology behind cryptocurrencies like Bitcoin, it was cited by the World Economic Forum for its ability to provide authentication and a clear chain of custody that “make it effective” for tracking and verifying all kinds of content, not just financial transactions. The key is that the technology has a mechanism that prevents a message from being altered, along with its time of publication and its origin. But it is not a solution for the present: from theory to practice, there is still a long way to go.
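The tamper-evidence mechanism described above can be illustrated with a minimal hash chain: each record is hashed together with the previous record's hash, so altering any earlier entry invalidates everything that follows. This is an illustrative sketch only, not how any production blockchain is implemented, and the record fields (image name, source, date) are invented for the example:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def build_chain(records):
    """Return a list of (record, hash) pairs forming a hash chain."""
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append((rec, h))
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any alteration breaks the chain."""
    prev = GENESIS
    for rec, h in chain:
        if block_hash(rec, prev) != h:
            return False
        prev = h
    return True

# Hypothetical publication records for the example
records = [
    {"image": "pope_coat.jpg", "source": "unknown_account", "published": "2023-03-25"},
    {"image": "musk_aoc.jpg", "source": "unknown_account", "published": "2023-03-27"},
]
chain = build_chain(records)
print(verify_chain(chain))               # True: chain intact
chain[0][0]["published"] = "2023-01-01"  # tamper with an earlier record
print(verify_chain(chain))               # False: tampering detected
```

Because each hash depends on all previous ones, retroactively changing an image's claimed origin or publication date is immediately detectable, which is the property the World Economic Forum refers to as a verifiable chain of custody.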

The article notes that blockchain also has its limitations. While it can verify that a document exists, it cannot prove intellectual property, for example. Moreover, to be truly effective for the end user, it would need to be built into the chips that power smartphones and computers, something that also depends on international communities, governments, companies, and civil society coming together to form a governance model for digital content consumption.

You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive the weekly newsletter.


