In just six months, Google searches for “artificial intelligence” have increased fivefold. ChatGPT, launched on November 30, has tens of millions of users. Sam Altman, CEO of OpenAI, the company behind ChatGPT, has already gone before Congress to explain the impact of AI. It took Mark Zuckerberg 14 years to go to Washington to talk about Facebook’s role in society. And Altman did not mince words: “My worst fear is that this technology goes wrong. And if it goes wrong, it can go quite wrong.”
Grave, explosive pronouncements about the rise of artificial intelligence have already spawned their own memes. They have also popularized the term “criti-hype,” coined in 2021 to describe criticism of a new technology that ends up adding to its hype. A leading example of criti-hype was Cambridge Analytica, whose critics attributed to Facebook the power to pick presidents.
The culmination of these statements was the departure of Geoffrey Hinton, the godfather of artificial intelligence, from Google so that he could speak freely about its dangers: “From what we know so far about the functioning of the human brain, our learning process is probably less efficient than that of computers,” he told EL PAÍS. The UK government’s outgoing chief scientific adviser has just said that artificial intelligence could spark a new “industrial revolution.” There are already groups pushing for regulation so that the commercialization of this technology does not run out of control. In Altman’s case, some analysts point out that his words of “concern” may also hinder the entry of new competitors into a market where OpenAI already holds a winning position.
This short list is only a sample of the prophecies and fears surrounding AI. But the effect may also turn out to be more bearable: what if everything ends up slower, less worrisome or more digestible than it now seems? It is a valid option, though at the moment a less explored one. It is hard to deny the impact in many areas, but changing the world is an arduous process. Previous breakthroughs of similar magnitude have profoundly changed our way of life, yet humans absorbed them without cataclysms. Could artificial intelligence end up the same way?
“At the very least, it is a major structural change in what software can do,” says Benedict Evans, an independent analyst and former partner at Andreessen Horowitz, one of Silicon Valley’s leading venture capital firms. “It probably allows a lot of new things to be achieved. That makes people compare it to the iPhone. It could be more than that: it could be more comparable to the computer itself, or to the graphical user interface,” known as the GUI, which allows interaction with the computer through graphical elements on the screen. It is an unusual comparison, but it provides more context.
The impact on work carries clear weight in this new technology. “My concern is not that artificial intelligence will replace humans,” says Meredith Whittaker, president of the messaging app Signal. “But I am deeply concerned that companies will use it to degrade and diminish the position of their workers today. The danger is not that AI will do workers’ jobs, but that employers will use the introduction of AI to make those jobs worse, exacerbating inequality.”
It still has to improve, but by how much?
Whether its power proves greater or lesser, its effect on work will be noticeable. But something about these AI systems remains unresolved: they still make plenty of mistakes, the so-called hallucinations. It is one of the hottest topics. José Hernández-Orallo, professor at the Polytechnic University of Valencia and researcher at the Leverhulme Centre for the Future of Intelligence in Cambridge (UK), has been studying it for years: “At the moment they are at the level of the know-it-all brother-in-law; in the future they will be at the level of a good expert, perhaps more in some subjects than in others. What causes us anxiety is that we do not know in which subjects they can be relied upon. It is impossible to make a system that never fails, because we will always ask ever more complex questions. The systems are capable of the best and the worst, and they are unpredictable,” he explains.
If they are not mature, why have they had such a sudden and formidable impact in recent months? There are at least two reasons, says Hernández-Orallo. First, commercial pressure: “The bigger problem arises because there is commercial, informational and social pressure for these systems to always answer something, even when they are unsure. If higher thresholds were set, these systems would fail less often, but they would almost always answer ‘I don’t know.’ There are thousands of ways to summarize a text and do it well, and the probability of each one of them is very low,” he says.
Second, human cognition: “We have the impression that an AI system has to be 100% correct, like a combination of a calculator and an encyclopedia,” says Hernández-Orallo. But that is not the case. “For language models, creating plausible but false text is easier. The same thing happens with audio, video and code. Humans do it all the time, too. It is especially evident in children, who respond with statements that sound good but may not make sense, and we tell them, ‘That’s funny.’ But we don’t go to the pediatrician because ‘my son hallucinates a lot.’” Behind both cases, children and certain types of artificial intelligence, he explains, is the objective function of imitating as much as possible.
And what if that makes us distrust it?
The great impact on employment will fade wherever there are things that AI does not quite do well or, just as importantly, where we do not know whether it does them well. When we ask it about a book we have not read, it will be difficult to know whether the answer is fully reliable. It may be. Or not. In some cases that distrust will be acceptable; in others it will be a serious problem. We may end up accepting a share of the mistakes, as we do with humans. But we are not there yet.
This limitation of its impact does not dispel the great hypothetical fear: so-called artificial general intelligence or, more precisely, AI systems more advanced than those we have today. In the collective imagination it has become a concept akin to “a machine that takes control of the world’s software and destroys humans.” “People use this concept for everything, like when you tell children that the bogeyman is coming,” says Hernández-Orallo. “The question is how much capacity a general-purpose system like GPT-4 has, and whether it needs to be more powerful than a human, than all humans, than the average, smarter, and for what tasks. It is all very poorly defined, and impossible to verify at this point.”
Although this fear of the bogeyman is difficult to pin down, it can be a useful concept for thinking about the future from today’s vantage point: “Because we have created machines that can replace us, we have come to fear them,” says Matt Beane, a professor at the University of California, Santa Barbara. “We have strong evidence that we need both criticism and boldness, as well as imagination and assertiveness, when it comes to thinking about new technologies.”
This fear, moreover, has recurred in the past. “We seem to fall into a kind of trance in relation to these [AI] systems,” says Whittaker. “Reflexively, we believe they are human, and we begin to assume that they are listening to us. If we look at the history of the systems that preceded ChatGPT, it is striking that, although those systems were much less sophisticated, the reaction was often the same. People locked themselves into a kind of surrogate intimacy with these systems when they used them, and, as now, experts predicted that these systems would soon (always ‘soon,’ never ‘now’) be able to completely replace humans.”