Emily M. Bender: “Chatbots shouldn’t speak in the first person. It’s a problem that they sound human”


Emily M. Bender is Professor of Computational Linguistics at the University of Washington. Photo: Corinth Thrash

Professor Emily M. Bender is on a mission: she wants us to know that the marvel that is ChatGPT is nothing more than a parrot. Not just any parrot, but a “stochastic parrot.” Stochastic means that it chooses combinations of words based on probability calculations, without understanding anything it says. It is hard to chat with ChatGPT or Bing and accept that it is only a parrot, a stochastic one. But for Bender, a great deal hinges on this awareness: “We are at a fragile moment,” she says, and warns: “We are interacting with a new technology, and the whole world needs to become literate in it quickly, to learn to handle it well.” Her message, in short, is: please, it’s a machine that does one thing very well, but nothing more.
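To make the “stochastic” part concrete, here is a minimal, illustrative sketch in Python of what choosing words by probability calculation means. The tiny vocabulary and the probabilities are invented for this example; a real language model estimates a distribution over tens of thousands of tokens from its training data, but the sampling step is the same in spirit: the program picks words, it does not understand them.

    import random

    # Toy "stochastic parrot": candidate next words and their probabilities.
    # These numbers are invented for the illustration; a real model derives
    # them from statistical patterns in billions of training texts.
    next_word_probs = {
        "parrot": 0.5,
        "machine": 0.3,
        "person": 0.2,
    }

    # Sample one word in proportion to its probability. Nothing here
    # "understands" the words; the program only picks them by chance.
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    chosen = random.choices(words, weights=weights, k=1)[0]
    print("ChatGPT is a", chosen)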

Bender, a computational linguist at the University of Washington, has seen this coming since 2021, when she co-authored a now-famous academic paper on the “dangers of stochastic parrots”: “We didn’t say this would happen. We said this could happen and that we should try to avoid it. We didn’t. It wasn’t a prediction. It was a warning. There we talked a little about how dangerous it is to make something seem human. It’s better not to imitate human behavior, because that can lead to problems,” says Bender, 49, via videoconference with EL PAÍS. “The more aware people become, the easier it is to see large language models as simple text-synthesizing machines, rather than as something that generates thoughts, ideas, or feelings. It seems that [its creators] want to believe it is something else,” she adds.

That false humanity carries many problems: “It makes us trust it. It takes no responsibility. It tends to make things up. If it produces a correct text, it is by chance. Our societies are a system of relationships and trust. If we start to lose that trust in something that bears no responsibility, there are risks. As individuals reacting to this, we need to be careful about what we do with our trust. And the people building it need to stop making it sound human. It shouldn’t speak in the first person.”

Less of a potential Terminator

The effort to make them seem more human is not accidental. Without it, the splash ChatGPT made would have been much quieter: it would not have given that impression of a would-be Terminator, a cautious friend, or a visionary sage. “They want to create something that feels more magical than it is. It seems magical to us that a machine can be so human, but it is really the machine creating the illusion that it is human,” Bender says. “If you are in the business of selling technology, the more magical it seems, the easier it is to sell.”

Researcher Timnit Gebru, co-author with Bender of the stochastic parrots paper and fired from Google because of it, lamented on Twitter that the head of Microsoft had to admit in a documentary about ChatGPT that “it’s not a person, it’s a screen.”

The hype, however, is not only the fault of the company that made a chatbot talk as if it were human. There are AI applications that create photos, and soon videos and music. It is hard not to exaggerate these advances, even though they are all based on the same kind of pattern recognition. Bender asks for something difficult given how the media and social networks are structured today: context. “You can report on new things without overhyping them. You might ask: is this AI art, or is it just image synthesis? Are you synthesizing images, or are you imagining the program as an artist? You can talk about the technology in a way that keeps the people at the center. Countering the hype means talking about what is actually being done and who is involved in building it,” she says.

It must also be kept in mind that these models are built on an unimaginable amount of data, which would not have been possible without decades of feeding the internet billions of texts and images. That comes with obvious problems, according to Bender: “This approach to language technology relies on having data at internet scale. In terms of fairness between languages, for example, it will not work for every language in the world. But it is also an approach fundamentally trapped by the fact that handling data at internet scale means dealing with all kinds of garbage.”

That garbage does not only include racism, Nazism, or sexism. Even on serious pages, rich white men are overrepresented, and words such as “Islam” carry the connotations of the way they are sometimes discussed in the West. All of this sits at the heart of these models: re-educating them is an extraordinary and perhaps never-ending task.

Humans are not stochastic parrots

The parrot has not only made Bender famous. Sam Altman, co-founder of OpenAI, the creator of ChatGPT, has tweeted several times that we are stochastic parrots: perhaps we humans, too, merely reproduce what we have heard after a probabilistic calculation. Diminishing human capabilities this way makes it easier to inflate the supposed intelligence of machines, and to justify the next steps of OpenAI and other companies in a sector that lives close to a bubble. Ultimately, it makes it easier to raise more money.

“Working in AI goes hand in hand with seeing human intelligence as something simple that can be quantified, and with classifying people according to their intelligence,” says Bender. That classification makes it possible to define the next milestones for artificial intelligence: “There is ‘artificial general intelligence,’ which doesn’t have a very good definition, but is something like a system that can learn flexibly. And then there is ‘artificial superintelligence,’ which I heard about the other day, which is supposed to be even smarter. But it’s all fantasy.” The leap between the artificial intelligence we see today and a machine that truly thinks and feels remains enormous.

On February 24, Altman published a post titled “Planning for AGI [artificial general intelligence] and beyond.” It is about “ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity.” Bender took to Twitter to ask, among other things, who these people are to decide what benefits all of humanity.

The latest upgrade to ChatGPT allows Altman to present his post as something almost real, with such capabilities within reach. “Sam Altman seems to really believe he can build an autonomous intelligent entity. To hold that belief, you have to look at the existing technology and say: yes, this seems close enough to the kinds of autonomous intelligent agents I imagine. I think that’s harmful. I don’t know whether they believe what they say or are being cynical, but they seem to believe it,” Bender says.

If the belief spreads that these AIs do more than they appear to, that they are smarter, more people will be inclined to accept them slipping into other areas of decision-making: “If we believe that true artificial intelligence exists, we will also be more likely to believe that we can, of course, build automated decision systems that are less biased than humans, when in fact we can’t,” says Bender.

“Like an oil spill”

One of the most discussed possibilities for these text models is whether they will replace search engines. Microsoft, with Bing, is already trying, and the successive changes applied to its model since launch give a sense of its difficulties. Bender likes to compare it to an “oil spill”: “That’s a metaphor I hope sticks. One of the harms of these text synthesizers, set up as if they could answer questions, is that they pour non-information into our information ecosystem in a way that is hard to detect. It’s like an oil spill: it will be hard to clean up. When these companies talk about how they keep advancing and improving their accuracy, it’s like BP or Exxon saying: look how many birds we’ve saved from the oil we poured on them.”

While we talk about that improbable future, Bender argues, we stop paying attention to the present. “OpenAI wants to talk about how we can make sure AI benefits all of humanity and how we will regulate it. But I’d rather talk about how we regulate what we’ve built now, and what we need to do so that it doesn’t cause problems today, rather than this distraction about what would happen if we had these autonomous agents,” she says.

She hasn’t given up hope that some kind of regulation will arrive, in part because of the computational effort these models require: “It takes a lot of resources to get one of these things up and running, which leaves a little more room for regulation. We need regulation around transparency. OpenAI isn’t open about that. Hopefully, regulation will help people understand it better.”

Science fiction is not the only future

Bender is often dismissed as an angry woman complaining about technology, despite her master’s degree in computational linguistics: “It doesn’t hurt when people say that, because I know they’re wrong. But it does reveal this view that there is one set path along which science and technology advance, taking us to what we learned from science fiction. That is a self-defeating way of understanding what science is. Science is a group of people fanning out to explore different things and then talking to each other, not people racing along a straight path, trying to be the first to reach the end.”

Bender has one final message for those who think this path will be quick and simple: “What I’m going to say may sound sarcastic and simplistic, but maybe they are just waiting for the moment when these models have been fed so much data that they decide, right then, to become conscious all by themselves.” For now, that is the plan.
