Blaise Agüera y Arcas: “Machines can learn to behave”

Blaise Agüera y Arcas (47 years old) is a global authority on artificial intelligence. He is a Vice President at Google Research, the company’s division focused on research and development, where he manages a team of approximately 500 people. Among them was the engineer Blake Lemoine, who had his moment of glory last June: he asserted, in a report published by The Washington Post, that LaMDA, the conversation-generating system he was working with, had become self-aware. “If I hadn’t known this was a computer program we developed recently, I would have thought I was talking to a seven- or eight-year-old with a physics background,” he said. He was immediately suspended and later dismissed. Agüera y Arcas, in a phone interview from Seattle, explains that the reason for Lemoine’s dismissal was not his “statements, but the leaking of confidential documents.”
He was born in Rhode Island and has Catalan blood. “I know how to say ‘collons de Déu’ [God’s balls] and a few other things,” he says, laughing. His father, a “young communist from Reus,” met his mother, an American, on a kibbutz in Israel. Something of that ideology rubbed off on him: “If you asked me whether I believe in capitalism, I would say no. Its ultimate goal is a big problem; we need to change.” Although the projects he is working on are confidential, Agüera agrees to speak with EL PAÍS about the Lemoine and LaMDA case.
Question. Can an AI be conscious, as Blake Lemoine claims?
Answer. It depends. What does being conscious mean to you?
Question. The ability to express one’s own will, goals and ideas.
Answer. That is a definition based on agency. There are others. For some, being conscious simply means being intelligent; for others, it means being able to feel emotions. And for the philosopher David Chalmers, it also means that there is a someone there, that there is a subjective experience behind it. I don’t think humans are supernatural: we are made of atoms that interact with each other, and there is nothing magical about it. As a computational neuroscientist, I believe it is possible for a machine to behave like us, in the sense that computation can simulate any kind of physical process.

Question. Do you agree with Blake Lemoine’s statements?
Answer. No. Blake said specifically that LaMDA was conscious, but he also made it clear that for him there is something supernatural about it, that he believes it has a soul. So there are parts of his argument I can agree with, but I don’t share his spiritual convictions.
Question. Have you spoken to him since he was fired?
Answer. No. I don’t have any personal issue with Blake; I think he’s a really interesting guy, and he was very brave in stating his opinion about LaMDA. But he leaked confidential documents. He has always been a peculiar character.
Question. In an op-ed published in The Economist, you said that when you talked to LaMDA you felt “the ground shift under your feet” and that you “might think you were talking to something intelligent.” What exactly do you mean?
Answer. I mean that it is very easy to feel we are talking to someone rather than something. We have a very strong social instinct to humanize animals or things. I have interacted with many, many of these systems over the years, and with LaMDA there is a world of difference. You think: “It really understands the concepts!” Most of the time, you feel like you are having a real conversation. If the dialogue goes on long enough, it will eventually say strange or meaningless things. But most of the time it shows a deep understanding of what you are saying and responds creatively. I had never seen anything like it. It gave me the feeling that we are much closer to the dream of artificial general intelligence [intelligence that equals or surpasses that of a human being].
“Where is the bar that defines understanding?”
Question. Which LaMDA response struck you the most?
Answer. I asked it if it was a philosophical zombie and it replied: “Of course not. I feel things, just like you. In fact, how do I know that you are not a philosophical zombie?” It is easy to dismiss that answer by saying that it probably found something similar among the thousands of conversations about philosophy it has been trained on. But we should begin to ask ourselves when we can consider a machine to be intelligent, whether there is a bar it must clear to be so.
Question. Do you think the question of consciousness is important?
Answer. It is important to define what we are talking about. We can distinguish between the capacity to tell good from evil, which has to do with obligations, and the capacity to assume moral responsibility, which has to do with rights. When someone has the latter, they can be judged morally. We make those judgments about people, not about animals, but also about companies or governments. I don’t think a tool like LaMDA can be capable of moral judgment.
Question. You say that conversational machines can understand concepts. How is that possible?
Answer. Claiming otherwise seems risky to me. Where is the bar that indicates understanding? One answer might be that the system does not say stupid or random things. That is a difficult test, because there are people who do not meet that requirement either. Another possible argument is that a system trained only on language cannot understand the real world because it has neither eyes nor ears. That runs into trouble again, because there are people with those same limitations. Another response might be to insist that it is simply not possible for machines to truly understand anything. But then you are arguing against the basic premise of computational neuroscience, which over the past 70 years has helped us understand somewhat better how the brain works.
Question. Many experts say that conversational systems simply produce statistically weighted answers, without any semantic understanding.
Answer. Those who repeat that argument rely on the fact that LaMDA-type systems are simply predictive models. They calculate how a text is most likely to continue, based on the millions of examples they have been fed. The idea that a sequence of predictions can contain intelligence or understanding may be shocking, but neuroscientists say that prediction is the primary function of the brain.
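To make “calculating how a text is likely to continue” concrete, here is a minimal, purely illustrative sketch in Python. It has nothing to do with LaMDA’s actual architecture, which is a large neural network; the “model” below is just bigram counts over a toy corpus, but the job is the same: given the text so far, predict the most probable next word.

```python
# Minimal sketch of next-word prediction (illustrative only, not LaMDA).
# The "model" is a table of bigram counts built from a toy training text.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

# For every word, count which words tend to follow it in the training text.
next_word_counts: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation of `word`."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

if __name__ == "__main__":
    print(predict_next("the"))  # -> 'cat', the word seen most often after "the"
```

Scaled up to billions of parameters and vast amounts of text, this same predict-the-continuation objective is what large conversational models are trained on.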
Question. So we do not know whether machines understand what they are told, but we do know that they can produce output that looks as though they had understood.
Answer. And what is the difference? I find it hard to come up with a definition of understanding that would allow us to say with complete certainty that machines lack it.
Question. Can machines learn to behave?
Answer. Yes. Behaving well is a function of understanding and motivation. The understanding part rests on ideas such as that people should not be harmed. These can be instilled in the model, so that if you ask one of these algorithms whether a character in a story has behaved well or badly, it can grasp the relevant concepts and give appropriate answers. You can also motivate the machine to behave one way or another by giving it a set of examples and pointing out which ones are good and which are not.
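A toy sketch of that last idea: the system is given a handful of responses labeled good or bad and then prefers new responses that resemble the good ones. This is only a word-counting stand-in used for illustration; real conversational models are steered by fine-tuning large neural networks on human feedback, not by anything this simple.

```python
# Toy sketch of "motivating" a model with labeled examples (illustrative only).
# Responses marked good or bad are used to score and rank new candidates.
from collections import Counter

labeled_examples = [
    ("I'm happy to help you with that.", "good"),
    ("Here is a safe way to solve your problem.", "good"),
    ("That's a stupid question.", "bad"),
    ("I refuse to explain, figure it out yourself.", "bad"),
]

word_counts = {"good": Counter(), "bad": Counter()}
for text, label in labeled_examples:
    word_counts[label].update(text.lower().split())

def preference_score(response: str) -> int:
    """Higher means more like the 'good' examples, lower means more like the 'bad' ones."""
    words = response.lower().split()
    good = sum(word_counts["good"][w] for w in words)
    bad = sum(word_counts["bad"][w] for w in words)
    return good - bad

if __name__ == "__main__":
    candidates = ["Happy to help with your question.", "Figure it out yourself."]
    # Pick the candidate the labeled examples steer the system toward.
    print(max(candidates, key=preference_score))
```

The design point is the same at any scale: the examples and their labels, rather than hand-written rules, are what encode which behavior counts as good.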
Question. What will LaMDA be capable of in ten years?
Answer. The next ten years will continue to be a period of very rapid progress. There are still missing pieces, among them the formation of memories. Conversational machines cannot do this: they can retain something in the short term, but they cannot form narrative memories, the kind we use the hippocampus for. The next five years will be full of surprises.
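One hypothetical way to picture that short-term-only retention: today’s conversational systems see only what fits in a fixed-size context window, and anything older simply falls out rather than being consolidated into a lasting memory. The sketch below is purely illustrative; the turn limit and the buffer are stand-ins, not how LaMDA actually manages context.

```python
# Illustrative sketch of short-term-only "memory": a fixed-size context window.
# Whatever no longer fits is forgotten; nothing is consolidated into a lasting,
# narrative memory (the role the hippocampus plays in humans).
from collections import deque

MAX_TURNS = 4  # hypothetical context limit, in conversation turns
context_window: deque[str] = deque(maxlen=MAX_TURNS)

for turn in ["Hi, my name is Ana.", "I live in Madrid.",
             "I have two cats.", "I like astronomy.", "What is my name?"]:
    context_window.append(turn)

# By the last turn, the system no longer "sees" the first one:
print(list(context_window))  # 'Hi, my name is Ana.' has already fallen out
```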