Oriol Vinyals, DeepMind: “Our generation will see artificial intelligence equal to or greater than human intelligence”

Ever since he saw 2001: A Space Odyssey as a child, Oriol Vinyals knew he wanted to dedicate himself to artificial intelligence. “I was fascinated by how the computer could speak naturally. Could we achieve something like that?” the teenager from Sabadell was already wondering. Today, at 39, he is a global authority on deep learning, one of the most advanced artificial intelligence (AI) technologies. His scholarly articles have been cited tens of thousands of times, and his research has contributed to improving machine translation systems and the way machines interpret and classify images. Elon Musk himself, a figure not known for his modesty, responded with gratitude to a tweet from the Catalan researcher praising one of Tesla’s projects.
Vinyals is Director of Research at DeepMind, a British company acquired by Google in 2014 that has made great strides in this field. The startup made its first headlines in the international press thanks to AlphaGo, the program that managed to beat the world champion of Go, the thousand-year-old Asian game whose board allows more possible arrangements of pieces than there are atoms in the universe. Not only did the program outplay the best human players; along the way it invented moves never seen before.
The Catalan researcher joined Google in 2013, after earning his PhD at the University of California, Berkeley. Less than a year later, he landed at the newly acquired DeepMind. And in 2016 he led the team responsible for the company’s next big achievement: AlphaStar, a system capable of beating skilled StarCraft II players. It is a real-time strategy video game with imperfect information (each player only sees what happens on the part of the map they have explored), where intuition, imagination and cognitive skill are needed to guess what the opponent is doing: qualities that AI had not yet shown it could master.
Since then, he has been part of or supervised the teams behind AlphaFold, an artificial intelligence that predicts the structure of all known proteins (about 200 million molecules), and AlphaCode, a program capable of writing code at the level of the best programmers. That same week, DeepMind presented a new advance in the gaming arena: DeepNash, an algorithm capable of playing Stratego, a board game more complex than Go, like an expert human. Vinyals received EL PAÍS at DeepMind’s London offices in King’s Cross, coincidentally a stone’s throw from the offices of Google’s archrival Meta. “From my office window I can greet them,” he says with a laugh.
Question. When you have to explain to someone what you do, what do you tell them?
Answer. It’s hard to explain. We develop machines that can learn on their own, for example to play games. Previously, AI consisted of programming a series of specific instructions, for example for the machine to say a series of sentences. Now, with deep learning, what you do is teach it that when it sees the words “the sky is,” it should say “blue” next. You teach it to predict that word by showing it thousands or millions of examples. You keep refining the system with an automatic learning algorithm until it is able to string together meaningful sentences. The magic is that when you give it an input that was not among the examples it analyzed, that artificial brain generalizes and is able to make reasonable extrapolations.
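As a rough illustration of the idea he describes (not DeepMind’s actual models), the toy Python sketch below “learns” next-word prediction by counting continuations in a handful of example sentences; a real deep learning system replaces the counting with a neural network trained on vastly more data, but the prediction task is the same.

```python
# Toy illustration of next-word prediction learned from examples.
# A simple count-based model, not a neural network, but the training
# signal is the same: given a context, predict the next word.
from collections import Counter, defaultdict

examples = [
    "the sky is blue",
    "the sky is clear",
    "the sky is blue today",
    "the grass is green",
]

# "Training": count which word follows each context seen in the examples.
next_word_counts = defaultdict(Counter)
for sentence in examples:
    words = sentence.split()
    for i in range(len(words) - 1):
        context = " ".join(words[: i + 1])
        next_word_counts[context][words[i + 1]] += 1

def predict_next(context: str) -> str:
    """Return the most frequent continuation seen during 'training'."""
    counts = next_word_counts.get(context)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the sky is"))  # -> "blue"
```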
Question. Deep learning can be applied to almost any field. Why did you start with games?
Answer. Games are very useful in research because they provide a controlled environment for experimentation: if you win or lose, nothing happens, and it is easy to define the goal, which is to win the game. You can run 1,000 games in parallel at almost no cost, for example, and set 1,000 agents playing. And the simulations can be accelerated, so they progress faster than they would in real time.
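To make the parallelism point concrete, here is a minimal, hypothetical sketch (the toy “game” and all names are invented) of running many simulated episodes at once and reading off a win rate, the kind of cheap, fast feedback loop that simulated games give researchers.

```python
# Minimal sketch of why simulated games are convenient for research:
# episodes can run in parallel and faster than real time, and the
# win/lose outcome gives a clear success signal.
import random
from concurrent.futures import ProcessPoolExecutor

def play_episode(seed: int) -> int:
    """Play one toy 'game' with a random policy; return 1 for a win, 0 otherwise."""
    rng = random.Random(seed)
    score = sum(rng.choice([-1, 1]) for _ in range(100))  # 100 simulated moves
    return 1 if score > 0 else 0

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(play_episode, range(1000)))  # 1,000 games in parallel
    print(f"win rate over 1,000 simulated games: {sum(results) / len(results):.2%}")
```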
Question. Why were you assigned to the AlphaStar project?
Answer. When I was young, I played a lot of StarCraft in internet cafés in Sabadell. And at Berkeley, a colleague and I developed a rudimentary simulator for that game. When I came to DeepMind I came from Google Brain, the company’s research project focused on deep learning. I had worked on text translation and image classification systems, among other things. And though it may not seem like it, the algorithms behind those systems have a lot to do with game simulators. For example, the first step in AlphaStar is learning from the games humans play. After the algorithm has studied many games and seen what has happened so far in the current one, you ask it to tell you, at a given moment, where the human will click next. This first step is identical to what is used in text translation or to generate natural language: after analyzing millions of words or phrases, you ask it which letter or word is most likely to come next in the conversation.
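A hedged sketch of that first, supervised step (often called imitation learning or behavioural cloning), not AlphaStar’s actual architecture: a small recurrent network is trained to predict the action a human took given the observations so far, with the same cross-entropy loss used for next-token prediction in language models. All dimensions and data below are invented.

```python
# Behavioural cloning sketch: predict the human's next action from the game so far.
import torch
import torch.nn as nn

NUM_ACTIONS = 10   # e.g. discrete "where to click next" choices (illustrative)
OBS_DIM = 32       # size of one observation vector (illustrative)

class NextActionPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(OBS_DIM, 64, batch_first=True)  # summarise the game so far
        self.head = nn.Linear(64, NUM_ACTIONS)                 # score each possible action

    def forward(self, observations):            # observations: (batch, time, OBS_DIM)
        _, (hidden, _) = self.encoder(observations)
        return self.head(hidden[-1])             # logits over next actions

model = NextActionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "replay" batch standing in for recorded human games.
obs = torch.randn(8, 20, OBS_DIM)                     # 8 games, 20 timesteps each
human_actions = torch.randint(0, NUM_ACTIONS, (8,))   # the action the human actually took

logits = model(obs)
loss = loss_fn(logits, human_actions)   # same loss shape as next-token prediction
loss.backward()
optimizer.step()
print(f"imitation loss: {loss.item():.3f}")
```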
Question. Then came AlphaFold and AlphaCode. Are they related beyond the name?
Answer. They are completely different projects, although what we discover in one we pass on to the algorithms of the other. We applied the lessons learned with AlphaStar to the architectures and system optimization of natural language models and of AlphaFold, which allowed us to reveal the structure of proteins. The algorithms we develop in each project are like tools that you collect and can apply elsewhere. Everything we have done so far helps us, for example, with some of the work we are doing on nuclear fusion.
Question. Nuclear fusion?
Answer. Yes. Achieving fusion itself is relatively simple; the hard part is extracting more energy than you put in. In nuclear fusion, a kind of hollow doughnut-shaped tube is used, with electromagnetic fields controlled at very high frequencies. Inside the doughnut is the plasma, which is heated so much that energy is generated, because there comes a point when the atoms start to fuse. Our contribution is in the control of those electromagnetic fields: you have to make sure the plasma never touches the wall, that it stays where it needs to be. To do that, you have to balance it very precisely and very quickly. It is a very complex system. It is like a game: it is about optimizing the controls so that the plasma stays in good shape. We use reinforcement learning algorithms. There are promising results, but we are still at a very early stage.
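As a loose analogy for that kind of control problem (a toy, not DeepMind’s plasma controller), the sketch below uses tabular Q-learning to keep a drifting value centred on a target by trial and error; the real system learns to shape magnetic fields from far richer observations.

```python
# Toy reinforcement-learning control loop: learn corrections that keep a
# drifting value near a target. Environment, reward and discretisation invented.
import random

ACTIONS = [-1.0, 0.0, +1.0]   # push down, do nothing, push up
q_table = {}                   # (discretised state, action index) -> value estimate

def discretise(position: float) -> int:
    return max(-5, min(5, round(position)))

def step(position: float, action: float) -> tuple[float, float]:
    """One control step: random drift plus the agent's correction."""
    new_position = position + random.uniform(-0.5, 0.5) + 0.3 * action
    reward = -abs(new_position)   # best reward when centred on the target at 0
    return new_position, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.1
for episode in range(2000):
    position = random.uniform(-3, 3)
    for _ in range(50):
        state = discretise(position)
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))   # explore
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q_table.get((state, i), 0.0))
        position, reward = step(position, ACTIONS[a])
        next_state = discretise(position)
        best_next = max(q_table.get((next_state, i), 0.0) for i in range(len(ACTIONS)))
        old = q_table.get((state, a), 0.0)
        q_table[(state, a)] = old + alpha * (reward + gamma * best_next - old)

best = max(range(len(ACTIONS)), key=lambda i: q_table.get((3, i), 0.0))
print("learned correction when the value sits at +3:", ACTIONS[best])  # expect -1.0
```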
“Creating an artificial intelligence equal to or greater than our own will be the greatest scientific advance that humanity will make”
Question. What else are you working on?
Answer. We are also trying to improve weather forecasts by studying how clouds move. If we manage to make week-ahead forecasts for the whole planet, something we cannot yet do, we will be able to better understand the consequences of the climate emergency. It is a new field for us. As a researcher, the most exciting thing about deep learning is that it cuts across the sciences: it can be applied to biology, physics or whatever you want. Deep learning has countless applications.
Question. You are also developing an AI system that specializes not in a single task but in several. Is it your most ambitious project?
Answer. AI is often criticized for specializing in one thing, even something as enormously relevant as nuclear fusion, while understanding nothing beyond its mission. We want to change that. What we have achieved so far is 100% performance at playing Go, folding proteins or programming. The future lies in multitask models, with performance of perhaps 10 or 20%, but across many or all tasks. That is what we want to achieve with our Gato neural network. Right now you can strike up a conversation with it, give it a text to respond to or show it a photo for it to comment on. It can also play simple video games and control a robotic arm. The tasks it performs are not perfect: sometimes it makes mistakes with simple things, such as telling right from left. But it will get better. We will end up developing one algorithm that does everything.
Question. Is Gato a first step towards an artificial general intelligence equal to or superior to a human’s?
Answer. Yes, clearly. I think language processing is currently the most promising path towards truly general AI. And it is being achieved with algorithms that will create systems more general than the ones we use today. AlphaCode is another good example: having systems that understand the language of code means they can create much more general applications than we have seen before.

Question. Do you think our generation will see one of these artificial general intelligences?
Answer. Yes, I think we will live to see it. But I also think that at first it will not be something that changes everything overnight. The transition will be gradual; in fact, in the field of artificial intelligence there is already tangible progress. We will see a series of jumps or transitions that, taken one by one, will not be astonishing, but they will add up, and looking back the result will be truly astonishing. In a few years, I don’t know how many, systems will increasingly be able to do more different things and do them better: 20%, 30%… until they reach 100%. Since it will be gradual, people will get used to it.
Question. This summer, a Google engineer said the chatbot he was working on had become sentient. Can machines feel?
Answer. I find it a very interesting discussion. I work in the guts of artificial intelligence, so to speak, and machines clearly have no consciousness. Chatbots can tell you the time and things like that, but they have very basic limitations. One of them is that they are not aware of their own existence. Another very obvious one is that they have no long-term memory: you start from scratch with every conversation, and they contradict themselves. In any case, I think it is very useful to discuss these issues.
Question. The most advanced conversational models have no semantic understanding of what is said to them, yet they can produce the answers that a person who did understand would give. Are they intelligent?
Answer. The part of this that interests me most is the utilitarian one. If we manage to teach these algorithms to play games and verify that they master them, we can analyze the process followed to get there. Whether or not that is intelligence does not matter much to me. I understand it can be interesting for someone who studies the human mind. My mathematical training leads me to believe that what matters is getting a machine to perform a task in a way that is indistinguishable from how a human would do it.
Question. Are we ready as a society to take in more advances of this kind?
Answer. I believe that achieving artificial general intelligence will be one of the most profound scientific achievements humanity can reach, because we do not even understand our own intelligence, despite neuroscientists’ many advances. We need to talk more about it and its implications. Philosophers, sociologists and historians have a great deal to contribute to our work. We have to think about the long-term consequences of AI.