Google executives have been on the back foot for months now. The arrival in November of ChatGPT, OpenAI’s popular chatbot, wrested from the Mountain View giant its status as the leading artificial intelligence (AI) company, a mantle it had held for years. Microsoft’s firm commitment to OpenAI, which has developed a version of ChatGPT for the Bing search engine, has forced Google to take steps not to be left behind. After introducing its own chatbot, Bard, in February, yesterday it made another announcement in the same vein: its two big AI research labs, Google Brain and DeepMind, are being merged into a single organization.
The move is significant. Many of the world’s best scientists in this discipline work for one of the two organizations. Google Brain is responsible for most of the AI-related features that appear in Google products and services, from Gmail’s mail-scanning engine to the translator and the browser. It is also where the transformer was developed, the deep learning model that proved fundamental to advances in natural language processing (the field underpinning chatbots like ChatGPT) and computer vision.
Google acquired the British company DeepMind, which is devoted to more fundamental research, in 2014 for $500 million. Until now it had no ambition to develop commercial applications; rather, it built tools to help advance research. Its labs produced AlphaStar, a system capable of beating expert players of StarCraft II, a real-time strategy video game with imperfect information, in which intuition, imagination and cognitive skill are needed to guess what the opponent is doing; and AlphaFold, the AI that predicted the structure of virtually all known proteins (some 200 million molecules).
The new group will be called Google DeepMind and will be headed by Demis Hassabis, the low-profile computing genius who has been running DeepMind until now. “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI,” Sundar Pichai, CEO of Alphabet (Google’s parent company), said in a statement released yesterday.
The move is striking because Pichai himself has insisted in recent weeks that the industry must tread carefully in the race for generative AI. We are facing a technology with the “potential” to do great damage, he argues, and Google has chosen to be “very responsible” in its developments. He has said as much in numerous interviews, most recently last weekend on CBS.
But that caution seems to have suddenly evaporated. The spark that triggered the decision to up the ante on AI may have been Samsung. As The New York Times reported last weekend, the Korean technology company, the world’s largest mobile phone manufacturer, is considering replacing Google with Bing as the default search engine on its devices. According to the newspaper, Google’s offices have known about this since March. If the switch goes ahead, it could mean a loss of about $3 billion a year.
This threat to its bottom line has led Google to accelerate another project already in the works. Christened Magi, it is a new search engine, distinct from Bard, Google’s answer to Microsoft’s Bing. It will offer a more personalized experience than traditional Google search and will learn from users’ previous queries. Users will interact with it through conversation, as is already the case with Bing, and it “will try to anticipate users’ needs,” according to The New York Times.
Do machines feel?
The summer of 2022 was, in a way, a preview of what would happen in the months that followed. Google then had several open fronts related to the big questions we ask ourselves today about AI. Will these systems be able to match or exceed human intelligence? Do chatbots really understand what we say to them?
Engineer Blake Lemoine, who was responsible for running a series of tests on the LaMDA chatbot, claimed in a report published by The Washington Post that the system he was analyzing had become self-aware. “If I didn’t know exactly what it was, which is this computer program we built recently,” he said, “I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.” In an interview with EL PAÍS, his former boss, Blaise Agüera y Arcas, defended Lemoine’s dismissal for disclosing internal documents and rejected the engineer’s claims, although he admitted that this kind of debate would become increasingly complicated.
A year earlier, the company had fired the leaders of its ethical AI team after they published an academic paper warning about the dark side of large language models, the technology behind chatbots.