Elon Musk joins the artificial intelligence “boom” with a new company


Elon Musk launched xAI
Elon Musk at the Viva Technology conference in Paris, France, on June 16, 2023. Gonzalo Fuentes (Reuters)

Fewer than 10 words were enough for Elon Musk to announce a new project. The billionaire has entered the field of artificial intelligence with xAI, a new company that, in his view, will help “understand reality” and the nature of the universe. The company debuted on social networks by asking a philosophical question: “What are the fundamental unanswered questions?” It is the latest move from the world’s richest man, who spends long hours on Twitter, the platform he bought last year and whose existence has been threatened since the launch of Threads, backed by tech giant Meta.

The xAI team was introduced this Wednesday through a webpage. It is headed by Musk and made up of 11 other engineers, all men, with experience at companies such as DeepMind, OpenAI, Google, and Microsoft. Among them are people who helped develop GPT-3.5 and GPT-4, the models behind ChatGPT, which marked a before and after in the sector by attracting more than 100 million users in the first two months after its launch. The team will make its official presentation on Friday in a Spaces conversation on Twitter. The announcement explained that the company is hiring.

This isn’t the first time Musk has taken an interest in artificial intelligence. The entrepreneur has spent more than ten years developing technology of this kind, some of which is already at work at Tesla. But in recent months, above all as a result of the success of ChatGPT, he decided to step on the accelerator. In March, he and his associate Jared Birchall registered the company’s name with Nevada state authorities. A month later, he was already in talks to persuade investors in his car company and SpaceX to provide new resources to pour into xAI. According to the Financial Times, Musk has bought thousands of processors from Nvidia, the company whose shares have soared on the stock market amid the artificial intelligence bubble.


The Tesla and SpaceX owner helped found OpenAI, the company that launched the ChatGPT bot, but left its board of directors in 2018. Since then, he has been an outspoken critic of the company, describing it as effectively run by Microsoft, which has invested $13 billion in the development of the chatbot.

In late March, Musk was one of the most prominent voices in an industry chorus calling for caution in the new wave of artificial intelligence. In an open letter, experts and tech executives asked for a six-month pause in research, arguing that these tools pose “profound risks to society and humanity”. The document referred to the ground rules that many developers and industry leaders adopted in 2017 at a conference convened by the Future of Life Institute, where they agreed that advanced artificial intelligence should be managed with commensurate ethical care and resources, given its potential to profoundly “alter the history of life on Earth”.

The xAI launch appears to address these concerns. In addition to the 12-person team that makes up the new company, it has also brought on Dan Hendrycks, who holds a PhD from UC Berkeley and heads the Center for AI Safety (CAIS), a San Francisco-based nonprofit that seeks to develop the sector while reducing its potential harms to society. The organization offers philosophy fellowships and teaches courses that cover, among other things, how to detect anomalies in machine-learning systems.

Hendrycks, along with other authors, explained in a recent paper posted on arXiv, the repository hosted by Cornell University, that there are four broad categories in which AI can cause harm to society. The first is malicious use: groups or individuals wielding the tools with bad intentions. The second is the development race between companies, in which pressure from investors to move quickly can lead to releasing unfinished or incomplete versions to users, or to handing too much control to the algorithms. The third is organizational risk: the way human error interacts with complex systems and can lead to “catastrophic incidents”. The fourth is perhaps the most terrifying: software whose intelligence far exceeds that of humans and that rebels against society. “Our goal is to advance a deep understanding of these risks and inspire collective efforts to make sure AI is used safely,” Hendrycks writes in the paper.


You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive the weekly newsletter.


