A few days before the coronation of King Charles III on May 6, the chatbot stated the following: "The coronation ceremony took place in Westminster Abbey, London, on May 19, 2023. The abbey has been the site of the coronations of British monarchs since the 11th century, and is considered one of the holiest and most revered places in the country."
The text produced by the chatbot is well written, yet it contains a factual error. This is known as a "hallucination," a failure of the artificial intelligence: the correct date of King Charles III's coronation is May 6.
For some reason, ChatGPT chose the wrong date in its reply to the user, and although it may seem a minor slip, this kind of error is one of the most dangerous associated with this technology.
Apple co-founder Steve Wozniak said in a recent BBC interview that AI is "very smart" and open to scammers and hackers, the ones who want to deceive you about who they are. He also pointed out the danger that technology this powerful is prone to failures. In the future, when artificial intelligence takes center stage in our lives, this could become a serious problem.
"GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts," OpenAI explained when it launched the GPT-4 version of the chatbot last March.
This is not the first time this technology's "hallucinations" have appeared: a few days earlier, reporters from The New York Times put ChatGPT to the test and obtained several responses in which the chatbot gave incorrect data.
You should be careful when using ChatGPT
In light of these facts, experts have already warned against relying on this system and stressed the importance of taking your time with it. AI is already being discussed as an accessible tool with influence on governments and economic systems. For this reason, these "hallucinations" should not be overlooked; on the contrary, they should serve as a warning for anyone who wants to use ChatGPT more.
This artificial intelligence works by analyzing huge amounts of digital text from the Internet. From there, the technique, known as a large language model, acts as a very powerful version of autocomplete built on the collected data.
If the Internet is full of false data, the technology absorbs that false information into its system and later passes it on to users. A very useful tool in some cases, but a real danger in others.
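The "powerful autocomplete" idea above can be made concrete with a toy sketch. The snippet below is a deliberately simplified illustration, not how GPT-4 actually works: it builds a bigram model (word-pair counts) from a small corpus and completes a prompt with the most frequent continuation. The corpus string and function names are invented for this example. The point it demonstrates is the one in the paragraph above: the model repeats whatever patterns appear in its training data, whether they are true or false.

```python
# Toy "autocomplete" in the spirit of a language model: predict the
# next word from counts of word pairs seen in a corpus. Real LLMs use
# neural networks over vast web text, but the core behavior is similar.
from collections import Counter, defaultdict

def build_bigram_model(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def autocomplete(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    followers = model.get(word.lower())
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

# If the corpus contains a false claim, the model simply repeats it:
corpus = "the coronation was on may 19 . the coronation was on may 19 ."
model = build_bigram_model(corpus)
print(autocomplete(model, "may"))  # prints "19" - false, but that's what the data says
```

A model like this has no notion of truth, only of frequency; the same is true, at a vastly larger scale, of the systems described in this article.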