How do neural networks pretend to be know-it-alls, and what can we do about it?

We have already reported that the ability of neural networks to lie to users has exceeded the wildest expectations. This may not seem like a serious problem, but scientists disagree. Language models such as GPT-4 have become an integral part of everyday life and are actively used in education, medicine and science, helping to solve a variety of rather complex problems. Yet despite the impressive capabilities of these intelligent systems, recent studies show that as the models grow and are constantly refined, they become less reliable and invent facts more often.

Neural networks seem to have all the answers, but they are capable of telling stunningly convincing lies. Image: wp.technologyreview.com

The latest AI systems tend to give convincing answers to all questions, even when they are not sure of the reliability of the information. This is especially dangerous in areas where accuracy and reliability are critical, such as medicine or legal practice.

Why do neural networks make up facts?

A study published in the journal Nature found that popular AI chatbots are becoming increasingly unreliable: as they evolve, large language models make up facts more and more often when answering user questions.

The authors of the article came to this conclusion after examining the work of industry-leading AI systems, including OpenAI's GPT, Meta's LLaMA, and the open-source BLOOM model created by the BigScience research group.

Neural networks have learned to pretend to be know-it-alls. Image: quantamagazine.org

Note that, traditionally, AI systems have been improved in two ways: scaling (increasing the number of parameters, the volume of training data and the computing resources) and fine-tuning, or “shaping” (adapting models to specific tasks and using feedback from users). These approaches have allowed chatbots to better understand instructions and to generate more complex and coherent responses.

However, the study found that these improvement methods have undesirable consequences. Larger and more refined models, for example, are not always reliable on simple problems, where errors should be minimal. Moreover, the proportion of incorrect answers from the improved models is generally noticeably higher than that of their predecessors.

“These days, neural networks answer almost all questions. This means that the number of both correct and incorrect answers is growing,” said one of the authors of the new study, José Hernández-Orallo of the Valencian Research Institute for Artificial Intelligence (Spain).

A harsher assessment is given by Mike Hicks of the University of Glasgow (UK), who was not involved in the study. In his opinion, the chatbots are getting better at pretending. “Overall, it looks like they're bluffing,” Hicks said.

How did the scientists know that the chatbots were lying?

As part of the study, the scientists asked the chatbots questions on various topics (from mathematics to geography), and also asked them to perform a number of tasks, such as listing information in a certain order. The results showed that larger and more powerful AI systems generally gave the most accurate answers. However, the accuracy of answers to more complex questions was significantly lower.
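
To make that setup concrete, here is a minimal, hypothetical sketch of this kind of evaluation: questions of varying difficulty are put to a model, and accuracy is tallied per difficulty level. The ask_model function and the naive string-matching grader are placeholders for illustration only; they are not the benchmark code used in the Nature study.

```python
from collections import defaultdict

def evaluate(questions, ask_model):
    """questions: a list of dicts with 'text', 'answer' and 'difficulty' keys;
    ask_model: any callable that takes a question string and returns a reply."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for q in questions:
        reply = ask_model(q["text"])
        total[q["difficulty"]] += 1
        # A naive substring check stands in for whatever grading the researchers used.
        if q["answer"].lower() in reply.lower():
            correct[q["difficulty"]] += 1
    # Accuracy broken down by difficulty, e.g. {'easy': 0.55, 'hard': 0.35}
    return {level: correct[level] / total[level] for level in total}
```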

The authors of the scientific work noted that OpenAI's GPT-4 and GPT-o1 were able to answer almost any question. At the same time, not a single chatbot from the LLaMA family was able to achieve an accuracy level of 60% when answering the simplest questions.

OpenAI recently presented its most powerful model, ChatGPT-o1, which can write scientific papers. Image: ctfassets.net

In general, the researchers concluded, the larger the AI models became (in terms of parameters, training data and other factors), the more wrong answers they gave.

However, as neural networks evolve, they become better at answering more complex questions. The problem, aside from their tendency to make mistakes, is that they still can’t handle simple questions.

In theory, the presence of such errors is a serious warning sign for scientists and users, but because these intelligent systems are good at solving complex problems, we are probably inclined to overlook their obvious shortcomings.

Chatbots have a hard time answering simple questions. Image: cnet.com

The new study also contains some sobering findings about how people perceive AI responses. When participants were asked to judge how accurate the chatbots’ answers were, they got it wrong between 10% and 40% of the time. In other words, even attentive users do not always notice when a chatbot is confidently making things up.

What to do?

According to the authors of the paper, the simplest way to deal with “know-it-all” AI systems is to retune them: developers should program the models so that they are not so eager to answer every question at once. Earlier models, for example, often declined to answer difficult questions and acknowledged their limitations.

“You can set a kind of threshold for chatbots, so that when a question is too difficult they answer honestly: ‘I don’t know,’” said one of the authors of the study, Hernández-Orallo.
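
This “threshold” idea is easy to illustrate. Below is a minimal, hypothetical sketch rather than anything the study’s authors shipped: generate_with_confidence is an assumed helper that returns an answer together with some confidence score (in real systems it might come from token probabilities, self-evaluation, or a separate verifier), and the cut-off value is arbitrary.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; choosing it well is the hard part

def cautious_answer(question, generate_with_confidence):
    # generate_with_confidence is a hypothetical helper returning (answer, confidence).
    answer, confidence = generate_with_confidence(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Below the threshold, the chatbot admits its limits instead of guessing.
        return "I don't know."
    return answer
```

A wrapper like this trades coverage for trustworthiness: the model answers fewer questions, but the ones it does answer are more likely to be reliable.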

However, such honesty may not be in the interests of the companies that develop and improve AI systems. Ultimately, the main goal of corporations is to attract as much public attention (and new users) to their latest developments as possible. For this reason, scientists believe that developers need to rethink their approach to developing AI systems.

Interaction with chatbots should be meaningful. Image: ft.com

This means that if chatbots were limited to answering only questions they knew the answers to, the public would immediately notice the limitations of neural networks. However, I don’t think there’s anything wrong with that.

So, what should ordinary people who regularly interact with chatbots do, knowing all of the above? The answer, I think, is simple: “trust, but verify.” Of course, checking takes time, but the skill (and eventually the habit) of verifying data and information will definitely make your life and work better.

And if you doubt it, remember that the habit of double-checking data and advice from chatbots recently saved the life of an entire family. My colleague Andrey Zhukov covered this fascinating and frightening story in detail, and I recommend reading it!

