Large artificial intelligence language models are becoming increasingly unreliable

According to José Hernández-Orallo, a researcher at the Valencian Institute for Research in Artificial Intelligence (VRAIN) of the UPV and ValgrAI, one of the main concerns about the reliability of language models is that their performance does not match human perceptions of task difficulty: the tasks on which the models actually fail are not the tasks on which people expect them to fail. ‘Models can solve certain complex tasks in line with human abilities, but at the same time they fail on simple tasks in the same domain. For example, they can solve several PhD-level mathematical problems, yet they can get a simple addition wrong,’ notes Hernández-Orallo.

In 2022, Ilya Sutskever, the scientist behind some of the most significant advances in artificial intelligence in recent years (from the ImageNet breakthrough to AlphaGo) and co-founder of OpenAI, predicted that ‘maybe over time that discrepancy will diminish’.

However, the study by the UPV, ValgrAI and Cambridge University team shows this has not been the case. To demonstrate this, they investigated three key aspects that affect the reliability of language models from a human perspective.

There is no ‘safe zone’ in which models work perfectly

The study finds a mismatch with human perceptions of difficulty. ‘Do models fail where we expect them to fail? Our work finds that models tend to be less accurate on tasks that humans consider difficult, but they are not 100% accurate even on simple tasks. This means there is no "safe zone" in which models can be trusted to work perfectly,’ says Yael Moros Daval, a researcher at the VRAIN Institute.

In fact, the team from the VRAIN Institute at the UPV, ValgrAI and the University of Cambridge reports that the most recent models mainly improve their performance on difficult tasks, not on easy ones, ‘which aggravates the difficulty mismatch between the performance of the models and human expectations,’ adds Fernando Martínez-Plumed, also a researcher at VRAIN UPV.

More likely to provide incorrect answers

The study also finds that recent language models are much more likely to provide incorrect answers than to decline to answer tasks they are unsure of. ‘This can lead users who initially rely too much on the models to be disappointed. Moreover, unlike people, the models' tendency to avoid providing answers does not increase with difficulty; humans, for example, tend to avoid answering questions beyond their capacity. This puts the onus on users to detect faults during all their interactions with models,’ adds Lexin Zhou, a member of the VRAIN team who was also involved in this work.

Sensitivity to the problem statement

Does the way a question is phrased matter more when the question is difficult? This is another issue addressed by the UPV, ValgrAI and Cambridge study, which concludes that even as language models improve and understand a greater variety of prompts, users cannot stop worrying about phrasing their requests effectively. ‘We have found that users can be influenced by prompts that work well on complex tasks but, at the same time, produce incorrect answers on simple tasks,’ adds Cèsar Ferri, co-author of the study and researcher at VRAIN UPV and ValgrAI.

Human supervision unable to compensate for these problems

In addition to these findings on aspects of the unreliability of language models, the researchers discovered that human supervision cannot compensate for these problems. For example, people can recognise highly difficult tasks but still frequently judge incorrect results to be correct in this range, even when allowed to say ‘I'm not sure’, indicating overconfidence.

From ChatGPT to LLaMA and BLOOM

The results were similar across multiple families of language models, including OpenAI's GPT family, Meta's open-weight LLaMA, and BLOOM, a fully open initiative from the scientific community.

The researchers further found that the problems of difficulty mismatch, lack of proper abstention, and prompt sensitivity persist in new versions of popular model families, such as OpenAI's o1 and Anthropic's Claude-3.5-Sonnet.

'Ultimately, large language models are becoming increasingly unreliable from a human point of view, and user supervision to correct errors is not the solution, as we tend to rely too much on models and cannot recognise incorrect results at different difficulty levels. Therefore, a fundamental change is needed in the design and development of general-purpose AI, especially for high-risk applications, where predicting the performance of language models and detecting their errors is paramount,' concludes Wout Schellaert, a researcher at the VRAIN UPV Institute.

Zhou, L., Schellaert, W., Martínez-Plumed, F. et al. Larger and more instructable language models become less reliable. Nature (2024). https://doi.org/10.1038/s41586-024-07930-y
