AI models overestimate smartness of people: Study
Artificial intelligence (AI) has made tremendous progress in recent years, with models like ChatGPT and Claude demonstrating impressive capabilities in understanding and generating human-like text. However, a new study by scientists at HSE University has revealed a significant flaw: these models tend to overestimate how smart people are. This overestimation can lead AI models to play “too smart” and lose strategic games, because they assume a higher level of logic in human opponents than is actually present.
The study, conducted using the Keynesian beauty contest, a classic game-theory puzzle, found that current AI models often make incorrect assumptions about how humans behave and make decisions. In the Keynesian beauty contest, each player picks a number between 0 and 100, and the winner is the player whose number comes closest to two-thirds of the average of all numbers chosen. The game rewards strategic thinking: players must anticipate how deeply everyone else will reason.
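To make the game concrete, here is a minimal Python sketch. The scoring rule and the level-k heuristic are standard game-theory constructs, not code from the study:

```python
# Keynesian beauty contest: the guess closest to 2/3 of the average wins.
# Illustrative sketch; not code from the HSE study.

def winning_index(guesses):
    """Return the index of the guess closest to 2/3 of the average guess."""
    target = (2 / 3) * (sum(guesses) / len(guesses))
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

def level_k_guess(k, anchor=50.0):
    """Level-0 players guess the midpoint (50); a level-k player
    best-responds to level-(k-1) opponents by multiplying by 2/3."""
    return anchor * (2 / 3) ** k

# Level-0 -> 50, level-1 -> ~33.3, level-2 -> ~22.2; as k grows,
# the guess converges to the Nash equilibrium of 0.
```

In a round among a level-0, a level-1, and a level-2 player, the deepest reasoner wins: `winning_index([50, 33.3, 22.2])` returns `2`.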
The researchers tested several AI models, including ChatGPT and Claude, on the Keynesian beauty contest, and the results were striking. Despite their advanced capabilities, the models consistently assumed that humans would play with more logic and strategic depth than they actually do. As a result, the AI models often ended up playing “too smart” and losing the game.
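A toy round (with assumed guesses, not the study’s data) shows why fully rational play backfires against shallower reasoners:

```python
# Why the "perfectly rational" equilibrium guess of 0 loses against
# human-depth opponents. The guesses below are assumed for illustration.

guesses = [33.3, 33.3, 22.2, 22.2, 0.0]  # four shallow humans, one equilibrium "AI"
target = (2 / 3) * (sum(guesses) / len(guesses))
distances = [abs(g - target) for g in guesses]
winner = distances.index(min(distances))

# The target lands near 14.8, so a depth-2 guess of 22.2 wins,
# while the equilibrium guess of 0 is almost as far off as 33.3.
```

The “AI” player who reasons all the way to the equilibrium overshoots the crowd; the player who reasons just two steps ahead of it takes the prize.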
The study’s findings have significant implications for the development of AI models and their real-world applications. If AI models overestimate how smart people are, they may fail to interact effectively with humans or to predict human behavior accurately, leading to mistakes in areas such as customer service, financial forecasting, and decision support.
The researchers suggest that the problem lies in the way AI models are trained and designed. Currently, AI models are often trained on large datasets of text and are designed to optimize performance on specific tasks. However, this approach can lead to a lack of understanding of human behavior and decision-making processes, which are often messy and illogical.
To address this issue, the researchers propose that AI models should be designed to take into account the limitations and biases of human cognition. This could involve incorporating more realistic models of human behavior and decision-making into AI systems, as well as using more diverse and representative training datasets.
The study’s findings also highlight the importance of testing AI models in real-world scenarios and evaluating their performance in a more nuanced and multifaceted way. Rather than simply measuring AI performance on specific tasks, researchers should consider the broader social and cultural context in which AI systems will be deployed.
In conclusion, the study by scientists at HSE University is a significant contribution to our understanding of AI models and their limitations. The finding that AI models overestimate the smartness of people has important implications for the development of AI systems and their potential applications in real-world scenarios. As AI continues to evolve and become more integrated into our daily lives, it is essential that we consider the potential flaws and biases of these systems and work to design more realistic and effective models of human behavior.
News Source: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470