AI models overestimate how smart people are: Study
Artificial intelligence (AI) has made tremendous progress in recent years, with models like ChatGPT and Claude demonstrating impressive capabilities in understanding and generating human-like language. However, a new study by scientists at HSE University has revealed a surprising flaw: these models tend to overestimate how smart people are. This overestimation can lead AI models to play “too smart” and lose at games of strategic thinking, because they assume a higher level of logic in humans than is actually present.
The study was conducted using the Keynesian beauty contest, a classic game theory experiment, and found that current AI models are prone to overestimating human intelligence. In the Keynesian beauty contest, each participant chooses a number between 0 and 100, and the winner is the player whose choice is closest to two-thirds of the average of all numbers chosen. The game requires strategic thinking: players must anticipate what others will choose and adjust their own choice accordingly.
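To make the rules concrete, here is a minimal Python sketch of a single round; the player names and guesses are illustrative assumptions, not data from the study.

```python
import statistics

def beauty_contest_winner(guesses: dict[str, float]) -> tuple[str, float]:
    """Return the player whose guess is closest to 2/3 of the mean guess."""
    target = (2 / 3) * statistics.mean(guesses.values())
    winner = min(guesses, key=lambda player: abs(guesses[player] - target))
    return winner, target

# Illustrative round: three human-style guesses and one hyper-rational one.
guesses = {
    "human_a": 50,  # naive midpoint of the 0-100 range
    "human_b": 35,  # reasons roughly one step ahead
    "human_c": 40,
    "ai": 1,        # assumes everyone reasons all the way to equilibrium
}

winner, target = beauty_contest_winner(guesses)
print(f"target = {target:.2f}, winner = {winner}")
# target = 21.00, winner = human_b -- the near-equilibrium guess of 1 loses
```

Even this toy round shows the mechanism: the lowest, most “rational” guess ends up farther from the target than a mid-range one.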
The researchers tested several AI models, including ChatGPT and Claude, on the Keynesian beauty contest, and the results were telling. The models consistently chose numbers that were too low: crediting their human opponents with many steps of reasoning, they picked guesses near the game’s theoretical equilibrium of zero, while the actual two-thirds-of-average target, set by less methodical human play, sat well above them. This overestimation of human reasoning cost the AI models the game.
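This pattern matches the standard level-k analysis of the game, offered here as textbook context rather than as the study’s own method: a level-0 player anchors at 50, a level-1 player best-responds with two-thirds of that (about 33), level-2 with about 22, and so on toward the Nash equilibrium of 0. A minimal sketch:

```python
def level_k_guess(k: int, anchor: float = 50.0) -> float:
    """Guess of a level-k player: the best response to opponents who
    reason k-1 levels deep, starting from a level-0 anchor of 50."""
    return anchor * (2 / 3) ** k

for k in range(6):
    print(f"level-{k}: {level_k_guess(k):5.2f}")
# level-0: 50.00  level-1: 33.33  level-2: 22.22
# level-3: 14.81  level-4:  9.88  level-5:  6.58
```

First-time human groups are typically reported to land between level-1 and level-2, so a model that reasons many levels deep undershoots the target.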
The study’s findings have significant implications for the development of AI models. To be effective in real-world applications, AI models need an accurate picture of human behavior and decision-making. That current models overestimate human intelligence suggests they are still far from a truly human-like understanding of the people they interact with.
One possible explanation for this overestimation is that AI models are trained on data skewed towards higher levels of human intelligence. Many models learn from expert opinions, academic papers, and high-level discussions, which do not necessarily represent how the average person thinks. As a result, the models may develop an inflated view of human reasoning and carry it into real-world interactions.
Another possible explanation is that AI models fail to account for human cognitive biases and heuristics. Humans often rely on mental shortcuts to make decisions, which can lead to suboptimal outcomes, whereas AI models are designed to optimize outcomes according to rational decision-making principles. A model that does not allow for these biases will credit its opponents with more rationality than they show and end up making suboptimal decisions of its own.
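One standard way to encode such bounded rationality, a textbook cognitive-hierarchy idea rather than the study’s method, is to best-respond to an assumed distribution over reasoning levels instead of to a fully rational population; the weights below are illustrative:

```python
def level_k_guess(k: int, anchor: float = 50.0) -> float:
    return anchor * (2 / 3) ** k

def best_response(level_weights: list[float], anchor: float = 50.0) -> float:
    """Guess 2/3 of the average implied by a believed mix of reasoning levels."""
    total = sum(level_weights)
    expected_avg = sum(
        w * level_k_guess(k, anchor) for k, w in enumerate(level_weights)
    ) / total
    return (2 / 3) * expected_avg

# Believing most opponents reason zero or one step (weights for levels 0-2):
print(f"{best_response([0.4, 0.4, 0.2]):.2f}")  # 25.19
# Concentrating the weights on deeper levels pushes the guess toward 0 --
# the miscalibration the study attributes to current models.
```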
The findings also bear on the development of more advanced AI models. To understand human behavior and decision-making accurately, researchers need training datasets that reflect the complexities of actual human thinking. This may mean incorporating more diverse data, including data that captures human cognitive biases and heuristics, or using training methods that account for human irrationality.
In conclusion, the HSE University study highlights an important flaw in current AI models: by assuming more logic in people than is actually present, they play “too smart” and lose at strategic games. Developing more effective models will require training data and methods that capture human cognitive biases and heuristics, so that AI can read human behavior and decision-making accurately enough for real-world applications.
The findings are a reminder that AI models remain far from a truly human-like understanding of people, and that much work lies ahead. As AI becomes more integrated into our daily lives, addressing flaws like this one will be essential to building more capable and better-calibrated models.
News Source: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470