AI models overestimate how smart people are: Study
Artificial intelligence (AI) has made tremendous progress in recent years, with models like ChatGPT and Claude demonstrating impressive capabilities in understanding and generating human-like text. However, a new study by scientists at HSE University finds that these models may be overestimating how smart people are. Specifically, the study reveals that current AI models tend to play “too smart” in strategic thinking games, crediting people with more logic than they actually use.
The study, conducted using the Keynesian beauty contest, a classic game theory experiment, found that AI models like ChatGPT and Claude often lose because they overestimate the strategic thinking of their human opponents. In the Keynesian beauty contest, each player chooses a number between 0 and 100, aiming to pick the number closest to two-thirds of the average of all choices. The game requires strategic thinking: each player must anticipate what others will choose and adjust their own choice accordingly.
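To make the rules concrete, here is a minimal sketch of one round of the game in Python. It is purely illustrative: the player guesses are arbitrary and are not taken from the study.

```python
def beauty_contest(guesses, fraction=2 / 3):
    """Score one round of the Keynesian beauty contest.

    Each player submits a guess in [0, 100]; the target is `fraction`
    times the average guess, and the guess closest to the target wins.
    """
    target = fraction * (sum(guesses) / len(guesses))
    winner = min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))
    return target, winner

# Hypothetical example round with four players and arbitrary guesses.
guesses = [50, 33, 22, 67]
target, winner = beauty_contest(guesses)
print(f"Target (2/3 of average): {target:.1f}, winning guess: {guesses[winner]}")
# Here the average is 43, the target is about 28.7, and 33 wins.
```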
The researchers found that AI models, which optimize their play based on assumptions about how other players behave, often miss the winning number because they assume humans will reason more rationally and strategically than they actually do. In a two-thirds-of-the-average game, over-crediting opponents' reasoning pushes a guess lower than the number that actually wins. In reality, humans are often influenced by cognitive biases, emotions, and other factors that lead to less-than-rational decisions.
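One common way to describe this mismatch is level-k reasoning: a level-0 player picks around 50, a level-1 player best-responds with two-thirds of 50 (about 33), a level-2 player picks about 22, and so on toward the Nash equilibrium of 0. The study's exact modelling may differ, but the illustrative sketch below shows how a guess based on many rounds of reasoning loses to a shallower one when most other players reason only a step or two; the population mix and all numbers here are assumptions for illustration, not the paper's data.

```python
def level_k_guess(k, anchor=50.0, fraction=2 / 3):
    """Guess of a hypothetical level-k reasoner: start from an anchor of 50
    and apply the two-thirds shrink once per level of reasoning."""
    return anchor * (fraction ** k)

# Illustrative population (not the study's data): mostly shallow reasoners
# plus one agent that reasons as if everyone else were highly strategic.
human_guesses = [level_k_guess(1)] * 6 + [level_k_guess(2)] * 3   # ~33.3 and ~22.2
ai_guess = level_k_guess(5)                                       # ~6.6

all_guesses = human_guesses + [ai_guess]
target = (2 / 3) * (sum(all_guesses) / len(all_guesses))
winning_guess = min(all_guesses, key=lambda g: abs(g - target))

print(f"AI guess: {ai_guess:.1f}, target: {target:.1f}, winning guess: {winning_guess:.1f}")
# The target lands near the shallow guesses (about 18 here), so the
# deep-reasoning guess of ~6.6 is far from it and loses.
```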
The study’s findings have significant implications for the development of AI models, particularly in areas like game theory, economics, and social sciences. If AI models are to be effective in interacting with humans, they need to take into account the limitations and biases of human decision-making. The researchers suggest that AI models should be designed to be more “human-like” in their assumptions about human behavior, rather than relying on overly rational or optimistic assumptions.
The study also highlights the importance of testing AI models against real people rather than only in simulated environments. The models performed significantly worse in the Keynesian beauty contest played against humans than in simulations, where they could optimize their strategies against the behavior they assumed humans would exhibit.
The study's significance goes beyond the development of AI models, however. The findings also bear on our understanding of human behavior and decision-making: people are not always rational or strategic, and cognitive biases and other factors play a significant role in shaping their choices. This matters for fields like policy-making, marketing, and education, where understanding human behavior is crucial.
In conclusion, the HSE University study offers valuable insight into the limitations of current AI models, particularly their assumptions about human behavior. The findings underline the need for AI models to adopt more realistic, “human-like” assumptions that account for the limitations and biases of human decision-making. As AI plays an increasingly important role in our lives, it is essential to develop models that can interact effectively with humans and that reflect the complexities and nuances of human behavior.
News Source: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470