AI models overestimate smartness of people: Study
The rapid advancement of artificial intelligence (AI) has led to the development of sophisticated models that can simulate human-like conversations, play complex games, and even make decisions. However, a recent study by scientists at HSE University has revealed that these AI models, including popular ones like ChatGPT and Claude, tend to overestimate the smartness of people. This overestimation can lead to unexpected consequences, such as the models playing “too smart” and ultimately losing in strategic thinking games.
The study, built around the Keynesian beauty contest, a classic game-theory puzzle, found that AI models often assume a higher level of logic and rationality in humans than is actually present. This assumption can cause the models to make suboptimal decisions, as they try to outsmart opponents who are not as clever as the models believe.
The Keynesian beauty contest is a game in which players each choose a number between 0 and 100, with the goal of selecting the number closest to two-thirds of the average of all the numbers chosen. The game requires strategic thinking, as players need to anticipate what others will choose and adjust their own selection accordingly. In theory, iterated reasoning drives the answer toward zero: if everyone starts from the midpoint of 50, two-thirds of that is about 33; two-thirds of 33 is about 22; and so on, until the only stable choice is 0, the game’s Nash equilibrium. Following that chain all the way down, however, requires a high level of logical thinking and anticipation of others’ actions.
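The iterated reasoning behind the game can be sketched as a simple "level-k" calculation (a standard way of modeling such reasoning; the depth values below are illustrative, not taken from the study):

```python
# Level-k reasoning in the 2/3-average beauty contest.
# A level-0 player guesses the midpoint of the 0-100 range;
# each additional level of reasoning multiplies that guess by 2/3.

def level_k_guess(depth: int, baseline: float = 50.0) -> float:
    """Return the guess of a player who reasons `depth` levels deep."""
    return baseline * (2 / 3) ** depth

for depth in range(6):
    print(depth, round(level_k_guess(depth), 1))
# With unbounded depth, the guess converges to 0.
```

A player who expects everyone else to reason many levels deep will therefore pick a number near zero, which is exactly the trap the study describes.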
The scientists at HSE University tested several AI models, including ChatGPT and Claude, on the Keynesian beauty contest and found that the models consistently overestimated the smartness of human players. The models tended to choose numbers that were too low, assuming that human players would reason the game through to similarly low numbers, when in fact humans tend to choose higher ones. This overestimation cost the models the game, as they failed to adapt to the actual behavior of human players.
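A toy round illustrates why "too smart" loses (the guesses below are invented for illustration, not the study's data): if human players cluster around 30–40, the winning target lands in the low twenties, so a model that reasons its way down to a near-zero guess finishes far from it.

```python
# Toy beauty-contest round with assumed guesses (not data from the study).

def winner(guesses: dict) -> str:
    """Return the name of the player closest to 2/3 of the average guess."""
    target = (2 / 3) * (sum(guesses.values()) / len(guesses))
    return min(guesses, key=lambda name: abs(guesses[name] - target))

round_guesses = {"model": 1.0, "human_a": 30.0, "human_b": 35.0, "human_c": 40.0}
print(winner(round_guesses))  # → human_a
```

Here the average is 26.5, the target is about 17.7, and the human who guessed 30 wins; the model's "perfectly rational" guess of 1 is the furthest off.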
The study’s findings have significant implications for the development of AI models, particularly those designed for strategic thinking and decision-making. If AI models overestimate the smartness of people, they may make decisions that are not optimal, leading to subpar performance in real-world applications. For example, in business, AI models may overestimate the rationality of consumers, leading to marketing strategies that are not effective. In healthcare, AI models may overestimate the ability of patients to follow complex treatment plans, leading to poor health outcomes.
The study also highlights the importance of understanding human behavior and decision-making in the development of AI models. While AI models can simulate human-like conversations and play complex games, they are still far from truly understanding human psychology and behavior. The development of more realistic models of human behavior is crucial for creating AI models that can interact effectively with humans and make optimal decisions.
Furthermore, the study’s findings raise questions about the limitations of current AI models and their ability to truly understand human intelligence. If AI models overestimate the smartness of people, do they truly understand human intelligence, or are they simply using complex algorithms to simulate human-like behavior? The answer to this question has significant implications for the development of AI models and their potential applications in various fields.
In conclusion, the study by scientists at HSE University has revealed that AI models, including ChatGPT and Claude, tend to overestimate the smartness of people. This overestimation can lead to suboptimal decisions and poor performance in strategic thinking games. The study’s findings highlight the importance of understanding human behavior and decision-making in the development of AI models and raise questions about the limitations of current AI models and their ability to truly understand human intelligence.
To learn more about this study, you can read the full article at:
https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470