AI models overestimate smartness of people: Study
Artificial intelligence (AI) has made tremendous progress in recent years, with models like ChatGPT and Claude demonstrating impressive capabilities in understanding and generating human-like text. However, a new study by scientists at HSE University suggests that these models may overestimate how rational people are. The researchers found that current AI models tend to play “too smart” in strategic thinking games, often losing because they assume a deeper level of logic in their human opponents than is actually present.
The study focused on the Keynesian beauty contest, a game that requires players to choose a number between 0 and 100, with the goal of getting as close as possible to two-thirds of the average number chosen by all players. This game is often used to test the ability of players to think strategically and anticipate the actions of others. The researchers used this game to evaluate the performance of various AI models, including ChatGPT and Claude, and compared their results to those of human players.
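The game's scoring rule is simple to state in code. The following sketch (with a hypothetical set of guesses, not data from the study) picks the winner of one round as the player whose guess is closest to two-thirds of the average:

```python
import statistics

def beauty_contest_winner(guesses):
    """Return the index of the guess closest to 2/3 of the mean of all guesses."""
    target = (2 / 3) * statistics.mean(guesses)
    return min(range(len(guesses)), key=lambda i: abs(guesses[i] - target))

# Hypothetical round with four players guessing in [0, 100]:
# 50 (no strategic reasoning), 33 (one step), 22 (two steps),
# and 0 (the fully rational equilibrium play).
guesses = [50, 33, 22, 0]
print(beauty_contest_winner(guesses))  # index 2: the guess of 22 wins
```

Note that in this toy round the "perfectly rational" guess of 0 loses: two-thirds of the average is pulled up by the less strategic players, which is exactly the dynamic the study examines.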
The findings of the study were striking: the AI models consistently played “too smart” and ended up losing. The models assumed that human players would think several steps ahead and anticipate the actions of others, but in reality, most humans reason only a step or two deep. As a result, the AI models chose numbers that were too low for the actual pool of players, and lost.
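The “steps ahead” idea is often formalized as level-k reasoning: a level-0 player picks a naive anchor (say, 50, the midpoint of the range), and each higher level best-responds by taking two-thirds of the level below. A minimal sketch of that iteration (an illustration of the standard level-k model, not the study's own code) shows why deeper reasoning drives the guess toward zero:

```python
def level_k_guess(k, anchor=50.0):
    """Guess of a level-k reasoner: apply the 2/3 factor k times to the anchor.

    Level 0 guesses the anchor itself; level k best-responds to level k-1.
    """
    guess = anchor
    for _ in range(k):
        guess *= 2 / 3
    return guess

for k in range(6):
    print(f"level {k}: {level_k_guess(k):.1f}")
# level 0: 50.0, level 1: 33.3, level 2: 22.2,
# level 3: 14.8, level 4: 9.9, level 5: 6.6
```

If most humans play at level 1 (around 33), the winning response is level 2 (around 22); an AI reasoning at level 5 or assuming full rationality (a guess near 0) out-thinks the room and loses.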
This overestimation of human smartness is a significant issue, as it can lead to AI models making poor decisions in a variety of situations. For example, in a business setting, an AI model may assume that customers are more rational and informed than they actually are, leading to marketing strategies that fail to resonate with the target audience. Similarly, in a healthcare setting, an AI model may assume that patients are more adherent to treatment plans than they actually are, leading to ineffective treatment strategies.
The study’s findings have significant implications for the development of AI models. Rather than trying to create models that are increasingly sophisticated and intelligent, researchers may need to focus on creating models that are more nuanced and able to understand the limitations of human thinking. This may involve incorporating more realistic assumptions about human behavior and decision-making into AI models, rather than relying on idealized notions of human rationality.
The researchers also noted that the overestimation of human smartness is not limited to AI models. Humans themselves often overestimate the smartness of others, and this can lead to poor decision-making in a variety of situations. For example, in a political setting, a candidate may assume that voters are more informed and engaged than they actually are, leading to campaign strategies that fail to resonate with the electorate.
In conclusion, the study by scientists at HSE University highlights the importance of understanding the limitations of human thinking and behavior when developing AI models. Rather than assuming that humans are highly rational and intelligent, AI models should be designed to take into account the complexities and nuances of human decision-making. By doing so, we can create AI models that are more effective and better able to interact with humans in a variety of contexts.
The study’s findings also have significant implications for our understanding of human behavior and decision-making. By recognizing that humans tend to be more simplistic in their thinking than we often assume, we can develop more effective strategies for communicating with and influencing others. Whether in a business, political, or social setting, understanding the limitations of human smartness can help us to make better decisions and achieve our goals more effectively.
Overall, the study provides a fascinating insight into the complex and often irrational nature of human behavior. While AI models can process vast amounts of information and make rapid calculations, they remain limited by their assumptions about how humans think. Recognizing those limitations, and building more realistic models of human behavior into AI systems, is what will ultimately make them better partners in real-world interactions.
News Source: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470