AI models overestimate smartness of people: Study
Artificial intelligence (AI) has made tremendous progress in recent years, with models like ChatGPT and Claude demonstrating impressive capabilities in understanding and generating human-like language. However, a new study by scientists at HSE University suggests that these models may overestimate how smart people are. The researchers found that when playing strategic thinking games, AI models tend to play “too smart” and lose because they assume a higher level of logic in people than is actually present.
The study used the Keynesian beauty contest, a classic game-theory experiment, and revealed that AI models like ChatGPT and Claude often make incorrect assumptions about human behavior. In the Keynesian beauty contest, each player chooses a number between 0 and 100, and the winner is the player whose number is closest to two-thirds of the average of all the numbers chosen. Winning therefore requires anticipating how deeply the other players will reason.
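The reasoning the game rewards can be illustrated with a simple level-k sketch, a standard model from behavioral game theory (not code from the study itself): a level-0 player guesses at random, averaging 50, and each deeper level of reasoning best-responds by taking two-thirds of the level below.

```python
# Level-k reasoning in the two-thirds beauty contest (illustrative sketch,
# not taken from the study). A level-0 player guesses randomly, averaging 50;
# each higher level best-responds with two-thirds of the level below it.

def level_k_guess(k: int, level0_mean: float = 50.0) -> float:
    """Guess of a level-k reasoner: (2/3)^k times the level-0 average."""
    guess = level0_mean
    for _ in range(k):
        guess *= 2 / 3
    return guess

for k in range(6):
    print(f"level {k}: {level_k_guess(k):.1f}")
# Deeper reasoning pushes the guess toward 0, the game's Nash equilibrium:
# 50.0, 33.3, 22.2, 14.8, 9.9, 6.6
```

Human experiments typically produce averages consistent with only one or two levels of reasoning, whereas a perfectly rational player iterates all the way down to 0.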
The researchers found that AI models, which are designed to optimize their play, often choose numbers that are too low, assuming that human players will also reason their way down to low numbers. In reality, human players tend to choose higher numbers than the models expect. This mismatch between the AI’s expectations and actual human behavior leads the AI models to lose the game.
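The mismatch can be sketched numerically. The figures below are illustrative assumptions, not data from the study: a hypothetical pool of human players averaging 35, roughly in line with classic beauty-contest experiments, faces an AI that guesses either near the equilibrium or near the human-calibrated answer.

```python
def target(guesses):
    """The winning target: two-thirds of the average of all guesses."""
    return 2 / 3 * sum(guesses) / len(guesses)

# Hypothetical human guesses averaging 35 (illustrative, not study data).
humans = [50, 40, 35, 30, 20]

for ai_guess in (5.0, 22.0):  # near-equilibrium vs. human-calibrated guess
    pool = humans + [ai_guess]
    t = target(pool)
    print(f"AI guesses {ai_guess}: target {t:.2f}, AI is off by {abs(ai_guess - t):.2f}")
# AI guesses 5.0: target 20.00, AI is off by 15.00
# AI guesses 22.0: target 21.89, AI is off by 0.11
```

The near-equilibrium guess of 5 misses the target badly, while the guess calibrated to how people actually play lands almost on it, which is the study’s point: the winning move depends on real human behavior, not on how a perfectly logical opponent would play.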
The study’s findings have significant implications for the development of AI models. If AI models are overestimating the smartness of people, they may not be able to interact effectively with humans in real-world situations. For example, in a business setting, an AI model that assumes a higher level of logic in human decision-making may make incorrect predictions about market trends or consumer behavior.
The researchers suggest that AI models need to be designed to take into account the limitations of human cognition and the complexities of human behavior. This may involve incorporating more nuanced models of human decision-making, such as those that account for cognitive biases and emotions.
The study’s findings also raise questions about the potential risks of relying too heavily on AI models in decision-making. If AI models are overestimating the smartness of people, they may make decisions that are not in the best interests of humans. For example, an AI model that assumes a higher level of logic in human behavior may recommend policies or interventions that are not effective in practice.
In conclusion, the study by scientists at HSE University highlights the importance of developing AI models that make more realistic assumptions about human behavior. Models that account for the limits of human reasoning can interact more effectively with people and make more accurate predictions about real-world outcomes.
The study’s findings are a reminder that AI models are not perfect and that they can be improved by incorporating more nuanced models of human behavior. As AI continues to play a larger role in our lives, it is essential to develop models that are transparent, explainable, and accountable. By doing so, we can ensure that AI is used in ways that benefit humanity and do not perpetuate biases or errors.
For more information on this study, please visit: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470