AI models overestimate smartness of people: Study
In recent years, artificial intelligence (AI) has made tremendous progress in fields such as natural language processing, computer vision, and strategic reasoning. However, a new study by scientists at HSE University has revealed a surprising weakness in current AI models, including ChatGPT and Claude: they tend to overestimate the smartness of people, which leads to suboptimal performance in strategic thinking games.
The study, conducted using the Keynesian beauty contest, a game that rewards strategic thinking and logic, found that AI models often end up playing "too smart" and losing because they assume a deeper level of reasoning in people than is actually present. By overestimating human rationality, the models choose moves that would be optimal only against perfectly logical opponents and therefore perform poorly against real ones.
The Keynesian beauty contest takes its name from a newspaper-contest analogy described by economist John Maynard Keynes in the 1930s. In the version used in experiments, players are asked to choose a number between 0 and 100, with the goal of picking the number closest to two-thirds of the average of all players' choices. Winning therefore requires anticipating what others will choose, what they think others will choose, and so on, and adjusting one's own choice accordingly.
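To make that chain of reasoning concrete, the following sketch (not from the study; the starting guess of 50 and the "level-k" framing are illustrative assumptions) shows how repeatedly applying the "two-thirds of the average" logic drives a perfectly rational guess toward zero, the game's Nash equilibrium.

```python
# Illustrative sketch of iterated ("level-k") reasoning in the 2/3-of-the-average game.
# Assumption: a level-0 player guesses the midpoint, 50; each deeper level of
# reasoning best-responds by taking two-thirds of the previous level's guess.

def level_k_guess(k: int, level0_guess: float = 50.0, p: float = 2 / 3) -> float:
    """Guess of a player who performs k steps of 'two-thirds of the average' reasoning."""
    guess = level0_guess
    for _ in range(k):
        guess *= p
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.2f}")
# level-0: 50.00, level-1: 33.33, level-2: 22.22, ... -> the guess approaches 0,
# the Nash equilibrium reached when every player reasons infinitely many steps deep.
```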
The study used this game to test various AI models, including ChatGPT and Claude, against human players. The results showed that while the models could outperform humans in some cases, they often struggled when faced with unpredictable or irrational human behavior. The models' strategies implicitly assumed that human players would behave logically and rationally, which is frequently not the case.
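The mismatch can be illustrated with a small simulation (a hypothetical sketch, not the study's methodology): if human guesses cluster well above zero, an "equilibrium" player who guesses 0 ends up farther from two-thirds of the average than a player who anticipates only a couple of steps of reasoning. The distribution of human guesses below is an assumption, loosely based on the averages reported in classroom experiments with this game.

```python
# Hypothetical simulation of one round of the 2/3-of-the-average game.
# Assumption: human guesses follow a normal distribution centred on 35
# (roughly what experimental studies report), truncated to the 0-100 range.
import random

random.seed(0)
humans = [min(100.0, max(0.0, random.gauss(35, 15))) for _ in range(100)]

rational_guess = 0.0                # the Nash-equilibrium choice an over-"smart" model might make
level2_guess = 50 * (2 / 3) ** 2    # a player expecting only two steps of reasoning (~22.2)

for label, guess in [("equilibrium (0)", rational_guess), ("level-2 (~22.2)", level2_guess)]:
    pool = humans + [guess]
    target = (2 / 3) * (sum(pool) / len(pool))
    print(f"{label}: target = {target:.2f}, distance from target = {abs(guess - target):.2f}")
# With human-like guesses, the target lands in the low twenties, so the
# "perfectly rational" guess of 0 loses to the less clever level-2 guess.
```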
One reason AI models overestimate human intelligence is that they are trained on large datasets of human behavior, which can be skewed toward unusually skilled or rational behavior. For example, a model trained on chess games played by grandmasters may implicitly assume that all human players can play at a similar level, which is not the case, and its predictions about opponents will be systematically off.
Another reason is that AI models have a limited grasp of human emotions and cognitive biases. Human decision-making is often influenced by fear, greed, and anxiety, which can lead to choices that look irrational. AI models, by contrast, tend to reason from logic and probability without fully accounting for those emotions and biases.
The study's findings have significant implications for AI systems designed to interact with humans. Models used in customer service or financial trading, for example, may need to account for human emotions and cognitive biases in order to make more accurate predictions and decisions.
In conclusion, the HSE University study highlights the importance of understanding human behavior and cognition when developing AI models. While AI has made tremendous progress in recent years, it still has a long way to go in modeling how people actually think and behave.
The findings also raise important questions about the risks of relying on AI models that overestimate human rationality. If such a model is used to make decisions about financial investments or medical treatment, for example, it may make choices that are not in people's best interests simply because it assumes they are more rational or intelligent than they actually are.
Overall, the study is a timely reminder of the need for humility and caution when developing and deploying AI models. By recognizing the limitations of current systems and developing more nuanced, human-centered approaches, we can create AI that is more effective, beneficial, and safe for society as a whole.
News Source: https://www.sciencedirect.com/science/article/abs/pii/S0167268125004470