
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move aimed at promoting transparency and accountability in the artificial intelligence (AI) industry, Italy’s antitrust body has launched an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations are instances where an AI model produces inaccurate, misleading, or outright fabricated information in response to user inputs; a chatbot confidently citing a nonexistent court case or inventing statistics, for example.
According to a report by Reuters, the Italian antitrust authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), has opened an investigation into DeepSeek’s activities in Italy, citing concerns that the company may have breached Italian consumer-protection rules on transparency by failing to disclose the risk of hallucinations in its AI-generated content.
The investigation was prompted by a complaint from a consumer group, which accused DeepSeek of serving users misleading and inaccurate AI-generated information. The AGCM has given DeepSeek 30 days to respond to the allegations and explain its practices.
If found in breach, DeepSeek could face fines of up to €20 million or 4% of its global annual turnover, whichever is higher. The company has not commented publicly on the investigation but is understood to be cooperating with the AGCM.
DeepSeek’s AI technology uses natural language processing and machine learning algorithms to generate text-based content, such as news articles, social media posts, and product descriptions. While the company’s technology has been touted as a game-changer in the AI industry, concerns have been raised about its potential to spread misinformation and disinformation.
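By way of illustration, the sketch below shows how a client application might call an OpenAI-compatible chat endpoint (DeepSeek publishes one at api.deepseek.com) and append exactly the kind of user-facing hallucination warning regulators are asking for. This is a minimal sketch, not DeepSeek’s own implementation: the environment variable name, the disclaimer wording, and the example prompt are assumptions made for the example, not details drawn from the investigation.

```python
# Minimal sketch: query an OpenAI-compatible chat endpoint and attach a
# user-facing hallucination warning. The base_url and model name follow
# DeepSeek's public API documentation; everything else is illustrative.
import os
from openai import OpenAI  # pip install openai

DISCLAIMER = (
    "Note: this answer was generated by an AI model and may contain "
    "inaccurate or fabricated information (hallucinations). Verify "
    "important facts against primary sources."
)

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical env var for this sketch
    base_url="https://api.deepseek.com",     # DeepSeek's OpenAI-compatible endpoint
)

def answer_with_warning(question: str) -> str:
    """Return the model's reply with a hallucination disclaimer appended."""
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": question}],
    )
    return f"{response.choices[0].message.content}\n\n{DISCLAIMER}"

if __name__ == "__main__":
    print(answer_with_warning("Summarize today's tech news."))
```

Appending the notice at the application layer, rather than relying on the model itself, keeps the warning visible to users regardless of which model generates the text.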
Hallucinations in particular have raised red flags among regulators and experts, who worry that AI-generated content could be used to spread false information or propaganda. In January, Thierry Breton, then the European Union’s internal market commissioner, warned that AI-generated disinformation could have serious consequences for the stability of democratic societies.
The Italian investigation into DeepSeek is the latest in a series of regulatory actions targeting the AI industry. In recent years, regulators around the world have moved to address the risks of AI-generated content, including the spread of misinformation and disinformation.
In the United States, for example, the Federal Trade Commission (FTC) has taken action against several companies that used AI technology to present misleading and inaccurate information to consumers. In one notable case, the FTC fined a company that used AI-powered chatbots to generate fake product reviews.
In Europe, the European Commission has introduced a range of measures aimed at promoting transparency and accountability in the AI industry, including the Artificial Intelligence Act, which imposes transparency obligations on AI providers, such as giving users clear information about how their systems work and disclosing when content is AI-generated.
The DeepSeek investigation is significant in this context because it signals that transparency and accountability are no longer optional for AI firms operating in Europe. As AI systems grow more capable and more widely deployed, regulators and industry players will need to work together to ensure the technology is used responsibly.
In short, the AGCM’s probe into DeepSeek over its alleged failure to warn users about hallucination risks underscores the growing importance of regulatory oversight in promoting responsible and ethical AI practices.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/