
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust authority has launched an investigation into China-based AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations occur when an AI model produces inaccurate, misleading, or fabricated information in response to user prompts; a model might, for example, confidently cite a study that does not exist or attribute an invented quote to a real person. The probe has renewed concerns about transparency and accountability in AI-powered content generation and strengthened calls for clearer rules to protect users.
According to reports, the Italian antitrust authority, AGCM, has accused DeepSeek of violating the country’s transparency rules by not adequately informing users about the risk of hallucinations in its AI-generated content. The company’s model is designed to produce human-like text from user prompts, but the authority contends that users are not clearly told when that output may be inaccurate or misleading.
The investigation reportedly followed a series of complaints citing hallucinations in DeepSeek’s output, including false statements about historical events and scientific facts, and fabricated quotes attributed to prominent figures. Such errors are especially harmful where accuracy and reliability are critical, as in academic research, journalism, and decision-making.
DeepSeek, a Hangzhou-based startup backed by the Chinese hedge fund High-Flyer, is accused of failing to provide users with adequate warnings about these risks.
The case has significant implications for the wider AI industry: it signals that regulators are prepared to treat inadequate disclosure of an AI system’s limitations as a transparency violation, and it adds momentum to calls for stricter user-protection rules.
If the violation is confirmed, DeepSeek reportedly faces a fine of up to €20 million or 4% of its global annual turnover, a penalty that underscores how seriously the Italian authorities are taking hallucinations in AI-generated content.
The probe has also drawn attention to the real-world harm hallucinations can cause. In academic research, fabricated facts can lead to flawed conclusions and undermine a study’s credibility; in journalism, they can spread misinformation and erode trust in the media.
In conclusion, the DeepSeek investigation highlights the need for stronger safeguards against hallucinations in AI-generated content. The AGCM’s action signals a commitment to transparency and accountability in the AI industry, and the potential penalties send a clear message to other companies about their obligations to users.
As the AI industry continues to grow and evolve, companies must prioritize transparency and accountability in AI-powered content generation. In practice, that means clearly warning users about the risk of hallucinations and taking concrete steps to verify the accuracy and reliability of generated content; the sketch below illustrates one simple form such a warning could take.
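The kind of disclosure at issue can be as lightweight as pairing every generated response with a standing notice. The Python sketch below is a hypothetical illustration only, not DeepSeek’s implementation or anything the AGCM has prescribed: generate_text is a stand-in for whatever model call an application actually makes, and the notice wording is an assumption.

```python
# Hypothetical sketch: attach a standing hallucination notice to model output.
# All names here are illustrative assumptions; generate_text is a placeholder
# for a real model call, not DeepSeek's actual API.

HALLUCINATION_NOTICE = (
    "AI-generated content may contain inaccurate, misleading, or "
    "fabricated information. Verify important facts independently."
)


def generate_text(prompt: str) -> str:
    """Placeholder for a real model call (e.g., an LLM API request)."""
    return f"[model output for: {prompt!r}]"


def generate_with_warning(prompt: str) -> dict:
    """Return model output paired with a user-facing disclaimer so a UI
    can display the warning alongside the generated content."""
    return {
        "content": generate_text(prompt),
        "notice": HALLUCINATION_NOTICE,
    }


if __name__ == "__main__":
    result = generate_with_warning("Summarize the moon landing.")
    print(result["notice"])
    print(result["content"])
```

Returning the notice as structured data, rather than appending it to the text itself, lets each downstream interface decide how prominently to display the warning.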
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/