
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move that underscores growing concern over the reliability of artificial intelligence (AI) generated content, Italy’s competition and consumer watchdog has opened an investigation into Chinese AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-produced content. Hallucinations are outputs in which an AI model generates inaccurate, misleading, or outright fabricated information in response to user prompts; a chatbot may, for example, confidently cite a study, statistic, or court ruling that does not exist.
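The kind of disclosure at issue is not technically difficult to provide. The following minimal Python sketch shows how a chat provider might attach a hallucination warning to every generated response; the generate_reply function and the warning text are hypothetical illustrations, not DeepSeek’s actual implementation or API:

```python
# Hypothetical sketch: attaching a hallucination-risk notice to chatbot
# output. generate_reply stands in for any LLM text-generation call; it
# is an assumption for illustration, not DeepSeek's real interface.

HALLUCINATION_NOTICE = (
    "Note: AI-generated answers can contain inaccurate or fabricated "
    "information. Verify important facts against reliable sources."
)

def generate_reply(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an HTTP request to an
    # inference endpoint). Returns canned text so the sketch is runnable.
    return f"Model answer to: {prompt!r}"

def answer_with_disclosure(prompt: str) -> str:
    """Return the model's reply with a clearly visible risk warning appended."""
    reply = generate_reply(prompt)
    return f"{reply}\n\n{HALLUCINATION_NOTICE}"

if __name__ == "__main__":
    print(answer_with_disclosure("Who won the 1987 Turin marathon?"))
```

Regulators’ complaints in cases like this one typically concern whether such a notice is shown clearly and prominently, not whether it is feasible to build.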
According to Reuters, the case began with a complaint from a consumer group, which accused DeepSeek of breaching Italian transparency rules by failing to give users adequate information about the risks associated with its AI-generated content.
DeepSeek develops large language models and a consumer chatbot that analyze user prompts and generate text responses. While the company’s technology has been praised for producing fluent answers quickly and cheaply, concerns have grown over the accuracy and reliability of the information its models generate.
The investigation is being conducted by the Autorità Garante della Concorrenza e del Mercato (AGCM). Although the AGCM is best known as Italy’s antitrust authority, this probe falls under its consumer-protection remit, where unfair commercial practices can draw fines of up to €10 million under Italy’s consumer code. If the allegations are upheld, DeepSeek could also be required to take corrective action to ensure that its users are adequately informed about the risks associated with its AI-generated content.
The AGCM’s investigation is the latest in a series of moves by regulators around the world to address concerns over AI-generated content. Last year, the European Data Protection Supervisor (EDPS), the EU’s data protection watchdog, issued guidance on the use of generative AI by EU institutions, stressing the need for transparency, accountability, and security.
DeepSeek’s alleged failure to warn users about hallucination risk matters because it highlights the potential for AI-generated content to spread misinformation and manipulate public opinion. Hallucinations arise because language models predict plausible-sounding text rather than verified facts; they become more likely when a model is trained on biased or incomplete data, or when it is tuned to optimize for engagement rather than accuracy.
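The mechanics can be illustrated in a few lines. A language model scores possible continuations of a prompt and samples one by probability, with no built-in check that the result is true. In the toy Python sketch below, every token and weight is invented for illustration and has nothing to do with DeepSeek’s models:

```python
import random

# Toy illustration: a language model scores candidate continuations and
# samples one by probability. Fluency is rewarded; factual accuracy is
# never checked. All continuations and weights here are invented.

continuations = {
    "in 1969.": 0.40,                 # correct
    "in 1968.": 0.35,                 # plausible but wrong: a "hallucination"
    "by Soviet cosmonauts.": 0.25,    # fluent and entirely fabricated
}

prompt = "The first crewed Moon landing happened"
choice = random.choices(
    population=list(continuations),
    weights=list(continuations.values()),
)[0]

# Wrong answers are sampled with meaningful probability, yet each output
# reads as a confident, well-formed sentence.
print(prompt, choice)
```

Because every sampled continuation reads equally fluently, a user has no way to tell a fabricated answer from a correct one, which is precisely why regulators want explicit warnings.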
In a statement, DeepSeek denied any wrongdoing and emphasized its commitment to transparency and accuracy in its AI-generated content. “We take all allegations of non-compliance with our commitments seriously and are cooperating fully with the Italian authorities,” the company said. “We are confident that our AI-generated content meets the highest standards of quality and accuracy.”
The probe is a significant development in the ongoing debate over AI-generated content. As AI systems become more deeply woven into daily life, regulators and companies alike will need to prioritize transparency, accountability, and security if users are to trust what these systems produce.
The DeepSeek case also serves as a warning to other companies deploying generative AI: as such content becomes increasingly prevalent, firms that fail to be transparent about its limits, and about the risk of hallucinations in particular, may find regulators stepping in on users’ behalf.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/