
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust and consumer-protection authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), has opened an investigation into Chinese AI firm DeepSeek over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. The probe follows mounting concerns about the accuracy and reliability of the information DeepSeek’s models produce.
According to reports, DeepSeek’s AI model generates content in response to user prompts, but in some cases it produces inaccurate, misleading, or fabricated information. This phenomenon, known as “hallucination,” has serious implications for users who rely on AI-generated content for decision-making or other important purposes.
The AGCM opened the investigation under Italian consumer-protection rules, which require companies to clearly disclose the risks and potential biases of their AI systems. If found in breach, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover.
DeepSeek’s Hallucination Problem
Hallucinations in AI-generated content are a growing concern across the tech industry. AI models learn from large datasets and generate outputs based on the patterns and associations they detect, which can lead them to produce inaccurate or misleading information, particularly when the training data is biased or incomplete.
DeepSeek’s model is trained on a massive corpus of online content that contains both accurate and inaccurate information. Although it is designed to produce fluent, human-like text, its outputs are not always accurate or reliable.
An investigation by the Italian newspaper La Repubblica found that DeepSeek’s model generated content that was not only inaccurate but also biased: for example, it produced claims favorable to certain political parties or individuals even when there was no evidence to support them.
The implications of this are serious. Users who rely on AI-generated content for decision-making or other important purposes may be misled or misinformed, and in some cases that could lead to financial losses or other serious consequences.
The AGCM’s Investigation
The AGCM’s investigation into DeepSeek is a significant development for the company and the wider industry. As the authority responsible for enforcing these disclosure rules, the AGCM examines whether companies clearly communicate the risks and potential biases of the AI systems they offer.
The investigation will focus on whether DeepSeek met its disclosure obligations and, in particular, whether its failure to warn users about the risk of hallucinations amounts to a violation of the law. The agency will also examine the accuracy and reliability of the company’s AI model.
If found in breach, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover, a significant penalty for a company that has already drawn criticism for its lack of transparency and accountability.
What This Means for the Tech Industry
The AGCM’s investigation into DeepSeek has implications for the tech industry as a whole: it signals that regulators expect companies to be transparent about the limitations of their AI systems and accountable for how those systems are presented to users.
AI-generated content is becoming increasingly common, and companies like DeepSeek use AI models to generate content for a wide range of applications, from social media to financial analysis. As this investigation shows, that content carries risks, and companies must take steps to mitigate them.
The investigation also underscores the importance of regulatory oversight. As AI becomes more deeply integrated into daily life, it is essential that regulators take a proactive approach to ensuring that companies comply with transparency and accountability standards.
Conclusion
The AGCM’s investigation into DeepSeek sends a clear message to the industry: companies must be transparent and accountable about how their AI systems behave, and regulators are prepared to enforce those expectations.
As AI-generated content becomes more widespread, companies must take steps to mitigate the risks posed by hallucinations, including robust testing and validation processes to check that generated content is accurate and reliable.
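For illustration only, the following is a minimal sketch of what one such validation step could look like: a simple grounding check that flags generated sentences with no close match in a set of trusted source passages. The function name, similarity threshold, and example data are assumptions made for this sketch; they do not describe DeepSeek’s systems or anything the AGCM has required.

```python
# Hypothetical grounding check: flag generated sentences that do not
# closely match any trusted source passage, so a human can review them.
from difflib import SequenceMatcher


def flag_unsupported(generated_sentences, source_passages, threshold=0.6):
    """Return (sentence, best_score) pairs whose best similarity to any
    source passage falls below the threshold."""
    flagged = []
    for sentence in generated_sentences:
        best_score = max(
            (SequenceMatcher(None, sentence.lower(), passage.lower()).ratio()
             for passage in source_passages),
            default=0.0,
        )
        if best_score < threshold:
            flagged.append((sentence, round(best_score, 2)))
    return flagged


if __name__ == "__main__":
    sources = [
        "The regulator opened an investigation into the company in June 2025.",
        "Companies must clearly disclose known risks in AI-generated content.",
    ]
    generated = [
        "The regulator opened an investigation into the company in June 2025.",
        "The company was cleared of all charges last year.",  # unsupported claim
    ]
    for sentence, score in flag_unsupported(generated, sources):
        print(f"Needs review (similarity {score}): {sentence}")
```

A production system would rely on retrieval against source documents and human review rather than raw string similarity; the sketch only illustrates the idea of checking generated claims against known sources instead of merely disclaiming them.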
More regulatory scrutiny of AI companies can be expected in the future, and it is essential that they take proactive steps to comply with transparency and accountability standards.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/