
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move to ensure transparency and accountability in the use of artificial intelligence (AI), Italy's antitrust body has launched an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of "hallucinations" in its AI-generated content. Hallucinations are cases in which an AI model produces inaccurate, misleading, or outright fabricated information in response to user inputs. If the regulator finds a breach, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover.
The investigation follows the regulator's review of DeepSeek's business practices and content-generation processes. The Autorità Garante della Concorrenza e del Mercato (AGCM) contends that DeepSeek's failure to disclose the risk of hallucinations in its AI-generated content may have harmed and misled consumers.
DeepSeek's AI models are designed to generate human-like text in response to user prompts. While the company says it uses advanced algorithms and machine-learning techniques to produce accurate and relevant content, the AGCM found that this is not always the case: in some instances, the models have generated inaccurate, misleading, or fabricated information.
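For readers unfamiliar with how such models are consumed, the sketch below shows a typical chat-completion request of the kind at issue. It assumes DeepSeek's publicly documented OpenAI-compatible endpoint and the deepseek-chat model name; treat both as illustrative assumptions rather than verified details.

```python
# Minimal sketch of querying a chat-completion API such as DeepSeek's.
# The endpoint and model name below are assumptions based on DeepSeek's
# public documentation of an OpenAI-compatible API; they are illustrative.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder credential
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarise today's financial news."},
    ],
)

# The returned text is generated, not retrieved: nothing in the response
# itself marks which statements are grounded in fact, which is the gap
# the AGCM's disclosure complaint targets.
print(response.choices[0].message.content)
```

As the comments note, the response carries no built-in signal separating factual statements from plausible-sounding fabrications, which is why the regulator focuses on user-facing warnings.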
The AGCM accuses DeepSeek of violating Italian transparency laws by failing to adequately disclose the risk of hallucinations in its AI-generated content. In the regulator's view, users have a right to know that an AI model can produce inaccurate or misleading output; without that warning, they cannot judge how much to trust what they read.
DeepSeek has denied wrongdoing, saying its AI models are designed to produce accurate and relevant content and that its content-generation processes are transparent and accountable. The company calls the AGCM's allegations unfounded.
Hallucinations are an inherent risk of generative AI. They can arise when a model is trained on biased or incomplete data, and because these systems predict plausible-sounding text rather than retrieve verified facts, they can produce fluent output that is not grounded in reality.
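By way of illustration only, the sketch below shows one way an application could surface that risk to users at the point of use, which is the type of disclosure the AGCM says was missing. The wrapper, notice text, and names are hypothetical; nothing here reflects DeepSeek's actual product or any remedy the regulator has prescribed.

```python
# Hypothetical sketch: pairing AI output with a hallucination disclosure.
# All names and wording are invented for illustration.
from dataclasses import dataclass

HALLUCINATION_NOTICE = (
    "AI-generated content may contain inaccurate, misleading, or "
    "fabricated information. Verify important facts independently."
)

@dataclass
class DisclosedResponse:
    text: str    # the model's raw output
    notice: str  # the transparency warning shown alongside it

def with_disclosure(model_output: str) -> DisclosedResponse:
    """Pair raw model output with a user-facing risk warning."""
    return DisclosedResponse(text=model_output, notice=HALLUCINATION_NOTICE)

if __name__ == "__main__":
    reply = with_disclosure("The Treaty of Rome was signed in 1957.")
    print(reply.notice)
    print(reply.text)
```

The design point is simply that the warning travels with every response rather than being buried in terms of service, so the user sees it at the moment the content is consumed.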
The AGCM's investigation marks a significant step in the effort to ensure transparency and accountability in the use of AI. It signals the regulator's commitment to protecting consumer interests and holding companies accountable for how they present their products. As AI adoption continues to grow, companies will need to be transparent about the limitations of their systems so that consumers are protected from harm and confusion.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/