
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move aimed at strengthening transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust authority has opened an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations are instances in which an AI model produces inaccurate, misleading, or fabricated information in response to user prompts.
According to a report by Reuters, the authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), is examining whether DeepSeek’s business practices violate Italy’s transparency rules, which require companies to provide consumers with clear and accurate information about the risks and limitations of AI-generated content.
DeepSeek, whose models specialize in generating text-based content, is accused of not adequately informing users about the potential for hallucinations in that content. Such errors can arise when models are trained on biased or incomplete data, or when a model extrapolates beyond what it has learned, producing text that sounds plausible but is inaccurate or misleading.
The investigation is a significant development in the ongoing debate over the use of AI in content creation, and it underscores the expectation that AI systems be designed and deployed with accuracy, reliability, and transparency in mind.
The potential consequences of the investigation are significant. If found to have violated Italy’s transparency rules, DeepSeek could be fined up to €20 million or 4% of its global annual turnover, a substantial blow to a company that has been expanding its operations in recent years.
The investigation is not the first time DeepSeek has faced regulatory scrutiny in Italy. In early 2025, the country’s data protection authority ordered the company’s chatbot blocked in Italy after judging its explanations of how it handles users’ personal data to be insufficient.
The risks associated with AI-generated content are not limited to inaccuracies and hallucinations. There are also concerns that AI systems can amplify biases in their training data, for example by perpetuating gender or racial stereotypes and reinforcing existing social inequalities.
In recent years, there have been several high-profile cases of AI-generated content being used to spread misinformation and disinformation. For example, in 2020, a fake news article generated by an AI system was published in a reputable news outlet, causing widespread confusion and concern.
In response to the investigation, DeepSeek has said it is cooperating fully with the authorities and is committed to the integrity and accuracy of its AI-generated content. The company says it is also taking steps to improve transparency, including giving users clearer information about the risks and limitations of its output.
The case is likely to have significant implications for the AI industry as a whole, signaling that regulators expect companies deploying generative models to be upfront with users about how accurate and reliable their output is.
As the use of AI in content creation continues to grow, companies will face increasing pressure to inform users about the potential risks and limitations of AI-generated content. The DeepSeek investigation is a reminder of the importance of this issue and of the growing scrutiny and oversight the industry can expect.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/