
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust body has opened an investigation into Chinese AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content, a move aimed at ensuring transparency and accountability in the use of artificial intelligence (AI). According to reports, DeepSeek’s AI models may generate inaccurate, misleading, or fabricated information in response to user inputs, a phenomenon commonly referred to as “hallucination”.
The investigation, opened on June 16, 2025, is being conducted by Italy’s Competition and Market Authority (AGCM), the regulator responsible for enforcing competition law and protecting consumers from unfair business practices. DeepSeek, a Chinese AI firm based in Hangzhou, is accused of violating Italy’s transparency rules by failing to adequately inform users about the potential risks associated with its AI-generated content.
Under Italy’s transparency law, businesses are required to provide clear and accurate information to their users about the capabilities and limitations of their products and services. The law is designed to protect consumers from harm and ensure that they are fully informed about the products they use.
DeepSeek’s AI models are designed to generate human-like responses to user inputs, but the company is accused of failing to adequately test and validate those models for accuracy and reliability. As a result, users may receive misleading or fabricated information, which could have serious consequences for those who rely on it.
The investigation into DeepSeek is the latest in a series of regulatory actions aimed at ensuring the responsible development and use of AI. In recent years, concern has grown over the risks AI poses, including biased or discriminatory outputs, false or misleading content, and the erosion of trust in institutions and decision-makers.
If found to have violated Italy’s transparency rules, DeepSeek could face fines of up to €20 million or 4% of its global annual turnover, whichever is higher, a penalty intended to deter similar violations by other companies.
The case is also a reminder that companies must prioritize transparency and accountability in their use of AI. As businesses increasingly use AI to generate content, make decisions, and interact with customers, they need to be clear about the capabilities and limitations of their models and take steps to ensure that the information those models produce is accurate and reliable. While AI has the potential to transform many industries and improve efficiency and productivity, it poses significant risks if not used responsibly.
In conclusion, the DeepSeek probe is a significant development in the ongoing debate about the responsible development and use of AI, underscoring that transparency and accountability are not optional extras but core obligations for companies deploying the technology.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/