
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move aimed at ensuring transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust body has launched an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-produced content. Hallucinations are instances in which an AI model generates inaccurate, misleading, or fabricated information in response to user prompts.
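To make the idea concrete, the toy sketch below is a general illustration of probabilistic text generation, not a description of DeepSeek’s systems, which the article does not detail. It shows how a model that scores the correct answer highest can still assign real probability to fluent but wrong answers, and how a sampling setting such as temperature shifts that balance.

```python
import math

def softmax(scores, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Lower temperatures sharpen the distribution; higher ones flatten it."""
    scaled = [s / temperature for s in scores]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word candidates after the prompt
# "The capital of Australia is". The correct answer scores highest,
# but plausible-sounding wrong answers still receive probability mass.
candidates = ["Canberra", "Sydney", "Melbourne", "Vienna"]
scores = [4.0, 2.5, 2.0, 0.5]

for temp in (0.5, 1.0, 2.0):
    probs = softmax(scores, temperature=temp)
    summary = ", ".join(f"{c}: {p:.2f}" for c, p in zip(candidates, probs))
    print(f"temperature={temp}: {summary}")
```

Running the snippet shows the incorrect candidates gaining probability as the temperature rises, which is one simple reason fluent output is not the same as accurate output.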
The investigation was prompted by concerns that DeepSeek’s AI systems may be producing content that is not trustworthy or reliable, potentially causing harm to users who rely on the information. The Italian antitrust body has ordered DeepSeek to provide detailed information about its AI systems, including how they are trained and tested, as well as any measures it has taken to prevent hallucinations.
DeepSeek is a leading provider of AI-powered content generation solutions, and its technology has been used by a wide range of companies and organizations. However, the company’s alleged failure to warn users about the risk of hallucinations has raised concerns about the potential consequences of relying on AI-generated content.
The antitrust body opened the investigation under the country’s transparency law, which requires companies to give users clear and accurate information about the risks and limitations of their AI systems. If found in breach, DeepSeek could be fined up to €20 million or 4% of its global annual turnover, whichever is higher.
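As a purely hypothetical sketch of what such a user-facing disclosure could look like in practice (the wording, constant, and function name below are illustrative assumptions, not anything DeepSeek or the regulator has published), a service might attach a standing risk notice to every piece of generated text before it reaches the user:

```python
# Hypothetical illustration only: the notice text and function name are
# assumptions for this sketch, not DeepSeek's implementation or the
# regulator's required wording.

HALLUCINATION_NOTICE = (
    "Notice: this text was generated by an AI model and may contain "
    "inaccurate, misleading, or fabricated information. Verify important "
    "facts against reliable sources."
)

def deliver_with_notice(generated_text: str) -> str:
    """Return AI-generated text with a clearly separated risk notice appended."""
    return f"{generated_text}\n\n{HALLUCINATION_NOTICE}"

if __name__ == "__main__":
    print(deliver_with_notice("Sample AI-generated answer goes here."))
```

A notice like this does not prevent hallucinations; it simply gives users the kind of plainly worded risk information that the investigation suggests was missing.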
The investigation is seen as a significant step towards ensuring that AI systems are used responsibly and ethically and that users are protected from the risks of hallucinations. It also highlights the need for greater transparency and accountability around AI, particularly where the technology generates content that users treat as trustworthy.
The concept of hallucinations in AI-generated content is not new, but it has become increasingly relevant in recent years as the technology has become more widespread. Hallucinations can occur when an AI model is trained on biased or incomplete data, or when it is designed to generate content that is optimized for engagement rather than accuracy.
In DeepSeek’s case, the AI systems are designed to generate content quickly and at scale, drawing on large datasets and complex models. That approach can produce hallucinations, particularly when the training data is biased or incomplete.
The probe also serves as a warning to other companies that use AI-powered content generation: they are expected to be transparent about the risks and limitations of their technology, and they can expect closer regulatory oversight wherever AI-generated content is presented to users as reliable.
In conclusion, the investigation into DeepSeek is a significant step towards ensuring that AI systems are used responsibly and ethically. As AI-generated content becomes more widespread, companies must be clear about the risks and limitations of their technology so that users are protected from the consequences of hallucinations.
News Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/