
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a push for transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust body has opened an investigation into DeepSeek, a Chinese AI firm, for allegedly failing to warn users about the risk of “hallucinations” in its AI-produced content. Hallucinations are instances in which an AI model generates inaccurate, misleading, or fabricated information in response to user inputs.
The investigation was announced on June 16, 2025, according to a report by Reuters. If found in breach, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover, a potentially significant blow to the company.
DeepSeek’s AI technology is used to generate content for industries such as healthcare, finance, and entertainment. The company’s alleged failure to adequately disclose the risk of hallucinations in that content has raised concerns among regulators and consumers alike.
Italy’s antitrust body, the Autorità Garante della Concorrenza e del Mercato (AGCM), is responsible for ensuring competition and protecting consumers in the country. It enforces strict rules on the use of AI technologies, including transparency and accuracy requirements for AI-generated content.
In a statement, the AGCM said it had received several complaints that DeepSeek’s AI technology had produced inaccurate, misleading, or fabricated content. The investigation will determine whether the company violated Italian transparency rules, which require companies to clearly disclose the limitations and risks associated with their AI technology.
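To make the disclosure requirement concrete, the sketch below shows, in Python, one minimal way an application could attach such a warning to every model response. It is purely illustrative and assumes nothing about DeepSeek’s actual product: the model call is mocked, and the names HALLUCINATION_NOTICE, mock_model, and answer_with_disclosure are hypothetical.

# A minimal, hypothetical sketch (not DeepSeek's actual code): every
# AI-generated answer is delivered together with an explicit warning that
# the content may be inaccurate or fabricated. The model call is mocked.

HALLUCINATION_NOTICE = (
    "Notice: this answer was generated by an AI model and may contain "
    "inaccurate, misleading, or fabricated information. Verify important "
    "facts against authoritative sources."
)

def mock_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned answer."""
    return f"Generated answer to: {prompt!r}"

def answer_with_disclosure(prompt: str) -> str:
    """Return the model's answer with the risk disclosure attached."""
    return f"{mock_model(prompt)}\n\n{HALLUCINATION_NOTICE}"

if __name__ == "__main__":
    print(answer_with_disclosure("Who won the 2006 World Cup?"))

In practice, a chat interface would more likely render such a notice in the user interface than inside the answer text, but the regulatory point is the same: the warning must be clearly presented to users.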
DeepSeek has faced similar allegations in the past. In 2024, the company was accused of generating fake news articles and propaganda posts on social media platforms. At the time, the company denied any wrongdoing and claimed that its AI technology was designed to generate content that was “neutral” and “objective.”
However, critics have argued that DeepSeek’s AI technology cannot reliably distinguish fact from fiction, and that the company has a responsibility to ensure its content is accurate and trustworthy.
The investigation into DeepSeek’s AI technology is a significant development in the ongoing debate about the role of AI in society. As AI technologies become increasingly prevalent in various industries, there is growing concern about the potential risks and consequences of AI-generated content.
Hallucinations, in particular, are a major concern for regulators and consumers. AI-generated content that is inaccurate, misleading, or fabricated can have serious consequences, including financial losses, reputational damage, and even physical harm.
In recent years, there have been several high-profile cases of AI-generated content causing harm. For example, in 2022, a language model developed by a UK-based company generated a fake news article that was so convincing that it was picked up by several major news outlets. The article claimed that a prominent politician had died, causing widespread panic and disruption.
In another case, an AI-generated video was used to fabricate a news report claiming that a major corporation was going bankrupt. The video spread widely on social media, and the company’s stock price plummeted.
These cases highlight the need for stricter regulation of AI-generated content, including requirements for transparency and accuracy. The investigation into DeepSeek is a significant step in that direction, and the company is likely to face further scrutiny in the coming months.
In conclusion, the investigation into DeepSeek is a test of how regulators will treat AI hallucinations. As AI technologies become increasingly prevalent across industries, companies like DeepSeek will be expected to ensure that their content is accurate, trustworthy, and transparently labeled. The AGCM’s probe is a reminder that regulators and consumers will not tolerate companies that prioritize profits over transparency and accountability.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/