
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust body has launched an investigation into DeepSeek, a Chinese artificial intelligence (AI) firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations are instances in which an AI model produces inaccurate, misleading, or fabricated information in response to user inputs. The move has renewed concerns about the societal consequences of AI-generated content and the importance of transparency in how AI technologies are developed and deployed.
According to reports, the Italian antitrust body, the Autorità Garante della Concorrenza e del Mercato (AGCM), has ordered DeepSeek to cease its activities in the country pending the outcome of the investigation. If found to have violated Italy’s transparency rules, the company could face a fine of up to €20 million or 4% of its global annual turnover.
DeepSeek, which is headquartered in Hangzhou, China, develops AI models designed to generate human-like content, chiefly text, in response to user prompts, and its services are used across industries including media, education, and entertainment. The AGCM has accused the company of failing to adequately warn users about the risk of hallucinations in that content.
Such hallucinations can have serious consequences, including the spread of misinformation, reputational damage, and financial losses. In recent years there have been several high-profile cases of AI-generated content being used to spread misinformation, including fake news articles, misleading social media posts, and manipulated videos.
The AGCM investigation into DeepSeek is part of a broader effort by the Italian government to regulate the use of AI technologies in the country. In 2020, the Italian parliament passed a law requiring companies that use AI technologies to provide clear information to users about the risks and limitations of the technology. The law also requires companies to ensure that their AI systems are transparent, explainable, and accountable.
DeepSeek’s alleged failure to warn users about hallucination risk has raised questions about its compliance with this framework, under which companies must give users clear and concise information about the risks and limitations of AI technologies.
In response, DeepSeek has stated that it is committed to transparency and compliance with Italian law, and that it is cooperating fully with the AGCM and working to address the regulator’s concerns.
The investigation has also fed a wider debate about transparency and accountability in AI. Many experts argue that AI technologies can transform industries and improve people’s lives, but only if they are developed and deployed responsibly and transparently.
In short, the AGCM investigation underscores that companies deploying AI must comply with applicable laws and regulations and give users clear information about the technology’s risks and limitations.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/