
Italy Opens Probe into AI Firm DeepSeek Over Hallucination Risks
In a move to ensure transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust body has launched an investigation into Chinese AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations occur when an AI model produces inaccurate, misleading, or fabricated information in response to user inputs, a failure that can have serious consequences in fields such as healthcare, finance, and education.
According to a report by Reuters, the Italian antitrust authority, Autorità Garante della Concorrenza e del Mercato (AGCM), has accused DeepSeek of violating Italy’s transparency law by not adequately informing users about the potential risks associated with its AI-generated content. The investigation was launched after a complaint was filed by a consumer association, which alleged that DeepSeek’s AI model was capable of generating false information, including fabricated news articles and misleading product reviews.
DeepSeek’s AI technology uses natural language processing (NLP) and machine-learning algorithms to generate content such as articles, reviews, and social media posts at scale and speed. While AI-generated content has the potential to transform industries and the way we consume information, the risk of hallucinations poses a significant threat to the integrity and accuracy of that information.
The Italian antitrust authority has given DeepSeek until July 22 to respond to the allegations and provide evidence that it has taken adequate measures to ensure transparency and accuracy in its AI-generated content. If found guilty, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover, whichever is higher.
DeepSeek’s alleged failure to warn users about the risk of hallucinations is not an isolated incident. In recent years, there have been several high-profile cases of AI-generated content being used to spread misinformation and disinformation. For example, in 2020, a Washington Post article revealed that AI-generated content was being used to spread fake news and propaganda on social media platforms.
The risks associated with AI-generated content are not limited to misinformation and disinformation. Hallucinations can also have serious consequences in fields such as healthcare and finance, where accurate and reliable information is critical. For instance, AI-generated content could be used to provide false medical diagnoses or treatment recommendations, or to generate fake financial reports and stock tips.
In response to the allegations, a DeepSeek spokesperson said the company takes transparency and accuracy very seriously and is committed to ensuring that its AI-generated content is reliable and trustworthy. The company did not, however, offer specifics on how it plans to address the allegations or what measures it has taken to prevent hallucinations.
Italy’s investigation into DeepSeek is a significant development in the ongoing debate over transparency and accountability in the development and deployment of AI systems. As AI becomes integrated into more aspects of daily life, ensuring that its outputs are accurate and its limitations clearly disclosed becomes essential.
The AGCM’s probe is also a wake-up call for AI firms and policymakers alike, underscoring the need for clearer regulations and guidelines on the use of AI.
As AI adoption continues to grow, building systems that are transparent, accurate, and reliable will require a collaborative effort among AI firms, policymakers, and consumers to ensure the technology is used in a responsible and ethical manner.
Sources:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/