
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust body has opened an investigation into Chinese AI firm DeepSeek over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. The probe comes amid growing concern about the consequences of such content, particularly when a model produces inaccurate, misleading, or fabricated information in response to user prompts.
In this context, a hallucination is output that a model presents as factual but that is not grounded in real data; a model might, for example, confidently cite a study or court ruling that does not exist. The consequences can range from confusion and the spread of misinformation to economic losses. In DeepSeek’s case, the Italian authorities suspect the company violated transparency rules by failing to inform users of this risk.
The investigation was announced on June 16, 2025, by the Italian Competition Authority (AGCM), which enforces competition and consumer protection law in the country. According to the AGCM, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover if the alleged transparency violations are confirmed.
DeepSeek, which is headquartered in Hangzhou, develops large language models and a widely used chatbot, serving businesses and individual users. Its models generate human-like text for a variety of purposes, including marketing, education, and entertainment.
The AGCM’s investigation centers on DeepSeek’s alleged failure to disclose these risks adequately. Specifically, the authority is concerned that the company did not inform users that its AI-generated content may include hallucinations.
The investigation is seen as a significant development in the ongoing debate about the responsible use of AI. While AI has the potential to revolutionize industries and improve lives, it also raises important questions about accountability, transparency, and ethics.
In recent years there have been several high-profile cases of AI-generated content being used to spread misinformation or manipulate public opinion. In 2020, for example, a study found that AI-powered chatbots were being used to spread fake news and propaganda on social media platforms, and in 2022 an AI-generated video of a politician was used to spread false claims about his views.
The AGCM’s case rests on Italy’s transparency rules, which require companies to give consumers clear and accurate information. By allegedly failing to disclose the risks of its AI-generated content, DeepSeek may have fallen short of that obligation.
The investigation also puts other AI firms on notice: they are expected to take responsibility for the content their systems generate, including clearly disclosing the potential for hallucinations and other inaccurate or misleading output.
In a statement, the AGCM said it was investigating DeepSeek for an “alleged failure to provide adequate information to consumers about the risks associated with its AI-generated content,” reiterating that companies must give consumers clear and accurate information about what they offer.
The consequences for DeepSeek could be significant. If found in violation, the company faces a fine of up to €20 million or 4% of its global annual turnover, and it may be required to take corrective measures, such as clearly warning users about the limitations of its AI-generated content.
The case also carries broader implications for the AI industry. As AI becomes more deeply integrated into daily life, regulators increasingly expect providers to take responsibility for the content their systems generate and to disclose its risks plainly.
In conclusion, the AGCM’s investigation into DeepSeek is a significant development in the debate over the responsible use of AI. It underscores that transparency and accountability are central to deploying these systems, and it signals to other AI firms that failing to warn users about hallucination risks can carry real regulatory consequences.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/