
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move to strengthen transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust body, the Autorità Garante della Concorrenza e del Mercato (AGCM), has launched an investigation into Chinese AI firm DeepSeek over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations are instances in which an AI model produces inaccurate, misleading, or fabricated information in response to user prompts; a chatbot confidently citing a study or court ruling that does not exist is a typical example.
DeepSeek, which operates an AI-powered content generation platform, is accused of violating Italy’s transparency law by failing to adequately warn users about the risks of its AI-generated content. Its models generate text from user prompts, but the investigation suggests the output may not always be accurate or reliable.
The investigation was prompted by a complaint from a user who alleged that DeepSeek’s AI-generated content contained inaccurate information and that the company had not provided sufficient warning about the risks of its tool.
DeepSeek’s AI models use natural language processing (NLP) and machine learning to generate content from user prompts. While such models can streamline content creation, they are not immune to error, and the AGCM probe highlights the need for AI companies like DeepSeek to take concrete steps to ensure the accuracy and reliability of what they generate; one such step, attaching an explicit warning to every response, is sketched below.
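As a purely illustrative sketch, the snippet below shows one way a client application could attach a user-facing hallucination warning to every model response. It assumes an OpenAI-compatible chat-completions endpoint; the base URL, model name, placeholder API key, and disclaimer wording are assumptions made for this example, not DeepSeek’s documented behavior or the AGCM’s required wording.

```python
# Illustrative sketch: append a hallucination warning to generated text.
# The endpoint, model name, and disclaimer wording below are assumptions
# for this example, not DeepSeek's documented behavior.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",               # placeholder credential
)

DISCLAIMER = (
    "Warning: AI-generated content can contain inaccurate, misleading, or "
    "fabricated information ('hallucinations'). Verify before relying on it."
)

def generate_with_warning(prompt: str) -> str:
    """Return the model's answer with a hallucination disclaimer appended."""
    response = client.chat.completions.create(
        model="deepseek-chat",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    return f"{answer}\n\n{DISCLAIMER}"
```

A real deployment would more likely surface the notice in the user interface than in the text stream itself, but the principle is the same: generated content should never reach the user without an accompanying warning.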
Under Italian law, AI companies must give users clear and transparent information about the risks of their AI-powered content generation tools, including the potential sources of error, the limitations of the underlying models, and the risks of relying on the service’s output. One possible way to structure such a notice is sketched below.
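To make those three categories concrete, here is a minimal sketch of how a service might structure such a notice for display before first use. The field names and wording are invented for illustration; they are neither a legal template nor drawn from the AGCM’s proceedings.

```python
# Illustrative sketch: structuring the disclosures described above.
# Field names and wording are assumptions, not a legal template.
from dataclasses import dataclass, field

@dataclass
class TransparencyNotice:
    error_sources: list[str] = field(default_factory=lambda: [
        "Gaps, errors, or biases in the training data",
        "Knowledge frozen at the model's training cutoff",
    ])
    limitations: list[str] = field(default_factory=lambda: [
        "May fabricate facts, citations, or quotations",
        "Cannot verify its own output against live sources",
    ])
    usage_risks: list[str] = field(default_factory=lambda: [
        "Acting on unverified output in legal, medical, or financial matters",
    ])

    def render(self) -> str:
        """Format the notice as plain text for display before first use."""
        sections = [
            ("Potential sources of error", self.error_sources),
            ("Model limitations", self.limitations),
            ("Risks of use", self.usage_risks),
        ]
        lines = []
        for title, items in sections:
            lines.append(f"{title}:")
            lines.extend(f"  - {item}" for item in items)
        return "\n".join(lines)

print(TransparencyNotice().render())
```

Keeping the notice as structured data rather than hard-coded prose would make it straightforward to translate, version, and audit.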
If found to have violated the transparency law, DeepSeek could face fines of up to €20 million or 4% of its global annual turnover. The company has been ordered to provide the AGCM with information about its content generation tool, including its algorithms, data sources, and testing procedures.
The investigation is the latest in a series of regulatory actions taken by Italian authorities to ensure the responsible development and use of AI. In recent years, the country has implemented a range of measures to promote transparency and accountability in the use of AI, including the establishment of a national artificial intelligence strategy.
The AGCM investigation sends a clear message to AI companies such as DeepSeek: transparency and accountability must be built into their operations. As AI plays an increasingly important role in everyday life, it is essential that such companies work to ensure the accuracy and reliability of the content their systems generate.
DeepSeek’s content generation tool may well transform how content is produced, but it must be developed and deployed in a way that keeps users informed. That means striving for models that are accurate, reliable, and transparent, and giving users clear, concise information about the risks of relying on the service.
In conclusion, the AGCM probe is a significant and welcome development in the push for transparency and accountability in AI. The alleged failure to warn users about the risk of hallucinations would, if proven, constitute a serious breach of Italian law and could expose DeepSeek to substantial fines. The case is a reminder that AI companies must put users’ needs and concerns first and stand behind the reliability of what their systems produce.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/