
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move aimed at promoting transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust body has opened an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content.
Hallucinations, in the context of AI, refer to cases in which an AI model generates inaccurate, misleading, or fabricated information in response to user inputs. This can have serious consequences, particularly in fields such as healthcare, finance, and education, where accurate information is crucial for decision-making.
The investigation was launched after Italian authorities received complaints that DeepSeek failed to adequately warn users about the potential for hallucinations in its AI-generated content. DeepSeek’s models can generate human-like text, images, and videos, and are used across a range of applications, including virtual assistants, social media platforms, and online content-creation tools.
Under Italy’s transparency law, DeepSeek is required to provide users with clear and concise information about the potential risks and limitations of its AI-powered content. The law aims to protect consumers from potential harm caused by the use of AI technology, and to promote transparency and accountability in the development and deployment of AI systems.
If found guilty of violating the transparency law, DeepSeek may be fined up to €20 million or 4% of its global annual turnover. The investigation is ongoing, and it is expected to take several months to complete.
The probe is significant in that it reflects growing concern about the risks posed by AI technology and adds regulatory pressure for greater transparency and accountability in how AI systems are developed and deployed.
DeepSeek’s content generation has been widely adopted, and its models are trained on vast amounts of data to produce human-like text, images, and videos. While the technology has the potential to transform many industries, its propensity to hallucinate underscores why clear warnings to users matter.
The case also sets a precedent for the regulation of AI technology in Italy, and it may have a ripple effect, prompting regulators in other countries to scrutinize AI services that fail to adequately warn users about hallucination risks.
The investigation further highlights the need for closer collaboration among governments, regulatory bodies, and industry players to develop and deploy AI responsibly. AI systems should be designed with transparency and accountability in mind, and users should be given clear, concise information about the risks and limitations of AI-generated content.
In conclusion, Italy’s antitrust investigation into DeepSeek marks a notable step in the regulation of AI and reflects mounting concern over hallucination risks. Whether through this case or future ones, clear disclosure of AI systems’ limitations is likely to become a baseline expectation for companies deploying the technology.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/