
Italy Opens Probe into AI Firm DeepSeek Over Hallucination Risks
In a move aimed at strengthening transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust authority has launched an investigation into Chinese AI firm DeepSeek over allegations that it failed to warn users about the risk of “hallucinations” in its AI-generated content. According to reports, the regulator is concerned that DeepSeek’s models may produce inaccurate, misleading, or fabricated information in response to user prompts, with potentially serious consequences for anyone relying on that output.
The investigation was triggered by a complaint filed by a consumer group, which alleged that DeepSeek’s AI-powered chatbots and other tools were producing hallucinations, or false information, without users being aware of the risk. The group claimed that this lack of transparency violated Italian law, which requires companies to clearly inform users of the potential risks and limitations of AI-generated content.
The regulator, the Autorità Garante della Concorrenza e del Mercato (AGCM), Italy’s competition and consumer protection authority, has given DeepSeek 30 days to respond to the allegations and to provide information about its practices and procedures for generating AI content. If the allegations are upheld, DeepSeek could face financial penalties under Italian consumer protection rules.
DeepSeek’s AI-powered tools are designed to help users search for information, answer questions, and complete tasks. The underlying models are not infallible, however: they can generate inaccurate or misleading output, particularly when a user’s prompt is ambiguous or incomplete.
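The transparency obligation at the heart of the probe is ultimately a product-design question: the warning has to reach the user alongside the answer. Purely as a hypothetical sketch, and not DeepSeek’s actual code or interface, the fragment below shows one way a chat front end could attach a standing hallucination notice to every model reply; the names ChatResponse, wrap_model_output, and HALLUCINATION_NOTICE are illustrative assumptions introduced for this example.

    # Hypothetical illustration only; not DeepSeek's actual code or API.
    from dataclasses import dataclass

    # Standing notice shown with every answer (wording is an assumption).
    HALLUCINATION_NOTICE = (
        "AI-generated answers may contain inaccurate or fabricated "
        "information. Verify important facts against reliable sources."
    )

    @dataclass
    class ChatResponse:
        answer: str      # raw text produced by the model
        disclaimer: str  # warning displayed alongside the answer

    def wrap_model_output(model_output: str) -> ChatResponse:
        """Attach the standing hallucination notice to a raw model reply."""
        return ChatResponse(answer=model_output.strip(),
                            disclaimer=HALLUCINATION_NOTICE)

    if __name__ == "__main__":
        reply = wrap_model_output("The Eiffel Tower is about 330 metres tall. ")
        print(reply.answer)
        print(reply.disclaimer)

Whether a notice of this kind would satisfy Italian consumer protection law would, of course, depend on how prominently it is displayed and how clearly the product’s limitations are documented elsewhere.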
In recent years there have been several high-profile cases in which AI-generated content was used to spread misinformation or propaganda. During the COVID-19 pandemic, for example, automated accounts were reported to be spreading false health information on social media, and AI-generated text and deepfake videos have since been used in attempts to influence public opinion around elections.
The Italian authorities are concerned that DeepSeek’s AI-powered tools could spread misinformation through unflagged hallucinations, with serious consequences for individuals and for society as a whole. By opening the investigation, the regulator is signalling that it will not tolerate breaches of transparency rules that put users at risk.
The investigation is also seen as a significant development in the ongoing debate about the regulation of AI and its applications. As AI technology continues to evolve and become more integrated into our daily lives, there is a growing need for clear guidelines and regulations to ensure that its use is safe, transparent, and accountable.
Several jurisdictions have already introduced rules targeting AI-generated content that could be used to spread misinformation. The European Union’s AI Act, for example, imposes transparency obligations on providers of AI systems, including requirements to tell people when they are interacting with a chatbot and to label AI-generated content.
The investigation into DeepSeek is also seen as a wake-up call for other AI companies and developers, who need to be aware of the risks and limitations of AI-generated content and take steps to mitigate them. By prioritizing transparency and accountability, AI companies can help to build trust with users and ensure that their technology is used responsibly and ethically.
The Italian authority’s probe into DeepSeek over hallucination risks is a significant moment in the broader debate over how AI should be regulated. Ensuring that AI systems are transparent and accountable, and that users understand their limitations, will be essential if the technology’s benefits are to be realized while its risks are kept in check.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/