
Title: Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust body has opened an investigation into DeepSeek, a Chinese AI firm, for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations, in this context, are instances where an AI model produces inaccurate, misleading, or fabricated information in response to user inputs. The move carries significant implications for the AI industry, underscoring the importance of transparency and accountability in how AI-powered technologies are built and deployed.
DeepSeek, one of China’s leading AI companies, has drawn scrutiny for some time. Its models are used in a range of applications, including content generation, chatbots, and language translation, but concerns have been raised about the accuracy and reliability of the information they produce. Italy’s antitrust authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), has taken note of these concerns and opened an investigation into the company’s practices.
According to the AGCM, DeepSeek failed to give users adequate warnings about the risk of hallucinations in its AI-produced content. This lack of disclosure raises the prospect that users may be misled or misinformed by inaccurate or fabricated output from DeepSeek’s models.
The investigation focuses on DeepSeek’s compliance with Italian transparency rules, which require companies to give users clear and accurate information about the risks and limitations of their products and services. The AGCM has indicated that DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover if found in violation.
Hallucination warnings are not the AGCM’s only concern. The regulator has also questioned the transparency of DeepSeek’s data collection and processing practices, stating that they may violate Italy’s data protection laws, which require companies to obtain explicit consent from users before collecting and processing their personal data.
This is not the first time the AGCM has pursued a company over transparency obligations. In 2020, it fined Google €20 million for failing to provide users with clear and accurate information about the risks and limitations of its Google Analytics service.
Beyond the potential fine, the case matters because it signals regulators’ expectations for the industry: companies must prioritize transparency and user protection when developing and deploying AI-powered products and services.
The stakes for DeepSeek are substantial. If the AGCM finds the company in breach of transparency requirements, it faces significant financial penalties and reputational damage. The investigation may also prompt broader changes in how AI products and services are developed and deployed, with greater emphasis on transparency and user protection.
In conclusion, the AGCM’s investigation into DeepSeek is a significant development for the AI industry. It puts companies on notice that failing to warn users about known limitations such as hallucinations can carry regulatory consequences, and the outcome may shape how AI providers disclose risks well beyond the Italian market.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/