
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move that underscores growing concern over the reliability of artificial intelligence (AI), Italy's antitrust authority has opened an investigation into DeepSeek, a Chinese AI firm, for allegedly failing to warn users about the risk of "hallucinations" in its AI-generated content. Hallucinations are outputs in which an AI model produces inaccurate, misleading, or outright fabricated information in response to user prompts; asked for a legal citation, for instance, a chatbot may invent a case name and docket number that do not exist.
The investigation was reportedly triggered by a complaint from an Italian consumer group, which alleged that DeepSeek's AI-generated content was often inaccurate or misleading and that the company had failed to warn users adequately. The regulator, the Autorità Garante della Concorrenza e del Mercato (AGCM), confirmed the probe in a statement and warned that DeepSeek could face significant fines if it is found to have breached consumer transparency rules.
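For illustration, a disclosure of the kind regulators appear to want could be as simple as attaching a visible notice to every model response. The sketch below is hypothetical: the wording, placement, and function names are assumptions, not DeepSeek's actual interface or any standard the AGCM has endorsed.

```python
# Hypothetical sketch: attach a visible hallucination notice to model output.
# The notice text and placement are illustrative, not a legal or product standard.
HALLUCINATION_NOTICE = (
    "Warning: this answer was generated by an AI model and may contain "
    "inaccurate, misleading, or fabricated information. Verify important facts."
)

def wrap_response(model_output: str) -> str:
    """Return the model's output with the warning appended where users will see it."""
    return f"{model_output}\n\n---\n{HALLUCINATION_NOTICE}"

print(wrap_response("The treaty was signed in 1887."))
```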
According to the AGCM, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover, whichever is higher, if it is found to have breached Italy's consumer transparency rules. Those rules require companies to give consumers clear and accurate information about their products and services, including any material risks or limitations.
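To see how the "whichever is higher" cap works in practice, a minimal sketch (the turnover figure is hypothetical):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine under the rule described above: the higher of a
    €20 million flat amount or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Hypothetical turnover of €1.5 billion: 4% is €60 million, which exceeds
# the €20 million floor, so the percentage-based figure would apply.
print(f"€{max_fine_eur(1_500_000_000):,.0f}")  # €60,000,000
```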
DeepSeek is accused of generating content that is not merely inaccurate but actively misleading. Outputs from its chatbot have reportedly included fabricated facts, quotations, and dates, raising concerns about the effect such content could have on public opinion and decision-making.
The DeepSeek probe is part of a broader trend of regulatory scrutiny of AI companies, fueled by mounting concern over the technology's capacity to generate biased or misleading information.
DeepSeek is not the only AI company facing such scrutiny. In the United States, AI developers have been sued over allegedly inaccurate or defamatory AI-generated content, and in Europe, national data protection authorities have opened investigations into companies' handling of personal data; Italy's privacy regulator, the Garante, ordered DeepSeek's chatbot blocked in the country earlier this year over data protection concerns.
The investigation also underscores the need for greater transparency and accountability in the AI industry. As AI systems become woven into daily life, companies must be answerable for the information their models produce, which means giving users clear, accurate disclosures and building models whose behavior can be explained and audited.
DeepSeek's systems are large language models (LLMs): transformer-based neural networks that generate text autoregressively, predicting one token at a time from patterns learned across vast training corpora. Because these models are optimized to produce fluent, statistically plausible continuations rather than verified facts, they can assert falsehoods with complete confidence. The DeepSeek investigation puts a spotlight on this structural limitation and on the need for companies to disclose it plainly.
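To make that mechanism concrete, here is a toy sketch of autoregressive decoding. It is not DeepSeek's code: the vocabulary, probabilities, and scale are invented, but the sampling loop mirrors how an LLM picks each next token by learned plausibility rather than verified fact.

```python
import random

# Toy next-token model. The probabilities encode fluency learned from text,
# not factual accuracy, which is the root cause of hallucinations.
# Vocabulary and numbers are invented for illustration.
NEXT_TOKEN = {
    ("The", "treaty"): {"was": 1.0},
    ("treaty", "was"): {"signed": 0.7, "ratified": 0.3},
    ("was", "signed"): {"in": 1.0},
    ("was", "ratified"): {"in": 1.0},
    ("signed", "in"): {"1887": 0.5, "1902": 0.5},    # both continuations look
    ("ratified", "in"): {"1887": 0.5, "1902": 0.5},  # plausible; neither is checked
}

def generate(prompt: list[str], max_steps: int = 4) -> str:
    tokens = list(prompt)
    for _ in range(max_steps):
        dist = NEXT_TOKEN.get(tuple(tokens[-2:]))
        if dist is None:
            break
        # Sample the next token by probability, as an LLM does when decoding.
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens)

print(generate(["The", "treaty"]))
# e.g. "The treaty was signed in 1887": fluent, confident, and unverified.
```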
The AGCM has given DeepSeek 30 days to respond. The company has not yet commented publicly, though it is expected to cooperate with the authorities. The outcome will be watched closely by regulators and consumers worldwide, as it could set an important precedent for how AI companies are regulated.
In conclusion, the DeepSeek case crystallizes the tension between the rapid rollout of generative AI and the reliability of what it produces. How the AGCM resolves it will signal how forcefully regulators intend to hold AI firms to account for the limitations of their models.
Sources:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/