
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
In a move aimed at ensuring transparency and accountability in the use of artificial intelligence (AI), Italy’s antitrust authority has launched an investigation into Chinese AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations, in this context, are situations in which an AI model produces inaccurate, misleading, or fabricated information in response to user input.
According to reports, the authority, the AGCM (Autorità Garante della Concorrenza e del Mercato), opened the probe over concerns that DeepSeek may have breached Italy’s consumer-protection rules by not adequately informing users of the risks associated with its AI-generated content. DeepSeek’s large language models generate text in response to user prompts ranging from simple questions to complex tasks, and without a clear warning about hallucinations, users may take fabricated or misleading output at face value.
The investigation is significant not only for DeepSeek but for the broader AI industry, because it signals that regulators expect companies to disclose the known limitations of their models. As AI becomes increasingly ubiquitous in daily life, users need to be told clearly when a system’s output may be unreliable.
DeepSeek, a Hangzhou-based startup backed by the Chinese hedge fund High-Flyer, became a major player in the AI industry after releasing its low-cost large language models in late 2024 and early 2025. The sophistication of the text those models produce makes clear disclosure of their limitations all the more important: fluent, confident output is easy to mistake for accurate output.
The AGCM has confirmed the opening of the probe, which could take several months to conclude. If DeepSeek is found to have engaged in an unfair commercial practice, it could face a fine of up to €10 million under Italy’s consumer code.
This is not the first time DeepSeek has faced scrutiny from Italian regulators. In January 2025, Italy’s data protection authority, the Garante, ordered the company to block access to its chatbot in the country after judging DeepSeek’s answers about its privacy policy and handling of personal data to be insufficient. Independent audits have also questioned the accuracy of the chatbot’s answers on news topics, feeding broader concerns about AI systems spreading misinformation.
AI technology is becoming increasingly prevalent in daily life, from virtual assistants such as Siri and Alexa to AI-generated content on social media platforms. That ubiquity raises the stakes: when a widely used system can produce wrong answers, users need to know before they rely on it.
There have already been high-profile demonstrations of how convincingly AI can fabricate content. In 2018, BuzzFeed and the actor-director Jordan Peele released a video in which former President Barack Obama appeared to deliver remarks he never made. The clip was a deepfake, a type of AI-generated media designed to make viewers believe a person said or did something they did not, and it was produced deliberately as a public warning about the technology.
The use of AI-generated content to spread disinformation is a serious concern, with consequences for individuals, communities, and society as a whole: such content can be crafted to manipulate public opinion, influence elections, and erode trust in legitimate information. That is precisely why regulators are pressing companies to be transparent about what their systems can and cannot reliably do.
The DeepSeek probe is an early test of how consumer-protection law applies to generative AI, and its message to the industry is clear: companies that deploy AI systems must warn users when those systems can produce inaccurate or fabricated output, and must work to make their models reliable, accurate, and transparent.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/