
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy's antitrust authority, the AGCM, has launched an investigation into DeepSeek, a Chinese artificial intelligence (AI) firm, for allegedly failing to warn users about the risk of "hallucinations" in its AI-produced content. Hallucinations are instances in which an AI model generates inaccurate, misleading, or fabricated information in response to user inputs.
According to a report by Reuters, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover if it is found to have violated Italian transparency rules. The investigation was opened after a consumer protection group filed a complaint alleging that DeepSeek's AI-powered chatbot did not adequately warn users about the risk of hallucinations.
DeepSeek's chatbot is built on large language models that use natural language processing and machine learning to generate human-like responses to user queries. While the technology could transform industries such as healthcare, finance, and education, it also raises concerns about the accuracy and reliability of the information these models produce.
Hallucinations can arise for several reasons, including biased training data, flawed algorithms, or intentional manipulation by malicious actors. Their consequences can be serious: spreading misinformation, damaging reputations, and causing financial losses.
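Notably, the regulator's complaint centers on user-facing warnings rather than the model itself. As a rough illustration only, here is a minimal sketch in Python of how a chat service might attach a hallucination disclaimer to every model response; all names here are hypothetical and do not reflect DeepSeek's actual code or API:

```python
# Hypothetical sketch of attaching a hallucination disclaimer to chatbot
# output. Function and constant names are illustrative assumptions; they
# do not represent DeepSeek's actual implementation.

HALLUCINATION_NOTICE = (
    "Note: AI-generated content may contain inaccurate, misleading, "
    "or fabricated information. Verify important facts independently."
)

def generate_reply(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    # A real service would call the model here; this returns a stub.
    return f"[model answer to: {prompt!r}]"

def answer_with_warning(prompt: str) -> str:
    """Return the model's reply with the disclaimer appended."""
    reply = generate_reply(prompt)
    return f"{reply}\n\n{HALLUCINATION_NOTICE}"

if __name__ == "__main__":
    print(answer_with_warning("Who won the 2006 World Cup?"))
```

The point of the sketch is simply that the disclaimer travels with every reply, rather than being buried in terms of service.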
DeepSeek's chatbot is widely used for tasks such as customer service, language translation, and content creation. The company's alleged failure to warn users about the risk of hallucinations has raised concerns about the transparency and accountability of its AI technology.
The AGCM's investigation is seen as a significant step towards ensuring that AI companies like DeepSeek are transparent about the risks and limitations of their technology. It may also set a precedent for regulators in other countries as AI becomes increasingly widespread and influential.
This is not the first time DeepSeek has faced scrutiny in Italy. Earlier in 2025, the country's data protection authority, the Garante, ordered the company to stop processing Italian users' data over privacy concerns, effectively blocking its chatbot there. DeepSeek has also drawn criticism over its training data and algorithms, and for a lack of transparency and accountability in how its AI technology is developed and deployed.
In conclusion, the AGCM's investigation into DeepSeek over alleged hallucination risks underscores the importance of transparency and accountability in AI: companies must be clear about the risks and limitations of their technology and take steps to mitigate the harm hallucinations can cause.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/