
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust authority has launched an investigation into Chinese AI firm DeepSeek for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content. Hallucinations are situations in which an AI model, in response to a user’s input, generates output containing inaccurate, misleading, or fabricated information.
According to a report by Reuters, the Autorità Garante della Concorrenza e del Mercato (AGCM), Italy’s competition and consumer protection watchdog, has opened proceedings against DeepSeek, citing concerns that the company may have breached consumer protection rules by not giving users sufficiently clear warnings about the limitations of its models. The regulator has asked DeepSeek to provide information about its AI models and the measures it has taken to alert users to the risk of hallucinations.
DeepSeek develops large language models and a chatbot that generate text in response to user prompts. The regulator’s concern is that users are not adequately warned that this output can be wrong: because such models produce statistically plausible text rather than verified facts, they can state inaccurate or fabricated information with apparent confidence, raising the risk of misinformation and disinformation when that content is relied on or republished.
If it finds that DeepSeek has engaged in unfair commercial practices, the AGCM has the power to impose substantial fines under Italy’s consumer protection rules. This is not the first time DeepSeek has faced scrutiny in Italy: in February 2025, the country’s data protection authority, the Garante, ordered the company to block access to its chatbot in Italy after it failed to address concerns about its privacy policy.
The investigation into DeepSeek is part of a broader European effort to regulate the use of AI. The EU’s AI Act, which entered into force in August 2024, imposes transparency obligations on AI providers, including requirements to inform users when they are interacting with an AI system and to disclose when content has been generated by AI.
DeepSeek is not the only AI firm to face regulatory scrutiny in Italy. In December 2024, the Garante fined OpenAI €15 million over ChatGPT’s handling of personal data, citing shortcomings in transparency and age verification, after having temporarily banned the service in the country in 2023.
AI-generated content is becoming increasingly common, with many companies using language models to produce news articles, social media posts, and advertisements. This raises concerns about misinformation and disinformation, and about the potential for models to generate biased or inaccurate content at scale.
The investigation into DeepSeek is a notable step in the effort to regulate the use of AI in Italy. By holding companies accountable for how they present AI systems to users, regulators can push providers toward clearer disclosure of the limits of AI-generated content. This is particularly important given how quickly misinformation and disinformation can spread online.
In conclusion, the AGCM’s investigation into DeepSeek over its alleged failure to warn users about the risk of hallucinations is a significant development in the regulation of AI in Italy. It signals that inadequate disclosure of an AI system’s limitations can be treated as a consumer protection issue, not merely a technical shortcoming.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/