
Title: Italy Opens Probe into AI Firm DeepSeek Over Hallucination Risks
Italy’s antitrust body has opened an investigation into DeepSeek, a Chinese AI firm, in a significant step toward ensuring the transparency and reliability of content produced by artificial intelligence (AI). The move follows allegations that DeepSeek failed to warn its users about the risk of “hallucinations” in its AI-generated content.
Hallucinations, in the context of AI, are cases in which a model generates inaccurate, misleading, or fabricated information in response to user inputs. The consequences can be serious, particularly in fields such as healthcare, finance, and education, where accurate information is crucial.
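The dispute centers on whether such risks were disclosed to users at all. As a purely illustrative sketch (every name below is hypothetical and not drawn from DeepSeek’s actual code), here is one way a provider could attach an explicit hallucination notice to each model response:

```python
# Minimal sketch, all names hypothetical: pairing every model response
# with a user-facing disclosure about hallucination risk.
from dataclasses import dataclass

HALLUCINATION_NOTICE = (
    "AI-generated content may contain inaccurate or fabricated information. "
    "Verify important facts against authoritative sources."
)

@dataclass
class GeneratedAnswer:
    text: str    # raw model output
    notice: str  # user-facing risk disclosure shown alongside it

def wrap_model_output(raw_text: str) -> GeneratedAnswer:
    """Attach the hallucination notice to a raw model response."""
    return GeneratedAnswer(text=raw_text, notice=HALLUCINATION_NOTICE)

answer = wrap_model_output("Rome has a population of about 2.8 million.")
print(answer.text)
print(answer.notice)
```

Regulators’ concern is not the wording of any particular notice but whether users are told, clearly and consistently, that generated content can be wrong.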
According to the Italian antitrust body, the Autorità Garante della Concorrenza e del Mercato (AGCM), DeepSeek may have breached the country’s consumer protection rules by failing to adequately inform users about the risk of hallucinations in its AI-generated content. The AGCM has the power to impose substantial fines on companies found to have engaged in misleading commercial practices.
The investigation was prompted by a complaint from an Italian consumer protection group, which alleged that DeepSeek’s AI model was generating inaccurate information in response to user queries and that the company was failing to give users adequate information about the risk of hallucinations, in violation of the country’s consumer protection laws.
DeepSeek’s AI models are large language models: systems that use natural language processing (NLP) and machine learning to generate human-like text in response to user prompts. While this technology could transform the way we interact with information, it also raises concerns about the accuracy and reliability of what it generates.
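For context on how such models are typically consumed, DeepSeek publicly documents an OpenAI-compatible chat API. The sketch below assumes that documented interface; the endpoint and model name reflect DeepSeek’s published documentation at the time of writing and should be treated as assumptions that may change:

```python
# Minimal sketch: calling a DeepSeek chat model through the OpenAI-compatible
# Python client, per DeepSeek's public API documentation. Endpoint and model
# name are assumptions; check current docs before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder credential
    base_url="https://api.deepseek.com",  # DeepSeek's documented endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Italy's antitrust authority called?"},
    ],
)

# The returned text is generated, not retrieved, so it can be fluent and wrong;
# this is exactly the hallucination risk regulators want disclosed to users.
print(response.choices[0].message.content)
```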
In a statement, the AGCM said the investigation was necessary to ensure that AI firms like DeepSeek are transparent about the potential risks and limitations of their technology, stressing that protecting consumers’ rights is a fundamental part of its mission and that it will not tolerate companies that fail to give users accurate and reliable information.
The investigation into DeepSeek is not the first of its kind. In recent years there have been several high-profile cases of AI-generated content being used to spread misinformation and disinformation, and researchers have repeatedly warned that machine-generated text can be produced at a scale and fluency that makes it difficult to distinguish from human writing.
In response to the investigation, DeepSeek issued a statement saying that it is fully cooperating with the AGCM and is committed to ensuring the transparency and accuracy of its AI-generated content. “We take the concerns of our users very seriously and are working hard to address any issues that may have arisen,” a company spokesperson said.
The investigation is a timely reminder of the need for greater transparency and accountability in the development and deployment of AI. As these systems become more deeply integrated into daily life, there must be mechanisms to ensure that the information they generate is accurate, reliable, and trustworthy.
The AGCM’s probe into DeepSeek is a significant development in the ongoing debate about the risks and benefits of AI. As we come to rely more heavily on AI-generated information, transparency, accountability, and user protection must remain priorities.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/