
Italy Opens Probe into AI Firm DeepSeek Over Hallucination Risks
Italy’s antitrust authority has opened an investigation into DeepSeek, a Chinese artificial intelligence (AI) firm, for allegedly failing to warn users about the risk of “hallucinations” in its AI-generated content.
Hallucinations, in this context, are instances where an AI model generates inaccurate, misleading, or outright fabricated information in response to user prompts; a model might, for example, confidently cite a court ruling or news article that does not exist. This is a significant concern, because such output can spread misinformation, sow confusion, and potentially harm individuals and society.
According to a report by Reuters, the Italian antitrust authority, the Autorità Garante della Concorrenza e del Mercato (AGCM), launched the investigation into DeepSeek over an alleged breach of Italy’s transparency rules, which require companies to give consumers clear and accurate information about the use of AI in their products and services.
DeepSeek is accused of failing to adequately disclose the risk of hallucinations in its AI-generated content. Its models are built to provide information and answer questions, but they can produce inaccurate or misleading responses for a variety of reasons, including biases in the training data, errors in the algorithms, or deliberate manipulation.
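To make the disclosure question concrete, here is one way a chat service could surface the kind of risk warning at issue. This is a minimal, hypothetical sketch, not DeepSeek’s actual interface; the notice wording and function names are invented for illustration.

```python
# Minimal, hypothetical sketch: prepend a hallucination risk notice to every
# model reply before it reaches the user. Nothing here reflects DeepSeek's
# real API; the notice wording and function names are illustrative only.

HALLUCINATION_NOTICE = (
    "Warning: AI-generated answers may contain inaccurate, misleading, "
    "or fabricated information. Verify important facts independently."
)

def answer_with_disclosure(model_reply: str) -> str:
    """Attach a clearly visible risk disclosure to a model reply."""
    return f"{HALLUCINATION_NOTICE}\n\n{model_reply}"

if __name__ == "__main__":
    # Example with a made-up model reply.
    print(answer_with_disclosure("The Treaty of Rome was signed in 1957."))
```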
The probe is not the first of its kind in Italy. The country has moved repeatedly to regulate AI in recent years: earlier in 2025, its data protection authority ordered DeepSeek to block access to its chatbot in Italy over privacy concerns, and in 2023 the same authority temporarily banned ChatGPT.
In 2020, Italy introduced a law requiring companies to clearly inform consumers about the use of AI in their products and services, backed by fines for non-compliance.
If found in breach, DeepSeek could face a fine of up to €20 million or 4% of its global annual turnover, whichever is higher.
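The “whichever is higher” rule means the effective cap scales with company size. A quick sketch of the arithmetic, with a turnover figure made up for illustration:

```python
# Sketch of the fine cap described above: the maximum penalty is the greater
# of a flat EUR 20 million and 4% of global annual turnover. The turnover
# figure in the example is invented for illustration.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine under the 'whichever is higher' rule."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a hypothetical turnover of EUR 1 billion, 4% is EUR 40 million,
# which exceeds the flat cap, so the higher figure applies.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 40,000,000
```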
Beyond the potential fine, the case underscores a broader shift: as AI spreads through more industries, regulators increasingly expect companies to be open about how they use the technology and to deploy it responsibly and ethically.
The concern is not hypothetical. AI-generated content has already featured in high-profile misinformation incidents; in 2020, for example, researchers documented AI-generated fake news articles being used to spread disinformation on social media platforms.
The investigation is a welcome development: it shows that regulators are taking the risks of AI-generated content seriously and are willing to act when companies fall short.
In conclusion, the DeepSeek probe sends a strong message that failing to be transparent about AI can carry serious consequences. If it prompts companies to disclose the limitations of their systems more candidly, it will be a step in the right direction.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/