
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
Italy’s antitrust and consumer-protection authority has launched an investigation into DeepSeek, a Chinese AI firm, for allegedly failing to warn users that its AI-generated content may contain “hallucinations” — a step aimed at enforcing transparency and accountability in the use of artificial intelligence (AI).
Hallucinations, in the context of AI, refer to situations where the AI model generates inaccurate, misleading, or fabricated information in response to user inputs. This can have serious consequences, including perpetuating misinformation, damaging reputations, and causing financial losses.
According to a report by Reuters, the Italian antitrust authority, AGCM (Autorità Garante della Concorrenza e del Mercato), has opened an investigation into DeepSeek over alleged breaches of consumer-protection rules, which require companies to clearly inform users about the limitations and potential risks of AI-generated content.
DeepSeek, which is headquartered in Hangzhou, has been accused of not adequately warning users that its chatbot’s responses may be inaccurate or fabricated. The company’s AI models generate text in response to user prompts, but the regulator contends that the firm did not take adequate steps to inform users about the reliability limits of that output.
If the allegations are upheld, DeepSeek could face a substantial fine under Italy’s consumer-protection rules. Such a penalty would serve as a warning to other AI companies that they must prioritize transparency and user protection.
The investigation into DeepSeek is a significant development in the ongoing debate about the use of AI technology. As AI becomes increasingly ubiquitous, there is a growing need for regulatory bodies to ensure that companies are held accountable for the content they produce.
In recent years, AI-generated content has repeatedly been implicated in the spread of misinformation, from automated accounts amplifying false claims about the COVID-19 pandemic to deepfake videos targeting prominent political figures.
These incidents highlight the need for AI companies to take greater responsibility for the content they produce. By failing to warn users about the risks of hallucinations, DeepSeek may have inadvertently contributed to the spread of misinformation.
The investigation into DeepSeek is also significant because it highlights the need for greater transparency in the AI industry. While AI has the potential to revolutionize many industries, it also poses significant risks, including the potential for bias, discrimination, and misinformation.
In recent years, there have been several high-profile cases of AI bias, including studies showing that commercial facial recognition systems misidentify darker-skinned faces at markedly higher rates than lighter-skinned ones. There have also been reports of AI-powered screening and chatbot tools producing discriminatory outcomes for certain groups of people.
By prioritizing transparency and accountability, regulatory bodies can help to mitigate these risks and ensure that AI technology is used in a way that benefits society as a whole.
In conclusion, the DeepSeek probe is a notable test of how far regulators will go in holding AI companies accountable for the content their systems produce. If the allegation that DeepSeek failed to adequately warn users about hallucination risks is upheld, the resulting penalty would send a clear signal to the wider industry that transparency and user protection are not optional.
Source:
https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/