
Italy Opens Probe into AI Firm DeepSeek over Hallucination Risks
The increasing reliance on artificial intelligence (AI) to generate content has raised concerns about the accuracy and reliability of the information produced. Italy’s antitrust body has now launched an investigation into DeepSeek, a Chinese AI firm, over allegations that it failed to warn users about the risk of “hallucinations” in its AI-produced content.
According to Reuters, the Italian Competition Authority (AGCM) opened the probe over DeepSeek’s alleged failure to give users sufficiently clear warnings that its models can generate inaccurate, misleading, or fabricated information in response to their inputs, a phenomenon commonly referred to in the AI community as “hallucination.”
DeepSeek is a Chinese AI firm whose chatbot is built on large language models trained to produce fluent, human-like responses to user queries. Precisely because the output reads so naturally, users can find it difficult to distinguish accurate answers from fabricated ones.
The AGCM, which also enforces Italy’s consumer-protection rules, is examining whether DeepSeek breached its obligation to give users clear information about the nature, accuracy, and reliability of the content its service produces. If the company is found to be in breach, the authority has the power to impose substantial fines.
DeepSeek’s models power applications such as chatbots, virtual assistants, and online content platforms, and its technology has been adopted in sectors including healthcare, finance, and education.
The spread of AI-generated content has, however, raised concerns about misinformation and disinformation. Hallucinations arise because language models generate statistically plausible text rather than verified facts, and biased or incomplete training data can make the problem worse. The consequences can be serious in fields where accuracy and reliability are critical, such as healthcare and finance.
The AGCM investigation is not the first time DeepSeek has faced scrutiny in Italy. Earlier in 2025, the country’s data protection authority, the Garante, ordered the company to block access to its chatbot in Italy over concerns about how it handled users’ personal data. That episode highlighted the need for greater transparency and accountability in the development and deployment of AI technologies.
The AGCM’s probe is a significant development in the ongoing debate over how to regulate AI-generated content. As AI systems become more prevalent, governments and regulators are grappling with how to ensure that their output is accurate, reliable, and transparent.
It is also a wake-up call for DeepSeek and other AI firms to take greater responsibility for what their systems produce. AI-generated material should be held to the same standards of accuracy and transparency as human-generated content; failure to meet them can bring reputational damage and financial penalties.
In short, the case underscores that as AI technologies spread, companies must own the content their models generate, and regulators must put effective safeguards in place to keep that content accurate, reliable, and transparent.
Source: https://www.reuters.com/world/china/italy-regulator-opens-probe-into-chinas-deepseek-2025-06-16/