AI-Generated Fake News Floods Global Information Network. EU Report Reveals Scale of Threat


AI-Powered Disinformation: A Growing Global Threat Revealed by EU Report

Artificial intelligence (AI) has emerged as a pivotal tool in foreign disinformation campaigns, according to the latest report from the European External Action Service (EEAS). The year 2025 saw a significant surge both in the number of incidents and in the scale at which AI tools were used to propagate false content globally.

AI Fuels a New Wave of Disinformation

The EEAS documented 540 cases of Foreign Information Manipulation and Interference (FIMI) in 2025, spanning more than 100 countries and underscoring the worldwide scope of these operations. A striking 147 of these incidents involved AI-powered tools, a 259% increase from the 41 cases reported just one year prior. This dramatic rise indicates that AI technology has moved beyond the experimental phase to become standard practice in propaganda efforts.
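As a quick back-of-the-envelope check (our own calculation, not part of the EEAS report), the stated growth rate follows directly from the two case counts:

\[ \frac{147 - 41}{41} = \frac{106}{41} \approx 2.585, \quad \text{i.e. an increase of roughly } 258.5\%, \text{ rounded to } 259\%. \]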

AI is now routinely used to generate deceptive texts, synthetic audio recordings, and deepfake videos. This capability significantly reduces the cost and accelerates the production of malicious content, enabling its mass dissemination across social media platforms with unprecedented ease.

The EEAS emphasizes that AI is not merely an enhancement for traditional propaganda; it is an integral component of a sophisticated and cohesive influence ecosystem. In this system, manipulated content is amplified by bot networks and fake account farms, forming part of a broader “hybrid” arsenal that combines digital interference with physical signaling and coercive actions. Consequently, distinguishing between authentic materials and those generated by AI becomes virtually impossible for the average internet user.

Tens of Thousands of Fake News Incidents

The report further reveals that approximately 43,000 pieces of disinformation content—ranging from articles to audio and video materials—were identified in 2025. These were spread across 19 social media platforms and messaging applications. The platform X (formerly Twitter) played a particularly significant role, accounting for a staggering 88% of this activity.

Approximately 10,500 unique channels were employed to amplify these messages. These channels included pseudo-information websites, blogs, forums, and social media accounts, often masquerading as local news outlets or “independent experts.” The objective is not only to reach the largest possible audience but also to construct a dense web of “sources” that mutually cite each other, thereby boosting the perceived credibility of the message. This tactic makes information verification exceedingly difficult, if not impossible, and intensifies the “filter bubble” effect, where individuals are primarily exposed to information that confirms their existing beliefs.

Russia and China: Primary Drivers of Disinformation

The EEAS attributes a combined 35% of all detected FIMI incidents to Russia (29%) and China (6%). Experts caution that the actual share attributable to these countries could be even higher, given their sophisticated use of intermediary networks and front entities. According to the report, Russia remains the most aggressive actor, consistently integrating information manipulation as a core element of its broader military and political strategy.

Russia employs FIMI as a central pillar of its hybrid warfare arsenal, intertwining information campaigns with physical actions such as sabotage and drone incidents, primarily targeting Ukraine and its partners within the European Union (EU) and NATO.

Looking ahead, Russian FIMI activities are projected to intensify further in 2026. Funding for Russian state-controlled media is expected to grow to approximately 1.56 billion euros, a 7% rise over 2025. The Baltic Sea and Arctic regions are anticipated to be among the primary targets in this expanded information warfare landscape.

China, conversely, increasingly leverages disinformation to shape narratives around its foreign policy, economic strategies, and technological advancements. Its efforts focus on enhancing its global image and diminishing Western influence, often through Transnational Information Suppression (TIS), a method designed to silence critical voices beyond its borders by means of economic, legal, and technological pressure.

Who Are the Main Targets?

Ukraine remains the most frequently targeted country, enduring disinformation campaigns that weave together military, political, and social narratives. These campaigns aim to undermine trust in authorities and institutions, while also exploiting “war fatigue” among Western populations.

Other countries frequently targeted include France, Moldova, and Germany. A notable increase in incidents was also observed in Armenia, where disinformation campaigns intensified leading up to parliamentary elections.

Beyond states, specific individuals are also targets. Approximately 140 politicians, opinion leaders, and representatives of international institutions have been subjected to discrediting campaigns. Prominent figures mentioned include Volodymyr Zelenskyy, Maia Sandu, Emmanuel Macron, and Ursula von der Leyen. The primary goal of these attacks is to erode public trust in key figures guiding the policies of the EU and its partners. Government and military institutions, media organizations, non-governmental organizations (NGOs), and academic circles have also been common targets.

Frequently Asked Questions (FAQ)


What is Foreign Information Manipulation and Interference (FIMI)?

Foreign Information Manipulation and Interference (FIMI) refers to malicious attempts by foreign actors to deliberately influence public opinion, decision-making, or political processes in a target country, often through deceptive or coercive means. It encompasses a wide range of activities, including disinformation campaigns, propaganda, cyber operations, and the use of artificial intelligence to create and spread false content.


How is AI making disinformation more dangerous?

AI significantly amplifies the danger of disinformation by enabling the rapid, low-cost, and large-scale generation of highly convincing fake content, such as deepfake videos, synthetic audio, and deceptive texts. This technology automates content creation, accelerates dissemination across platforms, and makes it incredibly difficult for average users to distinguish between authentic and fabricated information.


Which countries are identified as primary sources of state-sponsored disinformation?

According to the EEAS report, Russia and China are identified as the leading state actors in foreign information manipulation and interference. Russia is considered the most aggressive player, integrating FIMI into its military and political strategies, while China increasingly uses disinformation to shape narratives around its foreign policy and diminish Western influence.


What are the long-term implications of AI-driven disinformation on democratic processes and public trust?

The long-term implications of AI-driven disinformation are profound and threaten the foundations of democratic societies. It can erode public trust in institutions, media, and even facts themselves, leading to increased polarization, political instability, and a weakened capacity for informed decision-making. By making it harder to discern truth from falsehood, AI-powered disinformation can undermine electoral integrity, fuel social unrest, and enable foreign adversaries to exert undue influence over sovereign nations. Combating this threat requires robust fact-checking, media literacy initiatives, and international cooperation to develop effective countermeasures.

Source: PAP, EEAS
Opening photo: Gemini
