Core Concepts and Definitions¶
State-sponsored disinformation refers to the deliberate spread of false or misleading information by a state or its agents, typically to influence public opinion, manipulate political outcomes, or undermine adversaries. Unlike general disinformation, it is backed by institutional resources, strategically coordinated, and often integrated with state-controlled media.
The intent is not merely to spread falsehoods but to shape narratives aligned with national interests. State-Sponsored Disinformation Around the Globe, for example, documents how political actors achieve this by deceiving their own citizens, blending ordinary political discourse with orchestrated manipulation, as evidenced by the tactics catalogued by DISA. These campaigns employ sophisticated techniques: exploiting social media algorithms and deploying deepfakes and astroturfing to amplify their reach and credibility. Understanding their impact is essential to preserving democratic institutions, because disinformation erodes trust in the media, in electoral processes (as documented by Iosifidis), and in governmental transparency. It is a complex challenge with far-reaching consequences.
One well-documented tactic is the creation of coordinated inauthentic networks, sometimes called “troll farms.” Russia’s Internet Research Agency (IRA), based in St. Petersburg, employed hundreds of operatives who posed as American citizens on Facebook, Twitter, and Instagram. Between 2014 and 2018, IRA-linked accounts reached an estimated 126 million Americans on Facebook alone, according to the company’s own disclosures to the U.S. Senate Intelligence Committee.
These networks operate on a simple but effective model: create accounts that mimic local voices, build audiences through non-political content, then gradually introduce divisive or misleading narratives. The IRA’s playbook involved amplifying both sides of contentious issues, from racial tensions to gun control, with the goal of deepening societal fractures rather than promoting a single viewpoint.
China has adopted a different approach through what researchers call the “50 Cent Army” and, more recently, the Spamouflage network. Rather than stoking division, Chinese state-linked operations tend to promote positive narratives about the Chinese Communist Party while suppressing criticism, particularly around Hong Kong, Taiwan, and Xinjiang. A 2020 investigation by Graphika and the Stanford Internet Observatory identified thousands of coordinated accounts across YouTube, Facebook, and Twitter pushing pro-Beijing messaging.
Iran’s operations, while smaller in scale, have followed a similar pattern. The “Endless Mayfly” campaign, identified by the Citizen Lab at the University of Toronto, involved creating fake news websites that impersonated legitimate outlets like Bloomberg and The Guardian. These sites published fabricated stories designed to inflame tensions in the Middle East and discredit Saudi Arabia and the United States.
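Investigations like those by Graphika, the Stanford Internet Observatory, and the Citizen Lab often flag such networks by looking for many distinct accounts publishing identical content within a narrow time window. The sketch below is a simplified illustration of that idea, with invented account names and posts; real attribution work combines many more signals (infrastructure, language artifacts, posting schedules).

```python
from collections import defaultdict

def find_coordinated_groups(posts, window_seconds=60, min_accounts=3):
    """Flag groups of accounts that published identical text within a
    short time window -- one simple signal of coordinated behavior.
    `posts` is a list of (account, timestamp_seconds, text) tuples."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    groups = []
    for text, entries in by_text.items():
        entries.sort()  # order by timestamp
        accounts = {acc for _, acc in entries}
        # Flag only when several distinct accounts post the same text
        # inside the window.
        if (len(accounts) >= min_accounts
                and entries[-1][0] - entries[0][0] <= window_seconds):
            groups.append((text, sorted(accounts)))
    return groups

# Hypothetical data: three accounts push the same message within seconds.
posts = [
    ("acct_a", 0,  "Shocking news about candidate X!"),
    ("acct_b", 5,  "Shocking news about candidate X!"),
    ("acct_c", 12, "Shocking news about candidate X!"),
    ("acct_d", 30, "Nice weather today."),
]
print(find_coordinated_groups(posts))
```

Here only the first three accounts are flagged: they share identical text inside the window, while the lone unrelated post is ignored.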
Definition of State-Sponsored Disinformation¶
State-sponsored disinformation is the systematic creation and dissemination of false or misleading information, directed or funded by a government, with the intent to advance strategic objectives. It differs from ordinary misinformation in three critical ways: it is deliberate rather than accidental, it is backed by state resources and infrastructure, and it serves identifiable geopolitical goals.
The distinction matters because state sponsorship brings scale. A lone conspiracy theorist posting on social media has limited reach. A government-backed operation can deploy hundreds of operatives, purchase targeted advertising, hack and leak authentic documents mixed with fabricated ones, and coordinate messaging across state-controlled media outlets, social platforms, and diplomatic channels simultaneously.
Russia’s interference in the 2016 U.S. presidential election remains the most extensively documented case. The IRA spent over $1.25 million per month on its U.S.-focused operations by September 2016, according to the Mueller Report. The campaign combined social media manipulation with the hack-and-leak operation against the Democratic National Committee, conducted by Russian military intelligence (GRU). These were parallel but distinct efforts, both directed at undermining public confidence in the electoral process.
The Brexit referendum in the United Kingdom faced similar interference. Research published by the University of Edinburgh identified over 150,000 Russian-linked Twitter accounts that posted about Brexit in the days surrounding the June 2016 vote. While the direct impact on the outcome remains debated, the campaign illustrated how foreign actors could insert themselves into a domestic democratic process at minimal cost.
State-sponsored disinformation also operates domestically. Governments in Venezuela, the Philippines, and Myanmar have deployed coordinated online campaigns against their own citizens, targeting opposition figures, journalists, and civil society organizations. In Myanmar, a U.N. fact-finding mission concluded that Facebook had played a “determining role” in spreading military-backed disinformation that fueled violence against the Rohingya population.
What makes these campaigns particularly dangerous is their ability to exploit existing social divisions. Rather than inventing grievances from scratch, state actors identify genuine fault lines, whether racial, political, economic, or cultural, and amplify them. The goal is not necessarily to convince people of a specific falsehood but to erode the shared epistemic foundation that democratic deliberation requires. When citizens can no longer agree on basic facts, collective self-governance becomes far more difficult.
The Role of Democratic Institutions¶
Democratic institutions – such as elections, legislatures, and judicial systems – are foundational frameworks designed to uphold transparency, accountability, and representation. These mechanisms balance power, protect individual rights, and enable collective governance. Their effectiveness depends on citizen trust, and their integrity depends on reliable information. When disinformation undermines that transparency or accountability, public confidence in institutions erodes, weakening democratic resilience.
State-sponsored disinformation campaigns pose a significant threat because they manipulate public perception and destabilize governance. Without robust safeguards, institutions face polarization, weakened civic engagement, and the erosion of democratic principles. This dynamic is visible in Western democracies, where foreign interference campaigns have targeted elections, referenda, and public health policy. A 2020 study, “Theoretical understanding of State-Sponsored Disinformation,” documented such operations within Western democratic systems, making this a critical area of ongoing research.
Supporting Details, Examples, and Data¶
State-sponsored disinformation campaigns take diverse forms, including fake news, propaganda videos, and coordinated social media efforts. For example, during the 2020 U.S. presidential election, disinformation about voter fraud spread widely, influencing public perception of election integrity. Similarly, in the 2016 U.S. elections, Russian campaigns used social media to amplify divisive content, polarizing public opinion and ultimately undermining trust in democratic processes.
These tactics often exploit recommendation algorithms to create echo chambers that reinforce existing biases. Research highlights alarming trends: 41 campaigns have been documented, including one anticipated in 2025. During the 2019 European Parliament elections, over 1.2 million disinformation posts were shared daily, targeting specific demographics to sway voter behavior; 60% of these circulated on platforms like Facebook and Twitter.
Artificial intelligence is increasingly used to generate disinformation, underscoring the technological sophistication of these efforts. Motivations vary, from enhancing regime legitimacy to weakening political opponents. For instance, Chinese disinformation campaigns, analyzed by Lu (2022), blur the lines between propaganda and public discourse, targeting media, civil society, and electoral systems. The consequences are profound: trust in democratic institutions erodes, polarization rises, and societies become vulnerable to manipulation.
As noted by Veritas.techethics.org, these campaigns succeed in part because they exploit pre-existing vulnerabilities in information ecosystems. Platforms optimized for engagement inadvertently reward sensational and divisive content, giving state-backed operatives a structural advantage that requires relatively modest investment to exploit.