Influence Operations in the 2024 Election Cycle: Lessons for 2026 and Beyond

Definition of Influence Operations

Influence operations are strategic efforts to shape public opinion, typically with the goal of swaying political outcomes or disrupting democratic processes. They work by manipulating information environments and employing psychological tactics to amplify specific narratives, erode trust in institutions, and undermine the legitimacy of political actors.

A critical distinction exists between influence operations and related phenomena. Disinformation is deliberately crafted to mislead, while misinformation arises from negligence or misunderstanding. Foreign interference involves direct attempts to alter election outcomes, whereas foreign influence seeks to shape the broader information environment in which voters make decisions. The Brennan Center has documented how these categories overlap but require different policy responses.

Social media platforms have become central to modern influence operations, enabling rapid content dissemination to vast audiences. Algorithms that prioritise emotionally charged material create echo chambers that reinforce existing biases, allowing hostile actors to exploit pre-existing distrust by amplifying divisive narratives. AI-enabled operations have increasingly targeted voters' perceptions in the run-up to elections, using personalised content to shift attitudes on specific policy issues.
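
To make the mechanism concrete, here is a minimal Python sketch of a purely engagement-optimised ranker. The Post fields, weights, and function names are illustrative assumptions, not any platform's actual system; the point is that nothing in the objective penalises divisive content.

```python
# Minimal sketch of engagement-weighted feed ranking.
# All field names and weights are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click probability
    predicted_shares: float   # model's estimate of share probability
    emotional_charge: float   # 0.0 (neutral) to 1.0 (highly charged)

def engagement_score(post: Post) -> float:
    # A ranker tuned purely for engagement tends to reward emotionally
    # charged material, because charged posts are clicked and shared more.
    return (0.4 * post.predicted_clicks
            + 0.4 * post.predicted_shares
            + 0.2 * post.emotional_charge)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts are shown first; the objective contains no
    # term that penalises divisive or misleading content.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under a scoring rule like this, a hostile actor does not need to hack anything: producing content that scores high on emotional charge is enough to earn preferential distribution.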

Brief History of Influence Operations

Influence operations are not new. During the Cold War, state-sponsored campaigns used radio broadcasts, print media, and covert funding of political movements to sway foreign populations. What has changed is the scale, speed, and sophistication of these operations.

The 2016 U.S. election marked a turning point. Russian-linked actors used social media to spread disinformation at unprecedented scale, exploiting platform algorithms to amplify divisive content across racial, political, and cultural lines. The Internet Research Agency’s campaign demonstrated that relatively small investments in targeted social media content could reach millions of voters and generate real-world protest activity.

By 2024, tactics had expanded considerably. Synthetic media, including deepfakes and AI-generated text, began to blur the line between authentic and fabricated content. Fabricated images circulating during the September 2024 campaign period combined visual and textual manipulation techniques. AI now enables rapid, personalised content generation at scale, while machine learning allows real-time adaptation of disinformation strategies, making them harder to detect. AI-driven bots can mimic human behaviour convincingly enough to evade traditional moderation tools.

This shift has moved the focus from mass broadcasting to micro-targeted interventions, complicating attribution and response efforts. The challenge for 2026 and beyond is that the tools for creating and distributing disinformation are becoming cheaper and more accessible, while the tools for detecting and countering it struggle to keep pace.

How These Tactics Were Used in the 2024 Cycle

In the 2024 election cycle, influence operations relied on a combination of digital misinformation, targeted advertising, and grassroots mobilisation. These tactics sought to shape public discourse while bypassing traditional media gatekeepers.

Bot networks that paired traditional amplification techniques with newer AI-generated content were widely observed across campaigns. These operations frequently targeted specific demographics along cultural or political fault lines. Economic data was woven into political messaging to frame debates around perceived threats to national stability and prosperity, blurring the line between factual analysis and ideological persuasion.

Private actors also played a growing role. Wealthy individuals funded localised campaigns that bypassed institutional constraints, while successful operations often combined disinformation with efforts to suppress independent media coverage, creating environments of information scarcity.

The 2026 election cycle is expected to see further evolution as both influence operations and platform defences adapt. Strengthening regulatory frameworks and investing in public media literacy will be essential to safeguarding democratic processes.

The Role of Social Media Platforms

Social media platforms like Facebook, X (formerly Twitter), and YouTube have become the primary battleground for influence operations, and their response to this challenge has been inconsistent at best.

Facebook’s content moderation process has drawn sustained criticism for its opacity. The company’s internal processes for identifying and removing coordinated inauthentic behaviour are not visible to outside researchers, making it difficult to assess their effectiveness. During the 2024 cycle, Meta reported removing several networks of fake accounts linked to foreign state actors, but independent analysts noted that enforcement was often reactive, catching campaigns only after they had already reached significant audiences.

The fundamental tension is structural. These platforms generate revenue through engagement, and emotionally charged content, including disinformation, drives engagement. Recommendation algorithms are optimised to keep users on the platform, which means sensational and divisive content receives preferential distribution. Even when platforms invest in content moderation, the volume of posts makes comprehensive review impossible. Meta processes billions of pieces of content daily; human review at that scale is not feasible, and automated detection systems remain unreliable at distinguishing between legitimate political speech and coordinated manipulation.
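
To illustrate why automated detection struggles, consider a naive coordination detector that flags accounts posting near-identical text within a short time window. This is a hypothetical sketch, not any platform's real pipeline; the thresholds, data shapes, and names are assumptions chosen for clarity.

```python
# Naive coordinated-posting detector: flag pairs of accounts that
# publish near-identical text within a short time window.
# Thresholds and input format are illustrative assumptions.
from collections import defaultdict
from difflib import SequenceMatcher

WINDOW_SECONDS = 300        # posts within 5 minutes of each other
SIMILARITY_THRESHOLD = 0.9  # near-duplicate text

def similar(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordination(posts):
    """posts: list of (account_id, timestamp_seconds, text) tuples."""
    posts = sorted(posts, key=lambda p: p[1])  # order by time
    flagged = defaultdict(set)
    for i, (acct_i, t_i, text_i) in enumerate(posts):
        for acct_j, t_j, text_j in posts[i + 1:]:
            if t_j - t_i > WINDOW_SECONDS:
                break  # later posts fall outside the window
            if acct_i != acct_j and similar(text_i, text_j) >= SIMILARITY_THRESHOLD:
                flagged[acct_i].add(acct_j)
                flagged[acct_j].add(acct_i)
    return flagged
```

A near-match heuristic like this catches copy-paste botnets, but it fails as soon as each automated account paraphrases its output with a language model, which is precisely the evasion problem described above: the same message, reworded thousands of ways, becomes indistinguishable from organic political speech.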

Platform transparency has improved incrementally. Ad libraries now allow researchers to track political advertising, and some platforms publish periodic reports on state-backed information operations they have disrupted. But these disclosures remain voluntary and selective. Researchers at the Brookings Institution have argued that without mandatory, standardised transparency requirements, platforms will continue to share only what serves their public relations interests.
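
As one example of what these ad libraries make possible, the sketch below queries Meta's Ad Library API for political ads. The endpoint and parameter names follow Meta's public documentation as of this writing, but API versions and available fields change, so verify against the current docs before relying on it.

```python
# Sketch of querying Meta's Ad Library API for political ads.
# Requires a researcher access token (Meta identity verification).
# Endpoint, parameters, and fields follow Meta's public docs; confirm
# against the current API version before use.
import requests

ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/v19.0/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "US",
        "fields": "page_name,ad_delivery_start_time,spend,impressions",
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Disclosures like these let outside researchers track who is paying for political messaging and roughly how far it travels, which is exactly the kind of standardised visibility that remains voluntary today.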

Looking ahead to 2026, the most significant challenge may be the fragmentation of the social media environment itself. As users migrate across platforms, including encrypted messaging apps and decentralised networks, influence operations will become harder to track, and the current platform-by-platform approach to moderation will become increasingly inadequate. Building resilience will require not just better platform policies, but broader investment in media literacy and cross-sector collaboration.
