Building an Effective Disinformation Intelligence Practice: What Organisations Need to Know in


International Fact-Checking Network (IFCN)

The International Fact-Checking Network (IFCN) represents a critical global initiative aimed at countering the pervasive spread of disinformation and its potentially catastrophic consequences. Established in 2015 by the Poynter Institute, the IFCN has evolved into a collaborative platform uniting fact-checking organizations worldwide to address the challenges posed by false narratives in an increasingly interconnected digital landscape. The network’s formation was driven by the urgent need to combat disinformation, which has been linked to real-world harm, from undermining democratic processes to inciting violence.

By fostering a unified approach, the IFCN seeks to amplify efforts to verify information, expose falsehoods, and promote media literacy. Its role is particularly vital in the context of emerging technologies, such as generative AI, which complicate the distinction between factual content and fabricated material, as highlighted by Deloitte’s analysis on managing disinformation at scale. The IFCN’s work aligns with broader calls to identify and dismantle the networks behind viral disinformation, ensuring transparency and accountability in information ecosystems.

At the core of the IFCN’s operations is its commitment to establishing and upholding a rigorous code of principles that guides its member organizations. This code, developed through extensive collaboration and refined in its June 2025 final version, outlines key standards such as transparency, accuracy, and accountability. Members are required to adhere to these principles, ensuring that their fact-checking processes are both methodical and ethically sound. The code emphasizes the importance of public trust, mandating that fact-checkers disclose their methodologies, sources, and potential conflicts of interest. This framework not only strengthens the credibility of individual fact-checking efforts but also reinforces the IFCN’s role as a global standard-bearer for responsible information verification. By institutionalizing these principles, the IFCN helps members adapt to new challenges while maintaining the integrity of their work.

The IFCN functions as a dynamic hub for collaboration and knowledge sharing, enabling member organizations to pool resources, expertise, and insights. This collective approach is essential in an environment where disinformation spreads rapidly across borders and platforms. By facilitating cross-border cooperation, the IFCN helps member organizations address jurisdictional challenges and adapt to the evolving tactics of disinformation actors. For instance, the network has supported initiatives to trace the origins of viral falsehoods, a critical step in dismantling the networks behind harmful content.

This collaborative model also encourages the exchange of best practices, allowing members to refine their strategies in response to emerging threats. Such efforts are particularly relevant given the growing role of AI in generating and disseminating false information, which blurs the lines between truth and fabrication. The IFCN’s emphasis on shared knowledge ensures that its members remain equipped to tackle these challenges, reinforcing the network’s relevance in an era of technological disruption.

A key strength of the IFCN lies in its ability to integrate evidence-based strategies into its operations, reflecting the insights of experts who advocate for systemic solutions to disinformation. The Technology and International Affairs Program’s research underscores the necessity of policies grounded in empirical data to address the governance challenges posed by new technologies. The IFCN’s code of principles and collaborative frameworks exemplify this approach, as they are designed to be adaptable and responsive to the ever-changing landscape of information ecosystems. By prioritizing transparency and accountability, the network not only enhances the reliability of its members’ work but also aligns with broader calls to build resilient systems that resist the spread of falsehoods. This alignment is further reinforced by the IFCN’s partnerships with organizations such as Veritas, which provides ethical guidance on navigating the complexities of AI and disinformation [veritas.techethics.org], and by evidence-based policy research on striking the balance between technological innovation and ethical responsibility [carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guide].

EU High-Level Expert Group on Fake News and Online Disinformation

The European Union High-Level Expert Group on Fake News and Online Disinformation was established to address the growing threat of disinformation across the bloc. Formed in 2018 under the Digital Strategy, the group comprises experts from academia, civil society, and technology sectors tasked with analyzing the mechanisms of disinformation and proposing actionable solutions [digital-strategy.ec.europa.eu]. Their mission centers on enhancing the resilience of European societies against false information while safeguarding democratic processes and the integrity of digital spaces.

The group’s relevance to disinformation intelligence practices lies in its comprehensive approach, which integrates technical, legal, and societal perspectives to combat the multifaceted nature of online falsehoods. By synthesizing insights from diverse disciplines, the group has identified critical gaps in current strategies, emphasizing the need for proactive, data-driven frameworks that align with evolving digital ecosystems. For instance, the group’s final report highlights the necessity of contextualizing disinformation risks within the broader landscape of information ecosystems, where the interplay between algorithmic amplification, user behavior, and media ecosystems shapes the spread of false narratives [digital-strategy.ec.europa.eu].

Understanding the systemic drivers of disinformation

One of the key challenges in disinformation countermeasures is the inadequacy of traditional approaches, as evidenced by the account of an individual who has frequently been accused of spreading disinformation. This individual’s experience reveals that counter-disinformation efforts often fail to address the root causes of misinformation, such as the credibility of sources and the emotional resonance of false claims [propagandainfocus.com].

The feedback from this perspective highlights the limitations of reactive measures, which prioritize debunking over prevention. This aligns with the EU High-Level Expert Group’s emphasis on shifting from reactive to proactive strategies, advocating for the development of tools that not only detect disinformation but also anticipate its emergence. The group’s recommendations include the integration of behavioral insights into intelligence practices, enabling organizations to predict patterns of disinformation dissemination and tailor interventions accordingly.

Such an approach recognizes that disinformation is not merely a technical issue but a complex interplay of psychological, social, and technological factors.

The group’s findings also underscore the economic motivations behind disinformation campaigns, as noted in a report by the Internet Forum Europe. The report states that disinformation perpetrators often seek to exploit the same advertising revenues as legitimate news media, creating a perverse incentive to undermine trust in credible sources [internetforum.eu]. This economic dimension complicates intelligence practices, as it requires organizations to monitor not only the content of disinformation but also the financial structures that sustain it. The EU High-Level Expert Group therefore recommends the development of cross-sectoral collaboration mechanisms to track and disrupt the financial flows associated with disinformation. This includes partnerships between regulators, technology companies, and advertisers to ensure transparency and accountability in digital advertising ecosystems. By addressing the economic underpinnings of disinformation, intelligence practices can target the incentives that drive the proliferation of false narratives, thereby reducing their impact on public discourse.

Another critical recommendation from the group focuses on the need to expand disinformation-combating tools to low-resource languages. The Mever.gr project highlights that the effectiveness of counter-disinformation strategies is often limited by linguistic barriers, as many disinformation campaigns exploit the lack of resources for fact-checking and content moderation in minority languages [mever.gr]. This insight calls for the development of multilingual tools and platforms that enable equitable access to disinformation intelligence across Europe. The group advocates for the integration of machine learning and natural language processing technologies to support real-time monitoring and analysis of content in diverse linguistic contexts. Such measures are essential to ensure that disinformation intelligence practices do not inadvertently exclude marginalized communities from the fight against false information.
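The per-language routing this implies can be illustrated with a minimal sketch. Everything below is hypothetical: the watchlists stand in for trained per-language NLP models, and the function and data names are invented for demonstration only.

```python
# Illustrative sketch: routing content to per-language screening rules.
# Real systems would replace the keyword watchlists with trained
# classifiers for each supported language.
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    language: str
    flagged: bool
    matched_terms: list


# Hypothetical per-language watchlists (placeholders for ML models).
WATCHLISTS = {
    "en": {"miracle cure", "secret plan"},
    "el": {"θαυματουργή θεραπεία"},  # Greek, reflecting the Mever.gr focus
}


def screen(text: str, language: str) -> ScreeningResult:
    """Flag text for human review if it matches the language's watchlist."""
    terms = WATCHLISTS.get(language, set())
    hits = [t for t in terms if t in text.lower()]
    return ScreeningResult(language=language, flagged=bool(hits), matched_terms=hits)
```

The key design point is that an unsupported language degrades gracefully (nothing is flagged) rather than crashing, which makes the gap in low-resource coverage visible and measurable.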

The EU High-Level Expert Group’s recommendations also emphasize the importance of fostering a culture of media literacy and critical thinking among the public. This aligns with the Veritas project’s focus on ethical AI and the need for transparent algorithms that support informed decision-making [veritas.techethics.org]. By embedding media literacy into intelligence practices, organizations can empower individuals to discern credible information from disinformation. This approach not only mitigates the spread of false narratives but also strengthens societal resilience against manipulation. The group’s call for interdisciplinary collaboration between technologists, educators, and policymakers underscores the complexity of disinformation as a multifaceted challenge requiring coordinated, adaptive solutions. Ultimately, the EU High-Level Expert Group’s work provides a roadmap for organizations seeking to build effective disinformation intelligence practices, emphasizing the necessity of integrating technical, economic, and societal dimensions into their strategies.

NATO Strategic Communications Centre of Excellence

The NATO Strategic Communications Centre of Excellence (SCCE) was established to address growing challenges of disinformation and information warfare in an era where hybrid threats increasingly blur the lines between conflict and public discourse. As outlined in its foundational documents, the SCCE aims to strengthen NATO’s collective resilience by fostering strategic communication capabilities that align with democratic values and international law.

Its importance lies in its role as a hub for research, education, and operational collaboration, enabling NATO members to anticipate and counter disinformation campaigns that could destabilize alliances or undermine public trust. The Centre’s work is particularly critical in times of crisis, where the rapid spread of false narratives can exacerbate tensions or distort the perception of events. By integrating strategic and operational perspectives, the SCCE has helped NATO members fine-tune their approach, for instance through its Strategic Communication and Information Operations (SCIO) program, which provides members with tools to analyze disinformation patterns, assess their impact on national and international stability, and develop counter-narratives that align with democratic principles.

By embedding these practices into institutional structures, the SCCE helps nations transition from crisis management to long-term resilience, as evidenced by recent research, which showed the value of the 2020 Strategic Communications Workshop.

A central focus of the SCCE is the development of frameworks that enable organizations to build internal capacity for disinformation intelligence, rather than relying solely on reactive incident response teams. This approach aligns with findings from the Sofia Information Integrity Forum, which emphasized the necessity of strategic functions that anticipate and mitigate disinformation risks before they escalate.

The SCCE’s role is further amplified through its emphasis on collaboration and information sharing across borders. Research highlights the benefits of enhancing cooperation on defence equipment and intelligence, which can create a more unified front against disinformation. The Centre facilitates this by organizing joint exercises, such as the NATO Cooperative Cyber Defence Centre of Excellence’s training programs, which simulate real-world disinformation scenarios and test the effectiveness of cross-border communication strategies.

These exercises not only improve technical capabilities but also foster trust among allies, ensuring that shared intelligence can be leveraged efficiently during crises. Additionally, the SCCE collaborates with non-NATO entities, such as the Japan Defence Agency, as seen in its work on the Securitization of Foreign Disinformation, outlined in the Centre’s publication of the same name. The Centre also grounds its efforts in a clear-eyed approach to data, as exemplified by the measurement framework presented in its 2021 publication, “How to Measure the Impact of a Campaign.”

The European Defence Agency, for example, has been a key partner, with the agency’s team helping the Centre refine its approach, as highlighted in the 2021 report, “The European Defence Agency’s Contribution to the Strategic Communication Capacity Builder (SCCB) Programme,” which assessed the impact of the initiative. The SCCE’s own efforts are well-positioned to continue this success, as evidenced by the growing number of its initiatives adopted by NATO members across the alliance.

Key principles that emerge from the SCCE’s work include:

- Building internal, strategic capacity for disinformation intelligence rather than relying solely on reactive incident response teams.
- Fostering cross-border collaboration and information sharing, including joint exercises and partnerships beyond NATO membership.
- Taking a data-driven approach to measuring the impact of disinformation campaigns and the counter-measures deployed against them.

Global Disinformation Index

The Global Disinformation Index (GDI) serves as a critical tool for organizations seeking to navigate the complex landscape of disinformation by fostering transparency and accountability. By establishing a comprehensive ranking system, the GDI enables entities to assess their vulnerability to disinformation campaigns and the potential harm they may inadvertently enable. This structured approach not only highlights systemic risks but also underscores the responsibility of stakeholders to act with integrity.

For instance, the EU DisinfoLab’s recent update notes that disinformation has escalated significantly in 2026, with accountability mechanisms remaining inadequate. Such findings emphasize the necessity of transparent frameworks that hold organizations accountable for their role in either amplifying or mitigating disinformation. The GDI’s rankings provide a clear benchmark, allowing institutions to align their practices with ethical standards and demonstrate commitment to reducing harm.

This transparency is essential in an environment where disinformation often operates in the shadows, obscuring the true impact of harmful narratives.

Collaboration emerges as a cornerstone of the GDI’s effectiveness, as it facilitates the exchange of insights and resources among organizations to combat disinformation collectively. The account of an individual who has faced accusations of spreading disinformation illustrates the challenges of addressing disinformation in isolation. By pooling knowledge, organizations can develop more robust strategies to identify and neutralize disinformation tactics. For example, sharing data on the sources and methods of disinformation campaigns enables entities to anticipate threats and coordinate responses. This collaborative model also extends to the development of shared tools and best practices, such as standardized verification protocols or joint monitoring systems. The GDI’s emphasis on collective action reflects the recognition that disinformation thrives on fragmentation, and only through unified efforts can stakeholders create a resilient defense and a broader ecosystem of trust and shared responsibility.

The GDI’s value lies in its ability to deliver actionable intelligence that translates into targeted strategies for countering disinformation. This intelligence empowers organizations to implement proactive measures such as refined content moderation policies, enhanced user behavior analysis, and strategic communications planning. The integration of artificial intelligence (AI) into these processes has become increasingly vital, as AI can analyze vast datasets to detect patterns and predict the spread of disinformation. However, the ethical implications of AI’s role in disinformation detection require careful consideration. A study exploring AI’s applications highlights the need for frameworks that balance technological capabilities with human oversight to prevent unintended biases or harms. The GDI’s actionable insights must therefore include guidance on leveraging AI responsibly, ensuring that automated systems are transparent, aligned with ethical standards, and protective of individuals’ rights and privacy.
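As a toy illustration of what pattern detection over engagement data might look like (this is not the GDI’s actual methodology; the data shape and the five-times-median threshold are assumptions for demonstration), a simple burst detector can flag hours of anomalous amplification for human review:

```python
# Illustrative sketch: flag hours whose share counts spike far above the
# series median, a crude proxy for possible coordinated amplification.
from statistics import median


def detect_bursts(hourly_shares, factor=5.0):
    """Return indices of hours whose share count exceeds `factor` times
    the series median. The median is robust to the spikes themselves,
    unlike the mean."""
    if not hourly_shares:
        return []
    baseline = max(median(hourly_shares), 1)  # floor avoids a zero baseline
    return [i for i, v in enumerate(hourly_shares) if v > factor * baseline]
```

A median baseline is chosen over the mean deliberately: a single large spike drags the mean upward and can mask itself, while the median stays anchored to typical activity. Any flagged hour would still need human analysis before attributing intent.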

Beyond technical strategies, the GDI’s actionable intelligence must address the broader societal impact of disinformation. This includes fostering media literacy and empowering users to critically evaluate information. The GDI’s framework should encourage organizations to invest in educational initiatives that demystify disinformation tactics and promote digital literacy. Such efforts are essential in an era where misinformation spreads rapidly, often outpacing traditional fact-checking methods. Additionally, the GDI can play a role in advocating for policy reforms that support transparency in digital platforms and hold entities accountable for their role in amplifying harmful content. By integrating these dimensions, the GDI’s intelligence extends beyond reactive measures to encompass long-term systemic change.

The GDI’s success hinges on its ability to adapt to the evolving nature of disinformation, which is increasingly shaped by technological advancements and global dynamics. The integration of ethical guidelines, as advocated by resources such as Veritas, is crucial in navigating these complexities [veritas.techethics.org]. By embedding ethical considerations into its framework, the GDI can ensure that its strategies not only address immediate threats but also uphold the values of justice, equity, and accountability in the digital age, strengthening society in the face of persistent disinformation challenges.
