Five Types of Disinformation Response

DISARM Foundation
Apr 29, 2022

When faced with disinformation, responders can approach the problem from a variety of standpoints. Matching the response to the incident or campaign is a key first step in effectively countering disinformation, but that matching frequently relies on intuition rather than a principles-based analytic approach, and it can get complicated quickly.

We can simplify that. One set of response types that can be applied as part of a disinformation response effort is:

  • Resource — exhaust a disinformation creator’s resources (people, money, materials, etc.) so they have less available for disinformation incidents / campaigns.
  • Artifact — limit the availability of artifacts (messages, hashtags, accounts, etc.) associated with an incident or campaign.
  • Narrative — make the narrative space less amenable to disinformation, by countering or outcompeting disinformation narratives.
  • Volume — reduce audience and algorithm attention on disinformation artifacts, e.g. by outcompeting them with non-disinformation artifacts.
  • Resilience — reduce the harm done by disinformation campaigns, by preparing the information landscape and audience for it.

Basically, these make a disinformation creator’s resources, artifacts, narratives, amplification, or audience less useful to them. Because every component in the chain must be aligned to carry out an effective campaign, weakening any link affects the end result.

A resource-based approach to disinformation response focuses on modeling disinformation ecosystems using frameworks such as DISARM to enable simulations or games in which multiple players compete for limited resources such as narratives, attention, and time. This approach can enable response teams to uncover novel strategies which apply to the unique environment they operate within.
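As a toy illustration of what such a simulation might look like, the sketch below pits a disinformation creator against a responder competing for a fixed pool of audience attention over several rounds. All of the budgets, probabilities, and payoff numbers are invented assumptions for illustration only; they are not part of DISARM or of any published model.

```python
import random

# Toy resource-competition game: a disinformation creator and a responder each
# hold a budget of generic "resource units" (people, money, time) and spend
# some of it each round to capture a fixed pool of audience attention.
# Every number here is an invented assumption for illustration only.
ROUNDS = 10
ATTENTION_POOL = 100.0

creator_budget, responder_budget = 50.0, 50.0
creator_attention = 0.0

random.seed(1)
for rnd in range(1, ROUNDS + 1):
    # Each side commits a random share of its remaining budget this round.
    creator_spend = min(creator_budget, random.uniform(2, 8))
    responder_spend = min(responder_budget, random.uniform(2, 8))
    creator_budget -= creator_spend
    responder_budget -= responder_spend

    # The creator captures attention in proportion to its share of this
    # round's total spend; the responder's spend crowds it out.
    total_spend = creator_spend + responder_spend
    share = creator_spend / total_spend if total_spend else 0.0
    creator_attention += share * (ATTENTION_POOL / ROUNDS)

    print(f"round {rnd:2d}: creator spent {creator_spend:.1f}, "
          f"responder spent {responder_spend:.1f}, "
          f"creator attention so far {creator_attention:.1f}")
```

Even a sketch this small makes the resource logic visible: anything that drains the creator’s budget early shrinks the attention they can capture in later rounds.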

The resource-based approach is still emerging, and relies heavily on modeling and simulation. These models can take a view from multiple levels. One is the human level, where narratives compete for dominance (“narrative warfare”), mirroring a reality in which people communicate with each other primarily through stories. Those stories are the basis for a community’s sense of self, of who belongs to the in-group, and of who is pushed to the out-group. Because such narratives are deeply personal and heavily emotionally charged, they shift most effectively through shaping and redirection rather than through being countered outright with facts.

Narrative warfare is an emerging field, but techniques such as topic modeling and gisting with natural language processing can help track the narratives disinformation actors are pushing and surface competing narratives to potential audiences. These techniques have already proven useful and will grow more so as research continues.
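As a rough sketch of the kind of tooling involved, the snippet below uses scikit-learn’s off-the-shelf LDA topic model to pull candidate themes out of a small pile of collected messages. The messages, the parameter choices, and the idea that two topics is enough are all illustrative assumptions; a real monitoring pipeline would need far more data and tuning.

```python
# Minimal topic-modeling sketch using scikit-learn's LDA implementation.
# The "messages" below are invented placeholders standing in for collected posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = [
    "phone towers are secretly making people sick, officials hide it",
    "officials are hiding that the new towers make everyone sick",
    "the moon landing footage was staged in a studio",
    "studio lighting proves the moon landing was staged",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(messages)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

# Print the top words of each discovered topic as a crude "narrative" summary.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```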

An artifact-based approach to disinformation response focuses on the concrete artifacts resulting from a disinformation incident: messages, images, and so on. For smaller incidents with limited reach, simply ignoring the artifacts, and thereby denying them attention, appears to be an effective approach. Once an incident grows beyond a certain size, however, ignoring it is no longer beneficial.

Attempts to engage with disinformation artifacts have produced mixed results. Debunking, that is, addressing individual pieces of misinformation to point out inaccuracies or other problems with the content itself, runs into several key difficulties. The debunking information must itself be disseminated throughout the affected population, which can be difficult or impossible, especially for audiences predisposed to believe the misinformation and to mistrust or ignore debunking attempts. Debunking artifacts also frequently lack the “virality” of the misinformation itself, giving them far lower reach. Finally, there is an inherent time gap between the release of misinformation artifacts and the counter-artifacts aimed at undoing their spread. Under this paradigm, misinformation always has a head start, and responders are trapped in a cycle of collecting artifacts and preparing counter-artifacts for misinformation that is already “in the wild” and causing damage.

As such, artifact-based responses rely heavily on an agile team of responders who are able to rapidly iterate in response to artifacts detected by a robust monitoring apparatus and spread response artifacts widely. All three factors — rapid response, robust monitoring, and extensive and trusted reach to spread counter-artifacts — must be in place for a truly effective artifact-based approach.

Social media platforms which label misleading content are likely the most prominent example of this response type.

A narrative-based approach to disinformation response focuses on the overall narrative behind the misinformation rather than on any concrete manifestation of it. Narratives are rarely about ‘truth’: they act on people’s emotions, trust, sense of belonging, and view of the world as a whole. Novels, films, and other media may be obvious fiction, yet they still shape the people who encounter them, precisely because they work through narrative.

To combat narratives spread by disinformation, counternarratives can be created to compete for attention. These counternarratives are deliberately constructed so that they cannot coexist with the original narrative. If the original narrative is “the EU cannot be trusted,” for example, a counternarrative takes a deliberately oppositional stance, such as “the EU always tries to do what is best for the people of its member states.” Note that while the counternarrative may not be a simple negation of the original narrative, the two still cannot coexist: believing one (an emotional choice, not necessarily a factual one) excludes the other.

Narrative-based responses rely heavily on the ability of the responders to deeply understand the intended audience. More than any of the other approaches, the narrative-based approach relies on human emotion to succeed. Techniques such as careful framing, use of humor, and audience segmentation can improve the likelihood of success, but none are a guarantee.

Counternarrative efforts such as the German Ministry of Foreign Affairs’ #EuropeUnited campaign and the Baltic Elves’ work in Lithuania, Latvia, and other Baltic states provide examples of this approach.

A volume-based approach to disinformation response focuses on denying the use of communications channels through sheer volume. Misinformation is drowned out, and attention is effectively denied, when the content becomes difficult or impossible to find in a sea of unrelated information.

Responses which rely on volume require two elements to succeed: an understanding of the communications channels used to spread disinformation, and the ability to flood those channels with enough information to prevent their use. This technique generally cannot be sustained over a long period and must be carefully timed to be effective.

Korean pop (K-pop) fans organized in 2020 to disrupt US far-right hashtags and spaces such as #whitelivesmatter and Parler by posting music videos and GIFs to drown out any other conversation. Similarly, LGBTQ advocates encouraged by George Takei hijacked the far-right hashtag #ProudBoys by flooding it with positive gay imagery.

A resilience-based approach to disinformation response focuses on “inoculating” information nodes to reduce the overall spread rate of disinformation, either by improving a node’s ability to identify and discard misinformation or, in the case of some social media platforms, by removing the node entirely. “Node” here refers to any actor or entity in the relevant information landscape: small communities may treat individuals as nodes, tech companies may focus on profiles, and governments may focus on news media or key influencers.

At any scale, the concept of a resilience response closely mirrors a public health approach. By making each individual node less likely to be “infected” by misinformation and to retransmit it, the effects of disinformation can be slowed and, if all goes well, eventually stopped entirely. Education campaigns may teach individuals how to spot misinformation, while online platforms may remove or rate-limit coordinated inauthentic behavior. Broader government approaches may involve strengthening news organizations’ institutional ability to fact-check breaking stories, tipping platforms to inauthentic behavior, or establishing widespread public-education campaigns.
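The public-health framing lends itself to simple contagion models. The sketch below is a back-of-the-envelope simulation in which misinformation hops between connected accounts unless a node has been “inoculated” and refuses to retransmit it; the network shape, transmission probability, and inoculation rates are all invented assumptions, not empirical values.

```python
# Toy contagion model: misinformation spreads across a network unless a node
# has been "inoculated" and will neither believe nor retransmit it.
# Network size, transmission probability, and inoculation rates are invented.
import random
import networkx as nx

def nodes_reached(inoculated_fraction, steps=10, seed=0):
    rng = random.Random(seed)
    graph = nx.barabasi_albert_graph(500, 3, seed=seed)  # hub-heavy toy network
    inoculated = {n for n in graph if rng.random() < inoculated_fraction}
    infected = {0} - inoculated  # a single initial spreader, node 0

    for _ in range(steps):
        newly_infected = set()
        for node in infected:
            for neighbor in graph.neighbors(node):
                # Inoculated nodes neither believe nor pass the content on.
                if neighbor not in inoculated and rng.random() < 0.2:
                    newly_infected.add(neighbor)
        infected |= newly_infected
    return len(infected)

for fraction in (0.0, 0.25, 0.5, 0.75):
    print(f"inoculated {fraction:.0%}: misinformation reached {nodes_reached(fraction)} nodes")
```

The specific numbers are meaningless; the point is the shape of the result, where each additional share of inoculated nodes cuts the population the misinformation can reach.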

Resilience-based efforts focus on the intended targets of misinformation and attempt to build up their defenses. The effectiveness of this approach is debated, though at least one study by Roozenbeek et al. found that exposure to misinformation in a controlled setting, such as a game, reduced participants’ susceptibility to believing misinformation and improved their ability to spot it. By identifying key actors in the information landscape and using these techniques to build those actors’ resilience against misinformation (sometimes referred to as “pre-bunking”), the resilience-based approach seeks to strengthen the landscape itself against misinformation.

Building resilience effectively relies on an understanding of the information landscape and access to the nodes that make it up. Knowledge of the terrain, such as “superspreader” accounts, which exist in fairly small numbers but have a disproportionate impact in amplifying misinformation, is critical to directing resources where they can do the most good. Once key nodes are identified, responders must be able to act on them: rate-limiting or removing malicious nodes, and educating unwitting nodes which may otherwise be affected by and amplify misinformation.
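One crude way to approximate that knowledge of the terrain is to build a resharing graph and rank accounts by how often their content gets amplified. The snippet below is a sketch only; the account names, edges, and the choice of in-degree centrality as the ranking metric are assumptions for illustration.

```python
# Rank accounts in a hypothetical resharing network to surface candidate
# "superspreaders". All account names and edges are invented placeholders.
import networkx as nx

# Each pair means: the first account reshared content originally posted by the second.
reshares = [
    ("user_a", "hub_1"), ("user_b", "hub_1"), ("user_c", "hub_1"),
    ("user_d", "hub_2"), ("user_e", "hub_2"), ("user_a", "hub_2"),
    ("user_f", "user_a"),
]

graph = nx.DiGraph()
graph.add_edges_from(reshares)

# In-degree centrality here approximates how often an account's content is reshared.
centrality = nx.in_degree_centrality(graph)
for account, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{account}: {score:.2f}")
```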

The resilience-based approach is broadly used, with examples ranging from BBC Media Action, which conducts influencer training to reduce the likelihood of key information nodes amplifying misinformation, to social media platforms taking steps to label, rate-limit, or remove accounts which frequently spread misleading or false content.

We’ve tagged the DISARM countermeasures list with these five types. DISARM’s existing response classifications are the “seven Ds” and its metatechniques.

The seven Ds include:

  • D02 Deny: Prevent disinformation creators from accessing and using critical information, systems, and services. Deny is for an indefinite time period.
  • D03 Disrupt: Completely break or interrupt the flow of information for a fixed amount of time; in effect, Deny for a limited period, allowing no efficacy for a short while.
  • D04 Degrade: Reduce the effectiveness or efficiency of disinformation creators’ command and control or communications systems, and information collection efforts or means, either indefinitely, or for a limited time period.
  • D05 Deceive: Cause a person to believe what is not true. Military deception seeks to mislead adversary decision makers by manipulating their perception of reality.
  • D06 Destroy: Damage a system or entity so badly that it cannot perform any function or be restored to a usable condition without being entirely rebuilt. Destroy is permanent, e.g. you can rebuild a website, but it’s not the same website.
  • D07 Deter: Discourage.

Each of these can be applied to the types above — for example, counter C00144 “buy out troll farm employees / offer them jobs” is a D04 Degrade maneuver that acts on a resource.
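In practice, this tagging can live in something as simple as a small lookup table that joins a countermeasure ID to one of the five response types and one of the Ds. The sketch below shows one possible shape for such a record; the data structure is our assumption, the C00144 entry comes from the example above, and the second entry is a purely hypothetical placeholder rather than a real DISARM counter.

```python
# A possible shape for the tagging: each countermeasure carries a response type
# (one of the five above) and one of the seven Ds. The structure and the
# placeholder entry are illustrative assumptions, not the official DISARM data model.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedCounter:
    counter_id: str
    name: str
    response_type: str  # resource | artifact | narrative | volume | resilience
    d_code: str         # e.g. "D04 Degrade"

COUNTERS = [
    TaggedCounter("C00144", "Buy out troll farm employees / offer them jobs",
                  response_type="resource", d_code="D04 Degrade"),
    TaggedCounter("C0XXXX", "Hypothetical placeholder countermeasure",
                  response_type="artifact", d_code="D03 Disrupt"),
]

# Example query: list every resource-focused countermeasure and its D.
for counter in COUNTERS:
    if counter.response_type == "resource":
        print(counter.counter_id, "-", counter.name, "->", counter.d_code)
```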

Metatechniques are more detailed categories for countermeasures. For more detail on these, refer to the full DISARM framework — there’s a lot there to dive into!

Frameworks and typologies such as the five types of response help manage the complexity involved in counter-disinformation campaigns. By considering responses in terms of the category they fall into (resource, artifact, narrative, volume, or resilience-based approaches), responders can more effectively analyze and understand their options.

While the categories above help alleviate complexity for responders, more developed frameworks such as DISARM let us develop scenarios to use for wargaming and red teaming, which is useful and will certainly lead to more development in this area — so stay tuned!

This article was written with SJ Terp — thank you!

Jon Roozenbeek et al., “Prebunking interventions based on the psychological theory of ‘inoculation’ can reduce susceptibility to misinformation across cultures,” Harvard Kennedy School Misinformation Review, 2020.

