The future of DISARM Blue

DISARM Foundation
Feb 19, 2024

In 2023 Craig Newmark Philanthropies generously funded workshops in which we were able to ask industry experts what would be most useful for them from a framework of countermeasures (you can read about those workshops here). We’re now in a position to act on that feedback, and have decided to start fresh with a new Blue Framework rather than building upon the existing collection of countermeasures. This post looks at why we’ve made that choice and gives you a heads-up on our plans for DISARM Blue.

Why the choice to start over from scratch?

Removing speculative and hostile actions

In 2019 a workshop was held to see if it would be possible to collate existing and speculative countermeasures in a framework. As with Red, the intention was to produce a comprehensive collection of actions rather than a tool to help people decide how they could counter online harms. Some of the countermeasures identified in this work were considered offensive or disruptive, and were handled by labelling them as “not recommended”.

This solution had a few issues: the label wasn’t displayed consistently (sometimes it appeared in the countermeasure’s name, sometimes in its summary, and sometimes not at all); it didn’t provide enough information (why was the countermeasure not recommended?); and it carried the unintended implication that all other countermeasures were “recommended” by default (DISARM does not recommend any type of action; our frameworks provide information so that people can make informed decisions about how to look after themselves).

Reviewing Blue now in 2024, we will start afresh with a Blue Framework that collates protective actions grounded in what is achievable and appropriate for different types of users. Its goal is to help people figure out how best to protect themselves against the harms associated with influence operations, and it will contain countermeasures which align with DISARM’s values of authenticity, upholding democracy and human rights, and protecting the integrity of the information environment.

Moving away from the term “Countermeasures”

The Blue workshop output was described as containing “Countermeasures”, but we’re not content with that terminology: it implies that protective actions exclusively counter actions threat actors have already taken, when there are many proactive protective actions that can be taken before threat actors target us.

Taking an example from physical security, suppose burglars in your area often break into homes at Christmas to steal presents laid out under the tree. Instead of waiting for the presents to be stolen and then figuring out how to get them back, you could simply store them somewhere more secure until Christmas morning. Proactive protections often take fewer resources to implement than reactive ones, so it’s good to adopt them when you can.

Returning to the world of influence operations, let’s take the example of elections (which are commonly targeted by threat actors). In Taiwan, citizens vote by selecting a number that represents the candidate they want to be in power, and in previous elections false information was spread about which number was associated with which candidate (potentially tricking people into voting for someone they didn’t mean to). Knowing this Technique has been used before, defenders can preempt it by disseminating resources that help people identify correct information before future elections (in the case of Taiwan, which candidate matches which number). Even the average person who doesn’t have to worry about election integrity can benefit from proactive protection; removing one’s digital footprint can help reduce the harms of potential harassment campaigns before they happen.

Where there was a previous focus on countermeasures, the new Blue will cover protective actions which can be taken before, during, or after being targeted by an influence operation.

Providing unique top level categorisations for Blue

Our naming convention isn’t the only element hamstringing our ability to recommend proactivity; the Blue workshop output organised countermeasures under the same Tactic categories used in Red. This approach was intended to help users find counters matching specific techniques, but it’s not very effective at that, and it prevents us from adding indirect protective actions that don’t fit nicely under Red behaviours.

By starting with a clean slate we can create new, bespoke categorisations for protective measures, decoupled from Red. This will help us log actions that can be taken before harm occurs, moving us away from waiting for bad things to happen and only then responding. We’ll discuss plans for more effectively directing people to actions that match their needs later in this post.

To sum up

We will be retiring the Blue workshop output so that we can replace it with something better: a framework built upon community feedback and expertise, one that includes both proactive and reactive protective actions, and that aligns with DISARM’s values of authenticity, upholding democracy and human rights, and protecting the integrity of the information environment.

What’s in store for the New Blue Framework?

We’ve talked a bit about why we want to rework Blue from the ground up, but we also have lots of ideas for how we can make the new Blue more useful for you. Below are just a few of the things we could add in future updates; if you have any feedback on what would be most useful for you, then please let us know! The more the community tells us what they want from us, the better we can adapt to suit your needs.

A focus on ease of understanding

As people who spend all day every day thinking about influence operations, it can be easy for us to forget that most people lead lives free of subterfuge. We need to keep people who don’t know much about the field in mind when replacing Blue, not just in the language we use to describe actions, but also in the way we present the framework (people were NOT fans of having every countermeasure presented all at once on the same screen with little detail). Ease of use is particularly important when people being targeted in isolation may feel panicked and need immediate support.

Community Feedback: What do you think is most important in helping people understand how to best protect against influence operations?

Filtering features

We got lots of feedback that we need to make the framework easier to use. One way we want to achieve this is by adding filtering features that help people quickly rule out protective actions which aren’t feasible for them: an individual turning to Blue will typically have fewer resources available than a company or media organisation, which affects the protections they can implement. Our planned solution is to tag actions based on a variety of factors, including cost, time to implement, level of expertise required, regulatory concerns, and whether the action is for before, during, or after an attack.

These categorisations allow an all-encompassing framework of protective actions to be maintained without overwhelming users with content that isn’t relevant to them. More filters can be added to the framework as we gather more information from the community about what influences their choices on how to protect themselves.
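To make this concrete, here is a minimal sketch of how such tagging and filtering might work. Everything in it is illustrative: the ProtectiveAction structure, the tag values, and the example actions are our own assumptions for the sketch, not entries from the framework.

```python
from dataclasses import dataclass

# Hypothetical tags for protective actions. All names and example
# values below are illustrative assumptions, not DISARM data.
@dataclass
class ProtectiveAction:
    name: str
    cost: str        # e.g. "low", "medium", "high"
    time: str        # e.g. "hours", "days", "weeks"
    expertise: str   # e.g. "none", "technical", "legal"
    phase: str       # "before", "during", or "after" an attack

ACTIONS = [
    ProtectiveAction("Reduce digital footprint", "low", "hours", "none", "before"),
    ProtectiveAction("Publish pre-bunking resources", "medium", "weeks", "technical", "before"),
    ProtectiveAction("Coordinate platform takedown requests", "high", "days", "legal", "during"),
]

def filter_actions(actions, **criteria):
    """Return only the actions whose tags match every given criterion."""
    return [a for a in actions
            if all(getattr(a, key) == value for key, value in criteria.items())]

# An individual with few resources might keep only low-cost actions
# requiring no special expertise that can be taken before an attack.
print(filter_actions(ACTIONS, cost="low", expertise="none", phase="before"))
```

In practice, the set of tags and their allowed values would be shaped by community feedback rather than fixed in advance.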

Community Feedback: What factors would you like to be able to filter protective actions on?

Signposts to existing resources

Many have recognised the harms that can arise from influence operations and are doing their part to prevent them. Where possible, Blue actions should link out to existing resources (organisations such as Right to Be, PEN, the Coalition Against Online Violence, and more). Guiding victims to the people with the most experience in preventing the problems they’re facing will help them quickly get the support they need.

Community Feedback: Which existing resources do you think are important to include in the new Blue?

Targeting threat actors’ objectives

In the past, we’ve spoken about how we would like to help the community move beyond “whack-a-mole” responses to influence operations. We envision this as enabling civil society to respond effectively to information manipulation: helping users identify the potential aims a group may have (as captured in the Red Techniques), and using that knowledge to find more effective ways to counter those objectives.

For example, if it can be identified that an actor has the objective of undermining the integrity of an election, this knowledge can be used to select a better way to reinforce that integrity than simply fact-checking the narratives the actor has posted. By targeting Blue actions at Red objectives, people can be more efficient in their defensive efforts.
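As a rough sketch of the objective-to-action mapping we have in mind (the objective names and action lists below are invented for illustration, not taken from the Red or Blue frameworks):

```python
# Hypothetical mapping from a threat actor's objective (as might be
# identified via Red Techniques) to Blue protective actions that
# target that objective. All entries are invented for illustration.
OBJECTIVE_TO_ACTIONS = {
    "undermine election integrity": [
        "disseminate authoritative voting information",
        "pre-bunk known false narratives about the voting process",
    ],
    "harass an individual": [
        "reduce the target's digital footprint",
        "connect the target with support organisations",
    ],
}

def suggest_actions(objective: str) -> list[str]:
    """Suggest actions aimed at the actor's objective rather than at
    individual pieces of content (the "whack-a-mole" approach)."""
    return OBJECTIVE_TO_ACTIONS.get(objective, [])

print(suggest_actions("undermine election integrity"))
```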

Community Feedback: How do you think DISARM can best help people move past a “whack-a-mole” response to influence operations?

Community Feedback

As we’ve stressed above, the Blue Framework is being built upon the feedback of experts who need to combat influence operations. The future of the Blue Framework is in your hands; if there’s anything you’d like added, please let us know by using this Google Form.
