DISARM Countermeasures Workshop Series — What challenges do organisations face when defending themselves and others from online disinformation campaigns?

DISARM Foundation
May 11, 2023

Victoria Smith

This year, DISARM has hosted six workshops exploring countermeasures to online harms. The workshop series is generously supported by Craig Newmark Philanthropies. The objective of the workshops is to gather feedback on the countermeasures used against online disinformation and other harms, and on how to make advice on mitigating these threats accessible and practical to those who need it. The feedback from these sessions will feed into DISARM’s work to update and improve the existing ‘Blue Framework’ of countermeasures.

This workshop — the first of the six — focused on what organisations can do to help protect themselves, their staff, their infrastructure and their reputations from the threat of online disinformation campaigns. Participants had experience of working in private companies, non-governmental organisations and academia.

Introduction

Disinformation campaigns are increasingly woven into the fabric of modern societies. Malign actors are adept at seeding doubt and distortion on a myriad of subjects. From questioning the safety of vaccines or telecommunications infrastructure, to undermining democratic principles or villainising individuals on the basis of their gender or identity, every aspect of society is vulnerable to attack.

This is why you will often hear those working to counter disinformation campaigns calling for ‘whole of society’ responses. This is easy to say but difficult to achieve, because whole-of-society responses require whole-of-society coordination and engagement at a strategic level. The problem can feel overwhelming, so it is easy for individuals or small organisations to feel that there is very little they can do, or that there is no point in trying.

As we work to refine DISARM’s Blue Framework, we must grapple with the tension between encouraging coordinated strategic responses and raising awareness of tactical interventions. We must consider the strengths, motivations and limitations of different actors, such as social media platforms, governments or individuals, and we must find a way to make our information on countermeasures easily accessible and useful.

Key Takeaways:

What should the DISARM Blue Framework do?

As we develop the Blue Framework, we must be mindful not just of the specifics of the countermeasures we include, but also of how we present them. The resounding message from participants to the DISARM Foundation was that the Blue Framework, which describes potential countermeasures, is difficult to access in its current form. Countermeasures are inherently actions to be undertaken, not observables to be studied and recorded, and therefore require a different approach. Participants reported their experiences of people feeling so overwhelmed by state-backed disinformation campaigns that they were paralysed into inaction. The Blue Framework should make it easier for people to find support, and should equip them with the knowledge and the tools to know that they are not powerless.

We need to be better at explaining where DISARM fits into the intelligence gathering, assessment and decision-making cycle, what it does and does not do, and how it can be used alongside other tools and analytical steps that fill the gaps. The DISARM Frameworks play their part in the intelligence cycle: the Red Framework supports analysts through the process of identifying behaviours used in disinformation campaigns, and the Blue Framework suggests potential responses. Just as not every tactic appears in every disinformation campaign, not every countermeasure is suitable for every actor in every circumstance.

The DISARM Frameworks are inherently tactical; they cannot and should not be used in isolation to make attribution assessments, determine the strategic intent of a malign actor, or plan a response to a disinformation campaign. To do these things effectively, DISARM should be used in conjunction with other tools and analytical methods that enable a full assessment of a disinformation campaign’s strategic intent and of the strategic objectives of any response.

DISARM has the opportunity to make guidance on countermeasures easy for organisations to access as they work to professionalise their disinformation risk mitigation measures, and to help these organisations share experiences and lessons learned with one another. Commercial organisations are increasingly recognising the need to prepare for predictable events. This includes adapting risk assessment and management processes to include vulnerability auditing, which examines the internal structures of an organisation that can be exploited, resulting in security incidents that may even escalate into crises.

However, the threat of disinformation remains overlooked. Commercial organisations often do not nominate anyone at executive level with named responsibility for threats associated with disinformation. Even organisations with a Chief Risk Officer may not have accounted for disinformation-based reputation risk scenarios in their risk register. Guidance for organisations on how to incorporate measures to enhance their resilience and response to online disinformation threats might include changes to staff training and recruitment; adapting organisational policies, cyber security measures and incident response plans; and preparing tailored strategic communication or disinformation recovery plans.
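To make this concrete, the sketch below shows what a disinformation entry in a risk register might look like, expressed as a simple Python data structure. The field names and values are illustrative assumptions, not a DISARM standard or a template from any specific risk-management methodology.

```python
# A hypothetical risk-register entry for a disinformation scenario.
# Fields mirror common risk-register columns; all names and values
# are placeholders an organisation would replace with its own.
disinfo_risk = {
    "risk_id": "REP-007",
    "scenario": "Coordinated false claims about product safety",
    "owner": "Chief Risk Officer",  # named executive responsibility
    "likelihood": "medium",
    "impact": "high",
    "mitigations": [
        "media monitoring with escalation thresholds",
        "pre-drafted holding statements",
        "staff social media guidance and training",
    ],
    "response_plan": "crisis-comms-playbook-v2",
}

print(disinfo_risk["owner"])  # -> Chief Risk Officer
```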

DISARM can raise awareness of the options available to organisations, large or small. Organisations have varying levels of exposure to, and experience of, online disinformation campaigns. Small companies may be limited in staff or budget, but may find that they can communicate internally and respond more quickly than a large company with more resources but more levels of approval and oversight to navigate. We should explore ways to ensure that we include information to support organisations, regardless of size and experience.

What should the DISARM Blue Framework include?

DISARM should support those preparing senior executives for the types of narratives that might be used against an organisation or its staff, as this is an important step in preparing for and mitigating the impact of online disinformation. It can be easy to think of an organisation as faceless and to overlook the fact that it is often individuals who become the targets of a disinformation campaign. Academics, for example, can be attacked because their research focuses on areas that attract high levels of disinformation, such as vaccinations or climate science. Individuals are at heightened risk in organisations where there is a lack of support for people who are singled out as targets of online disinformation campaigns. These campaigns can take many forms, including attempts to undermine the target’s reputation or credibility, or to influence senior leaders to remove an individual from their post, thereby disrupting their work and removing them from any institutional protection.

DISARM should explore and define a broad range of countermeasures available to organisations, to raise awareness of what is available and give every organisation the ability to implement protective measures. It is relatively simple and inexpensive to sponsor a campaign that seeks to destroy someone’s reputation. Individuals can also be the target of threats, or have their personal information published online in doxing campaigns, which can also pose a threat to their personal security. Increasingly, organisations are developing social media policies for their staff and volunteers, covering acceptable use, privacy and passwords. As these policies mature, they should also include guidance on what individuals should do in the event of a disinformation campaign against them in their capacity as an employee or affiliate of the organisation.

Having organisational policies on how to protect and defend staff from online disinformation campaigns protects both the individual and the organisation. In some cases, the actions an organisation has at its disposal are limited more by its awareness of the full range of proactive and reactive countermeasures than by factors such as cost or the size of the organisation.

Digital Defensive Persona Policies help employees separate their work IT equipment from their personal lives and enhance their online security. Organisations with staff working in particularly sensitive locations, such as war zones, or on sensitive subjects, may wish to develop such policies. This might include advice on masking location with VPNs and travel routers, manually adjusting app settings so that data cannot be tracked, or ensuring employees have a work phone that keeps their work and personal profiles separate. Advertisers and adversaries alike can purchase a wealth of information about specific users through data brokers. Adopting a deliberate technical approach to data security can help mitigate targeting based on a user’s digital footprint. Such an approach can also help individuals consciously create a digital profile, from the Google searches they conduct to how their phone and its apps interact with their computer, and maintain that online persona over time.

Predictive technology can help organisations better prepare for and respond to disinformation campaigns. Companies are beginning to actively monitor their reputation across social and traditional media, and to use predictive tools to anticipate how that activity might develop. These predictions are then used to help organisations separate individuals with specific grievances from more systemic threats, and to prepare response and mitigation strategies in advance.
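As a minimal illustration of the monitoring side of this, the sketch below flags sudden spikes in daily brand mentions against a rolling baseline. It assumes mention counts have already been collected; the function name, window size and threshold are illustrative choices, not taken from any specific monitoring product.

```python
from statistics import mean, stdev

def flag_mention_spikes(daily_counts, window=7, threshold=3.0):
    """Flag days where mentions spike well above the recent baseline.

    daily_counts: list of mention counts, one per day (oldest first).
    Returns indices of days whose count exceeds the rolling mean of
    the previous `window` days by more than `threshold` standard
    deviations.
    """
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # max(sigma, 1.0) stops a flat baseline flagging trivial moves
        if daily_counts[i] > mu + threshold * max(sigma, 1.0):
            flagged.append(i)
    return flagged

# Example: a quiet baseline followed by a sudden surge in mentions.
counts = [12, 9, 14, 11, 10, 13, 12, 11, 10, 95]
print(flag_mention_spikes(counts))  # -> [9]
```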

Choosing not to respond is a valid choice. Not all disinformation campaigns need to be acknowledged with a response, and not every disinformation campaign has a state actor behind it. Many organisations, particularly those working in fields that attract controversy, such as identity-based human rights issues, will find that they are most frequently dealing with online trolls, who may be trying to provoke a response or draw more attention to themselves.

Choosing not to respond, or to delay a response until defined thresholds are met, is a valid course of action. Tools such as the RESIST Counter Disinformation Toolkit can help organisations assess whether an attack meets a threshold for response, and the types of response that they may wish to consider at each stage.
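To illustrate the idea of defined response thresholds, here is a deliberately simple triage sketch. It is not the RESIST toolkit’s methodology; the factors, scores and cut-offs are placeholder assumptions that an organisation would replace with its own risk criteria.

```python
def triage_incident(reach, coordination, harm):
    """Map rough 0-5 scores for an incident to a response tier.

    reach:        how widely the content is spreading
    coordination: evidence of coordinated amplification
    harm:         potential damage to staff, operations or reputation

    The factors and cut-offs are placeholders; a real policy would
    define them against the organisation's own risk register.
    """
    score = reach + coordination + harm
    if score >= 10:
        return "activate incident response and strategic communications"
    if score >= 6:
        return "prepare holding statement; monitor closely"
    return "log and monitor; no public response"

# A lone troll with low reach falls below the response threshold.
print(triage_incident(reach=1, coordination=0, harm=2))
# -> "log and monitor; no public response"
```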

Digital resilience to gendered disinformation needs to consider the wider societal context and not just the individual. The patterns of online misogyny are relatively well understood and appear to be similar across geographies, mainly because global social media platforms operate in the same way everywhere, as do state-sponsored actors deploying gendered disinformation. Digital resilience to these types of attacks needs to be multifaceted; campaigns against individuals are more effective when an enabling environment exists before and during the campaign that allows attackers to exploit vulnerabilities. The example provided was the disinformation against Nina Jankowicz, the former Executive Director of the short-lived Disinformation Governance Board in the US. The attacks against her personally, and against the Disinformation Governance Board more generally, resulted in part from a failure to effectively socialise the new government body, which made it particularly vulnerable to the disinformation campaign waged against it. While the factors that contribute to an enabling environment may vary between countries, the themes exploited in gendered disinformation campaigns by far-right actors are similar. Digital resilience therefore needs to be considered in the wider societal context, and may even be a consideration in national security frameworks.

Restricting the ability of malign actors to promote their disinformation campaigns should address behaviours as well as narratives. Those working to counter online gendered disinformation need to engage key stakeholders who do not currently see the urgency of the situation, and this cannot be done through debates about how to describe and define the problem. Rather, they must take the opportunity to describe the narratives used, how those narratives exploit vulnerabilities within social media platforms or societies, and how malign actors grow and monetise their audiences. In this way, they can draw attention to both malign narratives and malign behaviours, so that the conversation moves away from a battle over the free speech of individuals and onto the ways in which social media platforms can restrict the behaviours that allow malign actors’ narratives to flourish.

Disinformation campaigns extend beyond text to include images, videos, deepfakes and memes, and the text embedded in these visuals can be difficult for content moderation algorithms to identify and remove. Visual content, including images that contain embedded text, can be presented as jokes or satire. It can pop up and spread quickly and virally. Because of its nature, people often disregard it as something not to be taken too seriously, and therefore nothing to be concerned about. However, this type of content is used by malign actors, including terrorist organisations, to recruit younger people to their cause. Text in images is difficult for content moderation algorithms to detect, which makes it harder to take action against such content.
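One common technical answer to the detection problem is optical character recognition (OCR). The sketch below extracts embedded text from an image and screens it against a watchlist. It assumes the Tesseract engine and the pytesseract and Pillow packages are installed; the watchlist terms and the file name are placeholders, and production moderation systems are considerably more sophisticated.

```python
# A minimal OCR-based screen for text hidden inside images.
from PIL import Image
import pytesseract

# Illustrative placeholder terms, not a real moderation watchlist.
WATCHLIST = {"placeholder-slogan", "placeholder-hashtag"}

def extract_embedded_text(image_path: str) -> str:
    """Run OCR over an image and return any text it contains."""
    return pytesseract.image_to_string(Image.open(image_path))

def flag_image(image_path: str) -> bool:
    """Return True if the image's embedded text matches the watchlist."""
    text = extract_embedded_text(image_path).lower()
    return any(term in text for term in WATCHLIST)

if __name__ == "__main__":
    print(flag_image("meme.png"))  # hypothetical input file
```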

One way of countering this type of image-based content is to run educational information campaigns that highlight the dangers of deepfakes and other image-based content, demonstrate how prevalent they have become, and give advice on how to respond to them.

A whole of society approach to disinformation improves societal resilience but has its challenges. Responding to individual attacks is unsustainable and impractical. In the cyber world, there are response playbooks to help people when they have been targeted by cyber-attacks or ransomware, and attackers are increasingly adopting new techniques, such as the use of deepfake technology to extort victims, for example through deepfake revenge porn. But isolated responses to attacks are too targeted and tactical to address the problem at scale. Countries such as Finland have adopted a whole of society approach to try to inoculate their populations before disinformation campaigns occur. However, there are challenges to this approach, and success can vary across demographics, with older people generally finding it harder to cope with the challenges of digital media than younger generations.

About DISARM:

The DISARM Foundation was established in 2021 and is home to the DISARM Framework. DISARM stands for ‘Disinformation Analysis and Risk Management’.

The DISARM Foundation has developed a ‘Red’ framework, which identifies behaviours associated with online disinformation, influence operations and other harms, and which continues to be refined. The Framework provides key elements of the common language so urgently needed for better coordination to combat disinformation more effectively.

The DISARM Foundation also has a ‘Blue’ countermeasures framework, which needs further work to build it out and to review its overall shape and effectiveness. The aspiration is that, used together, the two frameworks will support rapid and effective coordination, identification of, and response to online disinformation.

The DISARM Foundation defines disinformation as “information that is false, misleading, incomplete, or out of context, and which is deliberately created, altered, manipulated, disseminated, or presented with the intent to deceive.” We recognise that there is no single term that covers all aspects of the DISARM Foundation’s remit. The framework supports the assessment and response to a wide range of online harms, and encompasses concepts such as influence operations, foreign information manipulation and influence, ‘mis/dis/mal-information’, and ‘Information Disorder’.
