DISARM Countermeasures Workshop Series — How can the media communicate responsibly about disinformation campaigns?

DISARM Foundation
8 min read · Jun 1, 2023

Victoria Smith

This year, DISARM has hosted a series of workshops exploring countermeasures to online harms. The workshop series is generously supported by Craig Newmark Philanthropies. The objective of the workshops is to gather feedback on the countermeasures used against online disinformation and other harms, and on how to make advice on mitigations and countermeasures accessible and practical to those who need it. The feedback from these sessions will feed into DISARM’s work to update and improve the existing ‘Blue Framework’ of countermeasures.

This workshop focused on the responsibility of the media and other communicators when reporting on disinformation campaigns. Participants had experience of working in non-governmental organisations and journalism.

Introduction

Disinformation campaigns are technology-facilitated communication campaigns. Malign actors are highly skilled at hijacking narratives and corrupting them for their own purposes. This makes communicating about the problem inherently challenging; there will often be someone waiting to misinterpret what you say and use it against you.

Each actor and sector will have their own set of challenges when communicating about disinformation campaigns. The media can warn and inform the public about a campaign or raise awareness of countermeasures, but state-backed or partisan media can also spread disinformation and be part of the problem. Academics, researchers and non-governmental organisations can communicate their findings to a wider audience, but poor communication can give malign actors opportunities to question those findings and undermine their reputation. Democratic governments must also strike a difficult balance between transparency and the protection of classified sources when trying to gain and maintain public trust.

DISARM should help its users develop strategic communication plans to support their communications about and response to disinformation campaigns.

Key Takeaways:

What should the DISARM Blue Framework do?

The Blue Framework should recognize that different communicators focus on different aspects of disinformation campaigns, and will therefore wish to talk about them in different ways. Researchers of disinformation campaigns are likely to focus on discussing their findings, which might include observations about newly emerging narratives, or analysis of the behaviour, narratives and content used in more established campaigns. Civil society groups, or those who recommend or implement policy, may focus more on the prevention and recovery phases.

What should the DISARM Blue Framework include?

The Framework should point to established guidance on how to report on disinformation campaigns responsibly. Reporting on disinformation campaigns should be done carefully, as there are many pitfalls. Done badly, reporting can amplify malign narratives, misrepresent facts, or needlessly raise public alarm.

Participants provided examples of the following existing guidance:

  • DFRLab’s Foreign Interference Attribution Tracker from the 2020 US election is useful for understanding disinformation campaigns and how they are reported.
  • The Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace brings investigative reporters and researchers together to define best practices for reporting on misinformation.
  • The Partnership for Countering Influence Operations (PCIO) at the Carnegie Endowment for International Peace published an essay that addresses best practices for reporting on misinformation without unnecessarily raising public alarm. The PCIO also published the results of a survey that addresses some of the same issues.
  • To explain the wider context of disinformation campaigns, it can be useful to try to measure their likely impact. One approach is to use Ben Nimmo’s Breakout Scale (see the sketch below).
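For teams that want to record this kind of impact estimate in a structured way, the minimal Python sketch below shows one possible approach. The category labels are a simplified paraphrase of the Breakout Scale and should be checked against Nimmo’s original publication; the campaign details are hypothetical.

```python
# Minimal sketch: recording a Breakout Scale-style impact estimate alongside
# campaign notes. The category comments are a simplified paraphrase of the
# scale and should be checked against the original publication; the campaign
# details below are hypothetical.
from dataclasses import dataclass
from enum import IntEnum


class BreakoutCategory(IntEnum):
    CATEGORY_1 = 1  # confined to one community on one platform
    CATEGORY_2 = 2  # spreads to more communities or platforms
    CATEGORY_3 = 3  # spreads across both platforms and communities
    CATEGORY_4 = 4  # picked up by mainstream media
    CATEGORY_5 = 5  # amplified by high-profile figures
    CATEGORY_6 = 6  # prompts a policy response or risk of real-world harm


@dataclass
class CampaignNote:
    name: str
    summary: str
    breakout: BreakoutCategory

    def impact_band(self) -> str:
        """Collapse the six categories into a coarse low/medium/high band."""
        if self.breakout <= BreakoutCategory.CATEGORY_2:
            return "low"
        if self.breakout <= BreakoutCategory.CATEGORY_4:
            return "medium"
        return "high"


note = CampaignNote(
    name="example-campaign",  # hypothetical campaign
    summary="Narrative so far confined to a single forum.",
    breakout=BreakoutCategory.CATEGORY_1,
)
print(note.impact_band())  # -> low
```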

Attribution assessments should be made with care. Not every report about disinformation campaigns can or should include an attribution assessment. Where attribution assessments are made, care should be taken to use very specific language, recognising the nuances of the supporting evidence, and to include clearly defined confidence assessments. Over time, as more information becomes available, confidence in the attribution of a particular campaign may strengthen or weaken.

When researchers inherit attributions, for example from platform takedown datasets, they have the opportunity to state whether they found information that supports or contradicts the original attribution, or to identify accounts that do not appear to fit the pattern of the others.

Some researchers develop their own checklists or matrices of criteria to support their attribution assessments. Another option is to work collaboratively with other organisations or researchers; this introduces a multi-phase editing process, which can be particularly helpful when the research focuses on politically sensitive campaigns.
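As an illustration of what such a checklist might look like in practice, here is a minimal Python sketch. The criteria, weights and thresholds are hypothetical placeholders rather than an established standard; each team would define its own.

```python
# Minimal sketch of a researcher-defined attribution checklist. Criteria,
# weights and thresholds are hypothetical placeholders, not an established
# standard; each team would define its own.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    weight: float  # relative importance of this criterion
    met: bool      # did the evidence satisfy it?


def confidence_band(criteria: list[Criterion]) -> str:
    """Turn checklist results into a coarse confidence label."""
    total = sum(c.weight for c in criteria)
    score = sum(c.weight for c in criteria if c.met) / total if total else 0.0
    if score >= 0.75:
        return "high confidence"
    if score >= 0.4:
        return "moderate confidence"
    return "low confidence"


checklist = [
    Criterion("technical indicators overlap with known operation", 3.0, True),
    Criterion("narratives match actor's historical messaging", 2.0, True),
    Criterion("posting times align with actor's time zone", 1.0, False),
    Criterion("independent corroboration from another team", 3.0, False),
]
print(confidence_band(checklist))  # -> moderate confidence
```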

Participants provided the following examples of attribution assessments handled carefully:

  • A report by Stanford Internet Observatory and Graphika demonstrating caution around attribution.
  • A report from Stanford Internet Observatory on GRU-attributed Facebook operations illustrating careful use of attribution language.

Factchecking and prebunking can be effective, but also have limitations. When it comes to anticipating disinformation campaigns, factcheckers are able to conduct ‘prebunking’, a tactic that counters potentially misleading information by warning people against it before they come across it. These prebunks might be drawn from observations of emerging narratives on social media, so they can be highlighted and corrected before they become mainstream, or prepared in anticipation of the likely content of upcoming political speeches. Prebunking can also be used to raise the public’s awareness more generally, for example by reminding people that they may encounter false information online and should be mindful about what they choose to share on their own accounts. Factchecking and prebunking are mainly focused on correcting narratives, rather than on assessing whether or not they belong to a coordinated disinformation campaign.

Factchecking and prebunking have been found to be less effective on individuals with deeply entrenched beliefs and more persuasive among people who are less sure of their position and who may hold low levels of trust in traditional media.

Factchecks receive wider readership and are more likely to be shared when they are published soon after a statement is made. The emergence of Covid-19 was a particularly challenging time: while many factcheckers have experience in domestic or foreign policy, few were virologists. The science itself was also challenging, as there were so many unknowns and there was scientific disagreement about the origins of the virus. Emerging situations like this, where there are large gaps in information, or where the available information is regularly updated, can create breeding grounds for disinformation campaigns that seek to fill these information voids with false or misleading narratives. Correcting these narratives quickly is an important tool in the defence against the spread of disinformation.

Highlighting the motivations, connections or history of actors that promote disinformation campaigns can be more persuasive than focusing on narratives. Some investigations focus on the individuals or groups behind disinformation campaigns. These investigations can highlight where specific actors have driven a variety of narratives that change over time, for example an anti-vaccine group that increasingly adopts climate change conspiracy narratives. Collecting evidence of an actor’s likely motivations or connections can sometimes demonstrate malign intent better than a focus on contested narratives.

Researchers should have a communications plan to ensure the findings of their work are properly communicated. As well as building working relationships with journalists who report on the disinformation beat, researchers should consider how to coordinate their communications around the publication of their reports. This might include preparing a separately released whitepaper or executive summary that summarizes the key findings for a generalist audience, or preparing social media posts to highlight those findings. Whatever method is chosen, consideration should be given to how to present nuanced and complicated information to a non-technical audience. Authors may also wish to try to place short op-eds or commentaries in the media so they can communicate directly with a wider audience.

More generally, as debates around disinformation campaigns become increasingly polarised, researchers may wish to consider how they develop trust with their audiences online. This might include considering how they declare and present their political views, or simply ensuring that their analysis remains factual and non-partisan.

Other areas of discussion:

  • Researchers can and should develop working relationships with journalists. This allows researchers to talk to journalists about the details and nuance of their research, and to pre-empt pitfalls such as inaccurate inferences being drawn from their conclusions.
  • The work of factcheckers is becoming increasingly politicised, and malign actors seek to undermine and delegitimize factcheckers and their work. Factcheckers can become targets for online harassment, while some malign actors mirror the language and methods of legitimate factcheckers to promote partisan narratives, ‘disprove’ facts or present false information as fact. Russia, for example, has started using so-called fact-checking websites to debunk what it calls ‘Western disinformation’.
  • Factcheckers cannot be specialists in every subject; however, factchecking organisations can develop expertise across a range of subject matter. The role of factcheckers is often to be the bridge between specialists, who use very technical vocabulary, and a more generalist public audience who want to better understand issues of public concern.
  • People tend to trust individuals more than institutions. Authors and bloggers can set up newsletter subscription services that create strong bonds of trust with their readers. People also place more trust in posts shared by friends and family members.
  • Local news fares better in trust ratings than national or international news organisations.
  • Journalists may not necessarily consider themselves to be at the frontlines of responding to, or countering, disinformation campaigns. However, journalism is often the prism through which the findings of research, or the repercussions of significant events, are communicated to the public. Journalism therefore contributes, positively or negatively, to the enabling environment that allows malign narratives to take root and flourish. For example, journalists can bring their prejudices into their reporting, and newsrooms and their owners can have pronounced political leanings. This degrades trust in journalism, and the latest Edelman Trust Barometer shows that trust in journalism in the US is only slightly higher than trust in journalism in Russia.
  • Impartiality is not the same as ‘both-sideism’. When reporting on a divisive topic, some media outlets, including the BBC, have at times presented two competing sides of an argument as carrying equal weight, even though for issues such as Brexit or climate change there were many scientists and thought leaders on one side of the argument and significantly fewer on the other. Both-sideism can also contribute to disinformation.
  • News desks and media publications are unlikely to monitor their NewsGuard or similar ratings until audiences pay more attention to such ratings. The far-right media in the US appears to flourish even with very poor NewsGuard ratings.
  • Advertisers should care about NewsGuard ratings, but checks on disinformation ratings need to be built into the automated ad-tech auction system, using, for example, the Global Disinformation Index. The Journalism Trust Initiative tried to embed NewsGuard-like ratings into the technical side of social media and ad-tech systems, but it never really took off. CheckMyAds tries to shame brands into insisting on more responsible behaviour from their ad-tech intermediaries, since the ad-tech community does not have strong incentives to do that work on its own. Tools like CrowdTangle can be used to investigate the originating social media accounts behind disinformation and the websites associated with them; these sites can be further investigated using domain lookup tools such as WHOIS (see the sketch below).
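As a sketch of that last step, the snippet below pulls a few WHOIS registration fields for a website of interest. It assumes the standard `whois` command-line tool is installed; the domain and the field names it looks for are placeholders, and field labels vary between registries.

```python
# Minimal sketch: pulling WHOIS registration details for a website linked to
# suspect accounts. Shells out to the standard `whois` command-line tool
# (assumed to be installed); the domain below is a placeholder, and WHOIS
# field labels vary between registries.
import subprocess

FIELDS_OF_INTEREST = ("Registrar:", "Creation Date:", "Registrant Country:")


def whois_summary(domain: str) -> list[str]:
    """Return the WHOIS lines most useful for a quick provenance check."""
    result = subprocess.run(
        ["whois", domain], capture_output=True, text=True, timeout=30
    )
    return [
        line.strip()
        for line in result.stdout.splitlines()
        if line.strip().startswith(FIELDS_OF_INTEREST)
    ]


if __name__ == "__main__":
    for line in whois_summary("example.com"):  # placeholder domain
        print(line)
```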

