Election meddling will take new insidious forms

Social media efforts to prevent foreign election interference may already be obsolete

With elections due in the EU, Canada and Australia in 2019 and the United States next year, social media firms have made significant efforts to prevent further misuse of their platforms. These efforts are likely to be effective, and manipulation of the kind attempted between 2016 and 2018 will not recur. However, the nature of the adversary has changed, and the platforms risk preparing to re-fight yesterday’s battles.

What next

Future election meddling will focus on alternative platforms rather than Facebook and Twitter; deploy genuine and ‘deep fake’ image and video content (as opposed to text); seek greater impact by accentuating public distrust in government and mainstream media; and leverage political kompromat. The perpetrators will include state-linked actors such as Russia’s Internet Research Agency (IRA) as well as new domestic groups inspired by the IRA’s ‘success’ in the 2016 US election.

Subsidiary Impacts

  • Containing the spread of harmful content via fringe platforms is a significant regulatory challenge.
  • Governments may increase their reliance on offensive cybersecurity campaigns to contain foreign interference.
  • Increased privacy on Facebook will make policing fake content harder, as the platform itself will have restricted access to user content.

Analysis

In the 2016 US election, interference combined computational propaganda (disinformation spread by automated social media accounts), deliberate trolling via human-controlled accounts, and artificial news and information websites posting conspiratorial or misleading content.

Academic research shows this activity began in 2012, peaked around the 2016 polls, and continued long thereafter. The majority of these operations originated from a St Petersburg-based 'troll farm' run by the Kremlin-linked IRA.

Social media response

Facebook and Twitter have made several attempts to prevent misuse of their platforms and increase their transparency:

  • Transparency measures include periodic reporting of the types of manipulation detected and removed, and the creation of a database of all political ads on Facebook.
  • Twitter informed all 1.4 million users who interacted with the IRA’s content about their exposure, hoping to inoculate them against similar content in future.
  • Facebook has introduced new rules for political advertising.

Automated filters

New and improved algorithms to detect automated or manipulation-oriented accounts have also been deployed (a simplified sketch of such heuristics follows the list below):

  • These enabled Twitter to challenge 232 million accounts for signs of automation or manipulation in the first half of 2018 alone; Facebook removed 2.8 billion fake accounts between October 2017 and November 2018.
  • Facebook has made algorithmic changes to reduce the effects of malicious material on its main platform, including a fact-checking initiative, reducing the image size on suspected posts, and automated tools to detect clickbait headlines.
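
For illustration only, the following minimal Python sketch shows the kind of behavioural signals such filters might weigh. The account fields, weights and thresholds here are invented for this example; the platforms’ actual systems are machine-learned classifiers trained on far richer behavioural data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Minimal account profile; fields are illustrative, not any platform's real schema."""
    age_days: int
    posts_per_day: float
    followers: int
    following: int
    default_avatar: bool

def bot_score(account: Account) -> float:
    """Return a 0-1 heuristic automation score (weights invented for illustration)."""
    score = 0.0
    if account.age_days < 30:
        score += 0.25   # very new accounts are higher risk
    if account.posts_per_day > 50:
        score += 0.35   # inhumanly high posting cadence
    if account.following > 0 and account.followers / account.following < 0.01:
        score += 0.20   # mass-following with almost no followers back
    if account.default_avatar:
        score += 0.20   # no profile customisation
    return min(score, 1.0)

# A week-old account posting 80 times a day scores as almost certainly automated.
suspect = Account(age_days=7, posts_per_day=80, followers=3, following=900, default_avatar=True)
print(f"automation score: {bot_score(suspect):.2f}")  # -> 1.00
```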

Academic research shows these efforts may be working; interactions with fake content on Facebook have dropped sharply since 2016.

Twitter has added friction to the sign-up process to make it harder for groups to register huge volumes of fake accounts quickly. Facebook has created a 'war room' focused on preventing election interference.

However, these changes may be ineffective in preventing future interference.

Alternative platforms

Fringe platforms are designed to be more user-controlled

Future election interference is likely to focus less on Facebook and Twitter and more on alternative platforms, as they become more popular and the counter-influence efforts on the two main platforms improve. These will include Facebook-owned platforms Instagram and WhatsApp.

There has already been a marked shift of Russian IRA activity away from Facebook and towards Instagram. Early evidence reveals that influence campaigns on Instagram are receiving much higher levels of engagement than on Facebook or Twitter.

Similarly, groups are likely to plant material on more fringe web platforms such as Reddit, 4Chan or Gab in the hope that it will spread organically to mainstream platforms. Such activity will be harder to detect because it will be spread and shared by genuine users, and so will appear unsuspicious to the newly introduced algorithms and machine-learning-trained classifiers.

These alternative platforms have considerable sway over the material which subsequently spreads over the mainstream web (see INTERNATIONAL: Fringe online communities pose new risk - April 12, 2019) and therefore these operations could reach a substantial audience.

Governments want better coordination between law enforcement and such fringe platforms, but their very structure evades oversight. Notably, Facebook has imposed new restrictions on its live streaming function in response to the mid-March New Zealand shooting. However, platforms such as 4Chan, which spread the New Zealand shooting video, have introduced no such restriction.

Using images and videos

Rather than text-based content (news articles and written commentary), malicious actors will increasingly favour image and video content.

Such content will be much harder to monitor, both because images and video carry greater nuance than text and because groups are likely to become more internet-savvy, embedding themselves in the communities they wish to influence and repurposing the ‘memes’ those communities use to communicate.
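
Part of the difficulty is mechanical. Platforms typically match known harmful images by perceptual fingerprint rather than by exact bytes, and routine meme edits (crops, borders, overlaid captions) defeat the match. The Python sketch below, using the open-source Pillow and imagehash libraries, illustrates the idea; the stored hash value, threshold and function name are hypothetical.

```python
from PIL import Image   # pip install pillow imagehash
import imagehash

# Hypothetical fingerprints of known manipulative images (hex value is made up).
KNOWN_BAD = [imagehash.hex_to_hash("d1c4f0e2b3a49687")]

def looks_like_known_content(img: Image.Image, max_distance: int = 8) -> bool:
    """True if the image's perceptual hash is near a known fingerprint.

    pHash survives resizing and recompression, but a crop, border or overlaid
    caption -- a routine meme edit -- can shift the hash past any workable
    Hamming-distance threshold, which is how repurposed memes slip through.
    """
    candidate = imagehash.phash(img)
    return any(candidate - known <= max_distance for known in KNOWN_BAD)

# Synthetic stand-in for an uploaded image; real use: Image.open("upload.jpg").
upload = Image.new("RGB", (512, 512), color=(200, 30, 30))
print(looks_like_known_content(upload))
```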

A move towards 'deep fakes'

Future manipulative activity will use new forms of inauthentic material

Due to rapid advances in the branch of artificial intelligence that combines and superimposes existing images and video onto source footage, often paired with synthesised audio, it is now possible to create convincing fake videos showing influential public figures (including politicians) saying or doing things they never did.
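
For illustration, the sketch below shows in PyTorch the shared-encoder/two-decoder architecture popularised by early open-source face-swap tools: a single encoder learns a common facial representation, each identity gets its own decoder, and decoding person A’s frame with person B’s decoder ‘swaps’ the face. All layer sizes and names are illustrative; real deepfake pipelines add face alignment, adversarial training and heavy post-processing.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any 64x64 face crop to a latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one person's face from the latent."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# Train encoder+decoder_a on person A's faces and encoder+decoder_b on person
# B's. The swap: encode a frame of A, decode it with B's decoder, yielding
# B's face with A's pose and expression.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
frame_of_a = torch.rand(1, 3, 64, 64)        # stand-in for a real video frame
fake_frame = decoder_b(encoder(frame_of_a))  # B "performing" A's expression
print(fake_frame.shape)                      # torch.Size([1, 3, 64, 64])
```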

This capability is especially worrying: the speed at which fake videos can be created will vastly outpace initial attempts to counter them. Future efforts to influence elections are likely to employ deep-fake tactics.

Former Acting CIA Director Michael Morell has said that "Russian disinformation ahead of the 2016 election pales in comparison to what will soon be possible with the help of deep fakes".

Hyper-realistic fake accounts

To bypass social media’s attempts to curb fake profiles, future manipulation efforts will employ hyper-realistic fake profiles with plausible account histories that are harder to detect, rather than automated bot accounts that simply amplify messages to promote certain viewpoints.

These accounts will either be created by the groups themselves, bought from a growing market for manipulation tools, or stolen by hacking the accounts of inactive users.

Accounts in this last category are especially hard to detect because their genuine historical records circumvent automated filters. The trend towards human-controlled ‘realistic’ fake accounts has already been observed in social media discussion of NATO activities in the Baltic states and Poland (see EU/RUSSIA: Moscow aims to widen divisions - May 2, 2019).

Sowing confusion and 'meta-trolling'

Public awareness of election interference has grown with increased media attention. This may lead malicious actors to exploit public scepticism of genuine events to sow distrust even in the absence of direct manipulation, since merely creating doubt about the electoral process and mainstream media is sufficiently destabilising (see UNITED STATES: Mistrust in media will fuel populism - November 24, 2017).

Combining influence operations with traditional cyberespionage

The Mueller report detailed how Russian hackers obtained documents from the US Democratic National Committee in 2016 and disseminated them through a network of communication channels.

This combination of hacking sensitive information and releasing it strategically is likely to evolve. Future tactics are likely to include inserting forged documents among genuine ones, and expanding targets beyond politics to other areas of government and the private sector.

State response

Besides working with social media firms and law enforcement, governments may explore offensive cyber campaigns to prevent foreign states from interfering in online discussions. The United States reportedly "disrupted" IRA servers during the 2018 midterm elections; other governments may deploy similar tactics.