THE SCENARIO
Megan is in a group on an online platform with a few of her close friends from university. Megan and her friends use it to organise social events, share personal updates, and chat about news and current affairs. In the lead-up to a general election, one of Megan’s friends shares a news article about a recent indiscretion on the part of a notable politician. Megan clicks on the link, half-reads the article, and shares it on her timeline.
Two weeks later, Megan’s social media account is deactivated, her friendship group’s chat is shut down, and Megan and the friend who originally shared the post are barred from the platform. When Megan and her friend appeal the decision, they are told that their accounts have been suspended to comply with the online platform’s duty to prevent the spread of disinformation.
*
WHAT DOES THE ONLINE HARMS WHITE PAPER ACTUALLY SAY ABOUT “DISINFORMATION”?
Disinformation is one of the harms identified within the scope of the Online Harms White Paper, which describes it as a “threat to our way of life” that “threatens the UK’s values and principles, and can threaten public safety, undermine national security, fracture community cohesion and reduce trust”.
The Online Harms White Paper refers to disinformation 38 times in total, defining it as “information which is created or disseminated with the deliberate intent to mislead… to cause harm, or for personal, political or financial gain”. While acknowledging potential ambiguities within this definition, the White Paper does not attempt to define disinformation further, beyond identifying AI-facilitated manipulation and microtargeting as particular areas of concern.