Over the course of the pandemic, researched, independent fact-checking has become more valuable than ever.
Social media platforms have enabled an ‘infodemic’ of false claims about potential cures for COVID-19, conspiracy theories, often laced with antisemitism, that the disease was deliberately manufactured by foreign states or elite secret societies, and misleading rumours about governments’ management of the crisis.

At the beginning of the public health crisis, it looked like big tech was making positive changes: Facebook announced it would promote the World Health Organization’s ‘myth busters’ page to users who have ‘liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed’; YouTube said it would remove not only content promoting medically unsubstantiated treatments in place of medical care, but also ‘any content that disputes the existence or transmission of Covid-19, as described by the WHO and local health authorities’. And WhatsApp now restricts messages that have already been forwarded five or more times: users can only send them on to one chat at a time.
But it is now six months since the beginning of lockdown, and these measures have clearly not been enough. According to campaign group Avaaz, the top 10 websites spreading health disinformation on Facebook attract almost four times as many estimated views as organic content from the websites of the world’s 10 leading health institutions. Research from YouGov for the Center for Countering Digital Hate in June found that 31% of Britons could refuse to be vaccinated when a COVID-19 vaccine becomes available (5% no, 10% probably not, and 15% don’t know). For a vaccine to successfully suppress COVID-19, it would have to be taken by over 75% of the population.
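That 75% figure is in line with standard herd-immunity arithmetic. As a rough sketch – the reproduction number and vaccine efficacy below are illustrative assumptions, not figures from the research cited above:

```python
# Back-of-envelope herd-immunity arithmetic (illustrative assumptions only).
# Standard threshold: a fraction 1 - 1/R0 of the population needs immunity.
r0 = 3.0        # assumed basic reproduction number for COVID-19
efficacy = 0.9  # assumed vaccine efficacy

immunity_needed = 1 - 1 / r0                # ~67% of people must be immune
uptake_needed = immunity_needed / efficacy  # ~74% must take the vaccine

print(f"Immunity needed: {immunity_needed:.0%}")
print(f"Uptake needed at {efficacy:.0%} efficacy: {uptake_needed:.0%}")
```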
The UK Government has yet to introduce the much-anticipated Online Harms Bill, a legislative process triggered in the previous Parliament by the high-profile Digital, Culture, Media and Sport Select Committee inquiry into Disinformation and ‘Fake News’. The plans, according to the White Paper published in April 2019, are to establish in law a new duty of care for social media platforms towards their users, overseen by an independent regulator – probably Ofcom. Until such a regulator has the statutory power to investigate, challenge and guarantee the tech giants’ implementation of this duty of care, governments can do little more than play what Nina Jankowicz, Disinformation Fellow at the Woodrow Wilson International Center for Scholars and author of ‘How to Lose the Information War’, calls ‘whack-a-mole’ when describing Russian networks of inauthentic accounts: as soon as you deal with one problem, another appears.
In the UK we’ve seen this game of whack-a-mole play out non-stop too, the Government’s mallet taking the form of a new cross-departmental Counter-Disinformation Unit. In April a WhatsApp voice note spread like wildfire: a woman claiming to work at the South East Coast Ambulance Service (SECAmb) said that, with the health service inundated by deaths over the Easter bank holiday, ambulances would no longer respond to emergency 999 calls from the public. Whilst that particular claim was being debunked, BT Openreach workers were being verbally and physically assaulted, attacks fuelled by a wide range of online conspiracy theories claiming that 5G caused coronavirus. Fast-forward through the summer and the subjects of disinformation have ranged from Black Lives Matter to the US election, and back to health with anti-vaxx conspiracies. Until the platforms are forced to take proactive responsibility for the content their users publish, and that their own algorithms and ads promote, the game of whack-a-mole will never end: Facebook approved over 200 ‘QAnon’ conspiracy theory ads between April and September this year.
Whilst we wait and campaign for structural change, many others have stepped into the breach to help play whack-a-mole. Traditional media outlets and broadcasters have teams focused on disinformation – of particular note is the work of BBC Trending. Existing fact-checking charities like Full Fact have redoubled their efforts. Tory MP Damian Collins co-founded Infotagion to look specifically at health-related claims on social media. So, what are the starting points for anyone who wants to try fact-checking for themselves?
- Look carefully at the text – is the URL a website you recognise as a reliable source of information, are there any spelling mistakes in the article or image, is it overly sensationalist? (A rough sketch of how these checks might be scripted follows this list.)
- Counter-check the claims made, and not just against one other source – look beyond the first link on a search engine’s results page.
- Use Google Images’ reverse image search if you are in doubt about a photo.
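For anyone inclined to script those first checks, here is a minimal sketch. The trusted-domain list, the sensationalism markers and the reverse-search URL pattern are all illustrative assumptions, not an authoritative registry:

```python
from urllib.parse import quote, urlparse

# Illustrative allowlist only; a real checker would use a maintained registry.
TRUSTED_DOMAINS = {"who.int", "nhs.uk", "fullfact.org", "bbc.co.uk"}
SENSATIONAL_MARKERS = ("shocking", "they don't want you to know", "!!!")

def quick_source_check(url, text):
    """Return a list of red flags for a link and its accompanying text."""
    flags = []
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain not in TRUSTED_DOMAINS:
        flags.append("unrecognised domain: " + domain)
    for marker in SENSATIONAL_MARKERS:
        if marker in text.lower():
            flags.append("sensationalist wording: " + marker)
    return flags

def reverse_image_search_link(image_url):
    """Build a Google reverse image search link (the URL pattern may change)."""
    return "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")

print(quick_source_check(
    "https://miracle-cures.example/5g-truth",
    "SHOCKING cure THEY don't want you to know!!!",
))
```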
Most importantly, the claim may already have been fact-checked! There is a huge network of independent fact-checkers; a good indication that they mean business is accreditation by Poynter’s International Fact-Checking Network.
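That lookup can even be automated: Google’s Fact Check Tools API searches the ClaimReview data that fact-checkers, many of them IFCN signatories, publish alongside their verdicts. A minimal sketch, assuming you have your own API key and glossing over error handling:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # issued via the Google Cloud console

def search_fact_checks(query):
    """Ask the Google Fact Check Tools API for reviews of a claim."""
    params = urllib.parse.urlencode({"query": query, "key": API_KEY})
    url = "https://factchecktools.googleapis.com/v1alpha1/claims:search?" + params
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("claims", [])

for claim in search_fact_checks("5G causes coronavirus"):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(publisher, "->", review.get("textualRating"))
```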
In the age of social media, we all have the opportunity to become worldwide broadcasters. Some use that platform to maliciously disseminate false or misleading messages that lead to real-life harm. As we wait for platforms and governments to step up with duty of care proposals, we can all play our part by sharing verified information, flagging harmful content, and always checking the source.