Imagining the pandemic public: conspiracy-minded or healthily sceptical?
by Dr Warren Pearce, iHuman/Department of Sociological Studies
Public scepticism has proved an invaluable bulwark against excessive expert authority during the COVID-19 pandemic. In the current climate, this may appear a surprising or even provocative statement. After all, do we not find ourselves in the midst of the post-truth era, where misinformation and conspiracies related to COVID-19 and other scientifically important issues (such as climate change) run riot on social media platforms, confusing the public and eroding our democracies?
Well, yes, all of those statements contain some truth: politicians often have a 'flexible' relationship with facts, conspiracy theories remain in circulation regarding the origins of COVID-19 and the motivations behind vaccine programmes, and the vaunted ideal of the public having a 'shared set of facts' seems further away than ever. Post-truth concerns that emerged during the Brexit referendum have reached dizzying new heights in the last 12 months. However, some of the misinformation and conspiracy research set in train by these developments has its own problems, both in terms of its quality and its implications for democracy.
In this post, I argue that academic research is imagining the pandemic public as conspiracy-minded, failing to account for the instability and secrecy surrounding scientific knowledge during the pandemic, with potentially damaging political outcomes that could backfire on public trust in expertise.
Conspiracy thinking or healthy scepticism?
Academic researchers have been quick to respond to what the World Health Organisation has dubbed an 'infodemic'. The number of academic publications on the topic has surged (up from 306 in 2019), helping to drive a torrent of media articles criticising social media companies. With funding awarded to multiple research projects on online Covid misinformation, we can expect this research agenda to go from strength to strength in 2021 and beyond.
However, while there is little love lost for social media companies, there is reason to be cautious about the potential impacts of misinformation research on knowledge democracies, where knowledge and inquiry remain central to public life but have also been transformed by the arrival of digital societies. If research focuses too narrowly on misinformation, and is too expansive in its definition of conspiratorial thinking, then it presents a threat to the principle of experts remaining accountable to the public. In short, if the public is imagined to be irrational and conspiracy-minded, then experts are relieved of the need to prove their own trustworthiness.
That is not to say that scientists should be subjected to regular questioning by QAnon devotees. Extreme cases are easy to judge. The problem comes in the fuzzy middle, where the demarcation problem rears its head: how to differentiate between science and non-science, legitimate challenge and 'bad faith' attacks. This problem, and its importance for conspiracy theory research, is neatly encapsulated in "Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England" (Freeman et al.), published in Psychological Medicine in May 2020. Thus far it is the most-cited academic article on Covid-19 conspiracies and misinformation, and it has had significant public impact, with 141 media mentions and two citations in policy documents.
In presenting evidence for a "public health information crisis", the authors argue for a distinction between conspiracies and 'healthy scepticism', one that I agree needs to be made. Unfortunately, the article's methods do the opposite, calculating "coronavirus conspiracy scores" from survey data on both obscure theories (e.g. "Coronavirus is a plot by globalists to destroy religion") and more modest standpoints (e.g. "I don't trust the information about the virus from scientific experts"). Not surprisingly, the levels of disagreement are much higher for the former (78% for the religion theory) than the latter (44% on distrusting information from experts). The risks of this conflation are twofold.
First, including expressions of healthy scepticism in a definition of conspiracy theorising makes the latter appear more prevalent and generates unreliable evidence for the supposed information crisis. Second, there is the clear implication that members of the public who merely 'distrust' experts should be put in the same basket as the most extreme fringe theorists. At this point, you may be thinking that distrusting information from experts is irrational and can legitimately be grouped together with fringe conspiracism. To see the problem with this view, one only has to look at the uneven history of Covid-19 science over the last 12 months.
The instability of science
As has been widely documented, there were very good reasons to be sceptical about the public information being provided by UK government scientists in the early days of the pandemic. In particular, statements about the UK's increase in cases being four weeks behind Italy were subject to immediate public scrutiny and challenge. As one commentator put it at the time (two weeks before Freeman et al.'s fieldwork began):
"It's fair to say the '4 weeks' comment was met with a bit of scepticism by the general public. When the Govt's Chief Scientist is being openly mocked for his comments, it seems to me that something is seriously wrong. For context, on the 12th March we'd had about 500 cases and 8 deaths. 15 days earlier, on the 26 Feb, Italy had had very similar numbers, in fact slightly fewer cases and more deaths. In both countries, the numbers of cases and deaths were doubling roughly every 3 days, meaning we would get to Italy's then current values of 20,000 cases and 144 deaths in about a fortnight or so (5 doublings = 32x). 4 weeks was obviously risible."
As noted above, statements about the growth rate in cases were a key reason provided by government experts for not rapidly following Italy into stringent lockdown measures, a decision that scientists have since admitted was a mistake.
Case growth rates were not the only example of controversy surrounding government experts; here I briefly mention two more. First, contrast science advisers' "[e]mollient descriptions of mild illness" with the "often overwhelming experiences" of patients suffering for months on end. A year on, and following sustained pressure and organising from Long Covid patients, the condition is now widely recognised and the subject of widespread public concern. Second, consider how, in the space of two months, UK advice went from dismissing face coverings as something 'wired into' southeast Asian cultures to becoming mandatory on public transport. Here, again, public challenge had a role to play, alongside those experts arguing that government advisors should adopt a more precautionary interpretation of the evidence for face mask effectiveness.
I raise these examples not to criticise government advisers for changing their minds based on the available evidence, or to downplay the difficulty of providing scientific advice in a pandemic. Rather, these controversies illustrate the instability of scientific knowledge, and how government advice has repeatedly found itself under legitimate public challenge.
Science in secret
To make matters worse, these controversies took place against a backdrop of secrecy. As scholars of science policy have long argued, "experts should be seen as authorized to act only on behalf of their public constituencies and only within parameters that are continually open to review". Yet during the crucial months of March and April, when the fate of the nation was so dependent on how scientific evidence was interpreted and acted upon, minutes and reports from SAGE and other advisory committees were published either with considerable delay or not at all. Even the memberships of advisory committees (with notable exceptions) were initially kept secret.
This lack of transparency provides crucial context for understanding public attitudes to experts during the pandemic. Believing that important information was being kept secret was an understandable conclusion to reach given the situation at the time. Yet, rather than acknowledging such views as being within the bounds of reasonable discourse, Freeman et al.'s conspiracy research treats a belief that important events are kept secret from the public as a measure of "excessive mistrust". A failure to acknowledge these twin prevailing conditions of unstable knowledge and scientific secrecy risks inflating the idea that a conspiratorial mindset is at large within the public.
What to do with a conspiratorial public?
This all matters because, once established, the imagination of a conspiratorial public in academic research, media articles and policy documents leads to some worrying conclusions. For example, the prominent sociologist and government scientific adviser Professor Melinda Mills has suggested that legislation against misinformation should be considered.
While Professor Mills acknowledged the challenge of defining misinformation, and that legislation could have a chilling effect on public criticism of the government, she argued that it should still be considered because of the potential impact of misinformation on vaccine take-up. The lesson taken here from COVID-19 is that the public is vulnerable to misinformation and unable to reflect on the content and motivations of information from experts and non-experts. This is a disappointingly one-dimensional approach to the relationship between experts and publics, all too familiar to scholars of previous public knowledge controversies. In fact, the pandemic has shown scientific knowledge to be unstable, with expertise in key areas being subject to consistent and often persuasive public challenge.
Experts may be sorely tempted to restrict the public's voice and, with it, their ability to challenge the received wisdom. Such a move would be in tune with the authoritarian restrictions on public protest we are seeing elsewhere in the UK, but we should think long and hard about how and why academic research is presenting the public as conspiracy-minded, and about the consequences for democracy.
A more productive response to the challenges facing knowledge democracies would be to learn lessons from previous public knowledge controversies. The presentation and use of expert knowledge is an important concern, particularly when new media technologies have transformed the democratic dynamics around expertise. However, those researching these issues should resist the lazy option of imagining the public as irrational and unreflective, and instead reflect on what their own research, and its potential political consequences, will do to public trust in expertise.
If you have any comments on this piece, please get in touch via email or on Twitter.
Thanks to those who provided comments on this post. Responsibility for the views expressed, and for any shortcomings in the argument, is my own.