The COVID-19 pandemic brought with it its own problem, widely referred to as the “infodemic”. According to the World Health Organisation (WHO), an infodemic is “too much information, including false or misleading information in digital and physical environments during a disease outbreak.” The infodemic causes confusion, leads to risk-taking behaviours that can harm health, and breeds mistrust in health authorities. It further intensifies or prolongs disease outbreaks, since people become unsure about what precautions they need to take to protect their own health and the health of those around them. With the emergence and expansion of social media and internet use, the danger is even greater, as information spreads more rapidly (WHO, n.d.), meaning that harmful false information can travel faster than ever. This article conducts an empirical review of studies on the infodemic, misinformation, and people’s perceptions of and behaviours towards COVID-19 misinformation.
Conceptualisations
Shehata and Eldakar (2021) explored Egyptian Facebook users’ perceptions and behaviour regarding COVID-19 misinformation. They identified and categorised misinformation shared in Arabic on Egyptian social media pages and surveyed Facebook users’ perceptions and behaviours. In particular, the study explored how Egyptians engage with COVID-19-related information shared on social media, why they share this information, how demographic characteristics (gender and level of education) affect participants’ information sharing and perception, and their ability to distinguish credible information from misinformation (p. 2). The study used a mixed-methods approach.
The findings revealed four kinds of misinformation on Egyptian social media: false claims about the virus or its treatment, false information about the government, false or manipulated content in general, and conspiracy theories. In addition, the spread of COVID-19 misinformation was found to have caused negative feelings among participants. The reasons identified for sharing COVID-19 information were: to reduce boredom; to learn useful information; to get other people’s opinions about the information; to keep in touch with friends; to show that the sharer is interested in reading COVID-19 posts on social media; to kill time; to stay abreast of the latest updates; to feel that the sharer affects others; to derive a feeling of joy; and to interact with others. Regarding their ability to distinguish credible information from misinformation, respondents indicated that they always make sure information is correct before sharing it; they follow information published by government institutions such as the Ministry of Health; they search for the same information on other sites to verify it; and they join social media groups that highlight false news. Some indicated that they cannot distinguish between correct and incorrect information because of the sheer volume of studies and information about the coronavirus.
Feelings about misinformation sharing on social media included: anger when people shared false information about the seriousness of the virus and methods of treatment; happiness upon confirming that information about the severity of the virus was not correct; worry when reading wrong information about the severity of the virus; uncertainty about wrong information about the coronavirus; and pride after verifying that information about the coronavirus was not correct.
The findings show that respondents’ gender and educational level affected how they dealt with and accepted misinformation. The higher-education group had a higher mean score than the lower-education group. Regarding gender, the findings suggest that males and females differed in how they assessed the validity and credibility of COVID-19 information on social media.
The authors recommended that Egyptian health authorities and NGOs in the health sector draw on the results of the study to identify countermeasures against misinformation.
Combating Digital Disinformation
In a study on strategies for combating the scourge of digital disinformation, Pherson, Mort Ranta and Cannon (2021) note that political actors and social manipulators are increasingly using social media platforms to reshape popular perceptions for partisan political or social purposes. They observed that, according to Freedom House, disinformation tactics were used in at least seventeen countries in 2017, mostly to sway opinion within their own borders, especially in Venezuela, Turkey, and the Philippines. The Guardian in the United Kingdom also reported in November 2017 that the governments of 30 countries were “using armies of opinion shapers to meddle in elections, advance anti-democratic agendas, and repress their citizens.” The objective of the study was to demonstrate how diagnostic techniques developed in the Intelligence Community (IC) could be used to identify the most promising approaches for combating the scourge of digital disinformation and counteracting the damage inflicted on democratic institutions. The four distinct strategies for combating digital disinformation are: using third-party fact checkers to issue warnings about questionable postings; creating a second, alternative, fact-based internet; establishing strict global screening protocols; and forming “safe spaces” of validated information in the cloud. They assert that efforts to forge a successful, intelligence-informed antidote to digital disinformation must be multifaceted.
Under the first strategy, the government would require major commercial search engines and social media platforms to display warning notices when a user enters a website or views a post deemed to be of questionable veracity. These posts would be identified using an army of fact-checkers and Artificial Intelligence (AI) algorithms, and categorised as either legitimate, biased, satire, or purposeful deception. This categorisation could be defined in legislation resulting from robust, collaborative fact-finding and debate involving both government and private-sector players. They refer to these warnings as “Pinocchio Warnings”.
Under the second strategy, the government would create an alternative internet on which users are banned from posting digital misinformation. Users would be precertified when joining the network and would have to agree to abide by a set of universal standards for exchanging information and insights. Violators of the network’s rules would be banned from it for life. The platform is termed the “Alt-Net”.
The third strategy, “Rigid Gateways”, refers to AI- and human-based screening procedures developed by online service providers to ensure that only acceptable content is posted on their platforms or websites. The online platforms would jointly develop AI screening algorithms for policing their systems and hire an army of analysts and programmers to implement a blended strategy for collaboratively reviewing content.
The final strategy calls on leading online service providers to create a safe space, or cloud, on the internet that houses only validated information from trusted sources. Postings on this cloud, termed the “T-Cloud”, would be pre-certified as compliant with international standards for evidence-based analysis and opinion, and any individual or institution could apply for certification to post on it. Standards for posting could include a determination that the information comes from a trusted source, meets academic standards including peer review, and is deemed acceptable by AI algorithms or human review processes. There would be a small charge for the right to post individual papers, books, videos or other media in the cloud.
They note that the viability of these strategies would depend on the role of technology and on the strength of the financial, social and political incentives for, and obstacles to, their adoption.
Although these strategies are laudable, their implementation would involve a human interface, which could compromise their objectivity. It also appears that considerable power would rest in the hands of governments and their agencies, which could exploit the platforms to pursue their own partisan political agendas.
The WHO, mindful of the implications of misinformation for the COVID-19 response, issued a joint statement with the UN, UNICEF, UNDP, UNESCO, UNAIDS, ITU, UN Global Pulse and IFRC on managing the COVID-19 infodemic, aimed at promoting healthy behaviours and mitigating the harm from misinformation and disinformation.
The statement noted that even though technology and social media were being used to keep people safe, informed, productive and connected during the pandemic, they were also being used to amplify misinformation that continued to undermine the global response and jeopardise measures to control the pandemic. In a bid to combat the misinformation, WHO member states passed Resolution WHA73.1 on the COVID-19 response at the World Health Assembly in May 2020.
The Resolution appealed to WHO member states to provide reliable COVID-19 content, and leverage digital technologies across the response. It further called on “international organisations to address mis- and disinformation in the digital sphere, work to prevent harmful cyber activities undermining the health response and support the provision of science-based data to the public” (WHO, 2020).
The other demands of the statement include:
- Calling on Member States to develop and implement action plans to manage the infodemic by promoting the timely dissemination of accurate information, based on science and evidence, to all communities, and in particular high-risk groups; and by preventing the spread of, and combating, mis- and disinformation while respecting freedom of expression (WHO, 2020).
- Urging Member States to engage and listen to their communities as they develop their national action plans, and to empower communities to develop solutions and resilience against mis- and disinformation (WHO, 2020).
- Calling on all other stakeholders – including the media and social media platforms through which mis- and disinformation are disseminated, researchers and technologists who can design and build effective strategies and tools to respond to the infodemic, and civil society leaders and influencers – to collaborate with the UN system, with Member States and with each other, and to further strengthen their actions to disseminate accurate information and prevent the spread of mis- and disinformation (WHO, 2020).
The WHO Resolution and its call on member states to address mis- and disinformation indicate that information disorder is of great concern in all aspects of our lives, and that all must contribute to the global fight against the menace.
REFERENCES
Shehata, A., & Eldakar, M. (2021). An exploration of Egyptian Facebook users’ perceptions and behavior of COVID-19 misinformation. Science & Technology Libraries. https://doi.org/10.1080/0194262X.2021.1925203
Pherson, R. H., Mort Ranta, P., & Cannon, C. (2021). Strategies for combating the scourge of digital disinformation. International Journal of Intelligence and CounterIntelligence, 34(2), 316–341. https://doi.org/10.1080/08850607.2020.1789425
World Health Organisation. (n.d.). Infodemic. https://www.who.int/health-topics/infodemic#tab=tab_1
World Health Organisation. (2020, September 23). Managing the COVID-19 infodemic: Promoting healthy behaviours and mitigating the harm from misinformation and disinformation. https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation