Guest editorial: The social, ethical, economic and political implications of misinformation

Giandomenico Di Domenico (Cardiff Business School, Cardiff University, Cardiff, UK) (Department of Economics and Political Science, University of Aosta Valley, Aosta, Italy)
Maria Teresa Borges-Tiago (School of Business and Economics, University of Azores, Ponta Delgada, Portugal)
Giampaolo Viglia (School of Strategy, Marketing and Innovation, University of Portsmouth, Portsmouth, UK) (Department of Economics and Political Science, University of Aosta Valley, Aosta, Italy)
Yang Alice Cheng (Department of Communication, North Carolina State University, Raleigh, North Carolina, USA)

Internet Research

ISSN: 1066-2243

Article publication date: 20 November 2023

Issue publication date: 20 November 2023

Citation

Di Domenico, G., Borges-Tiago, M.T., Viglia, G. and Cheng, Y.A. (2023), "Guest editorial: The social, ethical, economic and political implications of misinformation", Internet Research, Vol. 33 No. 5, pp. 1665-1669. https://doi.org/10.1108/INTR-10-2023-947

Publisher: Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited


The dramatic increase in the spread of false information on the Internet in the form of fake news, rumors and conspiracy theories (Di Domenico et al., 2021b), further fueled by the COVID-19 pandemic (Zarocostas, 2020), has spurred great interdisciplinary academic interest in the phenomenon of misinformation. Misinformation touches upon almost every aspect of our lives, including political decisions (Allcott and Gentzkow, 2017), the perception of health-related information (Cheng and Luo, 2021; Di Domenico et al., 2022), social media users' behavior (Di Domenico et al., 2021a) and brands' and consumers' behavior in the marketplace (Chen and Cheng, 2020). Fake news is a trending subject linked to misinformation and enjoys wider coverage in the literature (Ruffo et al., 2023). Fake news is intentionally fabricated to deceive, and not all users are able to recognize it as such (Borges-Tiago et al., 2020), whereas misinformation can arise from genuine mistakes or a lack of awareness about the accuracy of the information. Both can be problematic in terms of their impact on public discourse and decision-making, but addressing them may require different approaches.

As the dynamics of the modern information-driven world constantly evolve, it is vital to better understand the different shades of misinformation and to disentangle its consequences for the broader society. This need prompted this journal to issue a call for papers for a special issue devoted to the social, ethical, economic and political implications of misinformation.

A summary of the special issue

The call for papers attracted ample submissions spanning different methodological, theoretical and empirical perspectives. After a rigorous peer-review process, this issue includes 13 full-length papers. These papers cover a wide range of research questions that deepen and broaden the current understanding of the role of misinformation in the marketplace and, more generally, in our information ecosystem. We introduce the 13 accepted papers and categorize them into four themes: new forms of misinformation, the spreading of misinformation online, individuals' perceptions of misinformation and combating misinformation. These themes provide a convenient framing for grasping advancements in misinformation research and for guiding future research endeavors.

The first theme, “new forms of misinformation,” addresses the research interest in how new AI-enabled technologies can impact individuals by creating more sophisticated and realistic forms of misinformation. In particular, the three papers on this theme focus on deepfake technology. Vasist and Krishnan (2023) conduct a meta-synthesis to contextualize deepfakes as a sociotechnical phenomenon and highlight the platform dynamics of deepfake production. The authors provide a framework that acknowledges the motivations to create deepfakes, how digital platforms facilitate the fabrication and dissemination of deepfakes, and possible interventions to limit their spread online. Sharma et al. (2023) empirically investigate the motivations for sharing political deepfakes. They highlight that political ideological incompatibility creates political brand hate and, in turn, facilitates the intention to share political deepfakes. The authors suggest that sharing deepfakes becomes a way to seek revenge on the hated party and express ideological hate, strengthening one's own ideological beliefs. Finally, Li and Wan (2023) conduct a mixed-method study to evaluate the influence of ethical concerns and enjoyment on the social acceptance of deepfakes. Their findings show that ethical concerns (i.e. informed consent, privacy protection, traceability and non-deception) affect the social acceptance of deepfakes and thus represent an entry point for the ethical regulation of deepfake information.

The second theme, “the spreading of misinformation online,” addresses the burgeoning question of how and why misinformation spreads through digital environments. Dabran-Zivan et al. (2023) take an interesting search-engine perspective and conduct an algorithmic audit of Google Search, emulating search queries about COVID-related conspiracy theories in four languages (English, Arabic, Russian and Hebrew). They find that English-language queries return the highest share of high-quality information, suggesting the existence of structural differences that significantly limit access to accurate information in other languages. Wang et al. (2023) empirically investigate how misinformation, particularly metaverse-related misinformation, infiltrates science and technology forums. Adopting the lens of the elaboration likelihood model, the authors identify different textual and non-textual cues that fuel the spreading of misinformation. Specifically, they suggest that content specialization, consistency and coherence affect users' persuasion through the central route, whereas the number of comments, the length of the text and author characteristics operate through the peripheral route. The third paper on this theme (Chen and Cheng, 2023) adopts a more marketing-oriented point of view, analyzing the diffusion of product-harm misinformation. In a mixed-method study, the authors test a model proposing that consumers' skepticism and perceived content credibility influence the diagnosticity of product-harm misinformation. This, in turn, affects consumers' trust in the target company and their intentions to spread negative electronic word-of-mouth about the company.
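
To make the logic of such a cross-language audit concrete, the sketch below (in Python) illustrates the general procedure: issue the same queries under different language locales and measure the share of results that come from a curated list of high-quality domains. This is a minimal illustration under stated assumptions, not the pipeline of Dabran-Zivan et al. (2023); the fetch_search_results helper, the domain list and the queries are all hypothetical placeholders.

from urllib.parse import urlparse

# Illustrative list of domains treated as "high quality" (hypothetical).
HIGH_QUALITY_DOMAINS = {"who.int", "cdc.gov", "nature.com"}

def fetch_search_results(query, lang):
    """Hypothetical helper: a real audit would issue `query` to the
    search engine with the interface locale set to `lang` and return
    the organic result URLs. Stubbed here with canned example data."""
    return [
        "https://www.who.int/emergencies/diseases/novel-coronavirus-2019",
        "https://example-conspiracy-blog.com/covid-hoax",  # fabricated URL
    ]

def quality_share(urls):
    """Fraction of result URLs whose domain is on the high-quality list."""
    hits = sum(
        1 for u in urls
        if urlparse(u).netloc.removeprefix("www.") in HIGH_QUALITY_DOMAINS
    )
    return hits / len(urls) if urls else 0.0

queries = ["is COVID-19 a hoax?", "COVID-19 vaccine conspiracy"]
for lang in ("en", "ar", "ru", "he"):
    shares = [quality_share(fetch_search_results(q, lang)) for q in queries]
    print(f"{lang}: average quality share = {sum(shares) / len(shares):.2f}")

Comparing the resulting per-language shares is what would reveal the kind of structural disparity the authors report.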

The third theme discusses “individuals' perceptions of misinformation.” The papers on this theme delve deeper into the psychological processes determining susceptibility to misinformation and the consequences of spreading misinformation at the individual level of analysis. Daunt et al. (2023) discuss why individuals are susceptible to political misinformation. Using a mixed-method approach, they identify conspiracy mentality and patriotism as antecedents of belief in political fake news. Such belief, in turn, fuels engagement with political misinformation. Riaz et al. (2023) delve into the motivations that push individuals to search for health-related misinformation online. The authors identify personal factors (i.e. lack of health information literacy) and environmental factors (i.e. information overload and social media peer influence) that influence individuals' misinformation-seeking behavior. Moreover, such factors are positively associated with social media users' anxiety. The last paper on this theme explores how misinformation affects peer-to-peer interactions. Galande et al. (2023) analyze the text of misinformation accusations on X (formerly Twitter) and identify textual characteristics of such accusations that social media platforms can use to promptly track and reduce the spreading of misinformation.
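
As a toy illustration of how textual characteristics could feed such a tracking tool, the sketch below flags posts that match simple accusation patterns. The patterns are hypothetical placeholders, not the characteristics identified by Galande et al. (2023), whose operationalization is far richer than keyword matching.

import re

# Illustrative accusation patterns (hypothetical, not from the paper).
ACCUSATION_PATTERNS = [
    re.compile(r"\byou(?:'re| are)\s+lying\b", re.IGNORECASE),
    re.compile(r"\bfake\s+news\b", re.IGNORECASE),
    re.compile(r"\bthat'?s\s+misinformation\b", re.IGNORECASE),
]

def is_accusation(text):
    """Flag a post as a possible misinformation accusation if it
    matches any of the illustrative patterns above."""
    return any(p.search(text) for p in ACCUSATION_PATTERNS)

posts = [
    "You are lying! That study never existed.",
    "Lovely weather in Cardiff today.",
]
for post in posts:
    print(is_accusation(post), "-", post)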

The final theme relates to “combating misinformation” and deals with identifying the limits of individuals' willingness to fight misinformation, also proposing policy interventions to curb the phenomenon. The first paper (Gurgun et al., 2023) explores the motivations behind social media users' “online silence” when they encounter misinformation. The authors conduct a literature review and identify six factors (i.e. self-oriented reasons, relationship-oriented reasons, content-oriented reasons, technical factors, individual characteristics and others-oriented reasons) that influence social media users' willingness to challenge misinformation (and its spreaders) encountered online. The next two papers on this theme shed light on individuals' misinformation evaluation process. Ha et al. (2023) conduct a multi-method study and identify the cues that predict individuals' truthfulness ratings of health news. The authors find that source and style cues predict truthfulness better than content cues, with source credibility being the most important cue. Furthermore, through the lens of the third-person effect, Chung (2023) shows that presumed media influence on oneself predicts a higher willingness to take direct action against misinformation, whereas presumed media influence on others predicts support for government- and platform-led initiatives to fight misinformation. Finally, Marx et al. (2023) investigate the communication behavior of health organizations on X during the COVID-19 pandemic to shed light on how communication framing helps fight misinformation on social media. The authors show that health organizations used several common and innovative framing devices, such as “infographics,” “pop culture references” and “internet-native symbolism,” to frame the communication of their vaccination campaigns. The findings inform decision-makers and public health organizations about tailoring communication to internet-native audiences and guide strategies for carrying out information campaigns in misinformation-laden social media environments.

Conclusions and future research

Collectively, the contributions in this special issue provide insights into the different implications of misinformation for our society. They also inform policy-makers and practitioners, social media platforms in particular, about possible solutions to curb the phenomenon and limit its impact. At the same time, these papers raise interesting questions that can further advance our knowledge of this persistent topic.

Nowadays, digital technologies dominate the informational landscape. New AI-enabled technologies provide unprecedented opportunities for malicious actors to create and distribute more realistic and sophisticated forms of misinformation, such as deepfakes and manipulated media. How can social media platforms limit the distribution of malicious deepfakes while balancing users' enjoyment with the need to limit the spreading of misinformation? Moreover, the role of algorithms in limiting or, unfortunately, facilitating users' exposure to misinformation deserves further attention. What is the rationale behind the functioning of such algorithms? What explains the differences in the algorithmically selected content that Internet users are exposed to?

Shifting attention to governmental organizations and individuals, the papers in this issue have shown how tailored communication from health authorities can have a positive impact in limiting the effect of misinformation on consumers. However, consumers often resort to online silence and prefer not to challenge misinformation online. How can we create a more proactive digital environment in which misinformation is challenged whenever it is encountered? Does such silence also exist in offline contexts, or is it exclusively an online behavior?

Finally, the papers in this issue analyze the direct consequences of misinformation for individuals and organizations. However, misinformation can spill over and have undesirable effects in contexts that are not directly targeted by misinformation attacks. Understanding the short- and long-run impact of such indirect misinformation on individuals and organizations will provide a more comprehensive overview of the scope of the problem, helping to promote media literacy skills and make individuals less vulnerable to misinformation.

References

Allcott, H. and Gentzkow, M. (2017), “Social media and fake news in the 2016 election”, Journal of Economic Perspectives, Vol. 31 No. 2, pp. 211-235.

Borges-Tiago, T., Tiago, F., Silva, O., Guaita Martínez, J.M. and Botella-Carrubi, D. (2020), “Online users' attitudes toward fake news: implications for brand management”, Psychology and Marketing, Vol. 37 No. 9, pp. 1171-1184.

Chen, Z.F. and Cheng, Y. (2020), “Consumer response to fake news about brands on social media: the effects of self-efficacy, media trust, and persuasion knowledge on brand trust”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 188-198.

Chen, Z.F. and Cheng, Y. (2023), “The diffusion process of product-harm misinformation on social media: evidence from consumers and insights from communication professionals”, Internet Research, Vol. 33 No. 5, pp. 1828-1848.

Cheng, Y. and Luo, Y.J. (2021), “The presumed influence of digital misinformation: examining US public's support for governmental restrictions versus corrective action in the COVID-19 pandemic”, Online Information Review, Vol. 45 No. 4, pp. 834-852.

Chung, M. (2023), “What's in the black box? How algorithmic knowledge promotes corrective and restrictive actions to counter misinformation in the USA, the UK, South Korea and Mexico”, Internet Research, Vol. 33 No. 5, pp. 1971-1989.

Dabran-Zivan, S., Baram-Tsabari, A., Shapira, R., Yitshaki, M., Dvorzhitskaia, D. and Grinberg, N. (2023), “‘Is COVID-19 a hoax?’: auditing the quality of COVID-19 conspiracy-related information and misinformation in Google search results in four languages”, Internet Research, Vol. 33 No. 5, pp. 1774-1801.

Daunt, K.L., Greer, D.A., Jin, H.S. and Orpen, I. (2023), “Who believes political fake news? The role of conspiracy mentality, patriotism, perceived threat to freedom, media literacy and concern for disinformation”, Internet Research, Vol. 33 No. 5, pp. 1849-1870.

Di Domenico, G., Nunan, D., Sit, J. and Pitardi, V. (2021a), “Free but fake speech: when giving primacy to the source decreases misinformation sharing on social media”, Psychology and Marketing, Vol. 38 No. 10, pp. 1700-1711.

Di Domenico, G., Sit, J., Ishizaka, A. and Nunan, D. (2021b), “Fake news, social media and marketing: a systematic review”, Journal of Business Research, Vol. 124, pp. 329-341.

Di Domenico, G., Nunan, D. and Pitardi, V. (2022), “Marketplaces of misinformation: a study of how vaccine misinformation is legitimized on social media”, Journal of Public Policy and Marketing, Vol. 41 No. 4, pp. 319-335.

Galande, A.S.S., Mathmann, F., Ariza-Rojas, C.J., Torgler, B. and Garbas, J. (2023), “You are lying! How misinformation accusations spread on Twitter”, Internet Research, Vol. 33 No. 5, pp. 1907-1927.

Gurgun, S., Arden-Close, E., Phalp, K. and Ali, R. (2023), “Online silence: why do people not challenge others when posting misinformation?”, Internet Research, Vol. 33 No. 5, pp. 1928-1948.

Ha, L., Rahut, D., Ofori, M., Sharma, S., Harmon, M., Tolofari, A., Bowen, B., Lu, Y. and Khan, A. (2023), “Implications of source, content, and style cues in curbing health misinformation and fake news”, Internet Research, Vol. 33 No. 5, pp. 1949-1970.

Li, M. and Wan, Y. (2023), “Norms or fun? The influence of ethical concerns and perceived enjoyment on the regulation of deepfake information”, Internet Research, Vol. 33 No. 5, pp. 1750-1773.

Marx, J., Blanco, B., Amaral, A., Stieglitz, S. and Aquino, M.C. (2023), “Combating misinformation with Internet culture: the case of Brazilian public health organizations and their COVID-19 vaccination campaign”, Internet Research, Vol. 33 No. 5, pp. 1990-2012.

Riaz, M., Wu, J., Sherani, M., Sher, A., Boamah, F.A. and Zhu, Y. (2023), “An empirical evaluation of the predictors and consequences of social media health-misinformation seeking behavior during the COVID-19 pandemic”, Internet Research, Vol. 33 No. 5, pp. 1871-1906.

Ruffo, G., Semeraro, A., Giachanou, A. and Rosso, P. (2023), “Studying fake news spreading, polarisation dynamics, and manipulation by bots: a tale of networks and language”, Computer Science Review, Vol. 47, 100531.

Sharma, I., Jain, K., Behl, A., Baabdullah, A., Giannakis, M. and Dwivedi, Y. (2023), “Examining the motivations of sharing political deepfake videos: the role of political brand hate and moral consciousness”, Internet Research, Vol. 33 No. 5, pp. 1727-1749.

Vasist, P.N. and Krishnan, S. (2023), “Engaging with deepfakes: a meta-synthesis from the perspective of social shaping of technology theory”, Internet Research, Vol. 33 No. 5, pp. 1670-1726.

Wang, X., Feng, X. and Zhao, J. (2023), “Research on influencing factors and governance of disinformation dissemination on science and technology topics: an empirical study on the topic of ‘metaverse’”, Internet Research, Vol. 33 No. 5, pp. 1802-1827.

Zarocostas, J. (2020), “How to fight an infodemic”, The Lancet, Vol. 395 No. 10225, p. 676.
