Unraveling generative AI in BBC News: application, impact, literacy and governance

Yucong Lao (Research Unit of History, Culture and Communications, University of Oulu, Oulu, Finland)
Yukun You (Department of Media and Communication, University of Oslo, Oslo, Norway)

Transforming Government: People, Process and Policy

ISSN: 1750-6166

Article publication date: 17 May 2024


Abstract

Purpose

This study aims to uncover the ongoing discourse on generative artificial intelligence (AI), literacy and governance while providing nuanced perspectives on stakeholder involvement and recommendations for the effective regulation and utilization of generative AI technologies.

Design/methodology/approach

This study chooses generative AI-related online news coverage on BBC News as the case study. Oriented by a case study methodology, this study conducts a qualitative content analysis on 78 news articles related to generative AI.

Findings

By analyzing 78 news articles, generative AI is found to be portrayed in the news in the following ways: Generative AI is primarily used in generating texts, images, audio and videos. Generative AI can have both positive and negative impacts on people’s everyday lives. People’s generative AI literacy includes understanding, using and evaluating generative AI and combating generative AI harms. Various stakeholders, encompassing government authorities, industry, organizations/institutions, academia and affected individuals/users, engage in the practice of AI governance concerning generative AI.

Originality/value

Based on the findings, this study constructs a framework of competencies and considerations constituting generative AI literacy. Furthermore, this study underscores the role played by government authorities as coordinators who conduct co-governance with other stakeholders regarding generative AI literacy and who possess the legislative authority to offer robust legal safeguards to protect against harm.

Citation

Lao, Y. and You, Y. (2024), "Unraveling generative AI in BBC News: application, impact, literacy and governance", Transforming Government: People, Process and Policy, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/TG-01-2024-0022

Publisher: Emerald Publishing Limited

Copyright © 2024, Yucong Lao and Yukun You.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In an era where technological advancements are reshaping the boundaries of what is possible, the rise of generative artificial intelligence (AI) has captured the collective imagination. With its ability to produce deepfake videos, AI-generated art, chatbot conversations and even fake images depicting fictional scenarios, generative AI has become an influential force in shaping the way people interact with information. At the same time, fears about unintended consequences, exemplified by deepfake videos and fake news circulating on social media (Westerlund, 2019), have ignited serious conversations about the imperative need for a comprehensive regulatory framework to govern generative AI’s development and utilization (Meskys et al., 2020).

As generative AI becomes an integral part of our daily lives, the interplay between individuals’ experiences with AI and their literacy in comprehending its intricacies becomes important, particularly in the context of evolving skills and capacities. The capability of individuals to navigate this technological landscape is instrumental, and as society grapples with the implications of generative AI, it becomes increasingly evident that public literacy is integral to effective governance. This understanding extends to the roles played by both the general public and government authorities in shaping the regulatory environment for AI technologies.

Navigating the intricate landscape of AI governance requires a nuanced comprehension of how generative AI resonates with the public. This understanding ensures that governance practices are not only effective but also reflective of the diverse needs and expectations of a technologically engaged society. The present paper aims to contribute to this understanding by delving into the public discourse of individuals’ experiences and current practices of AI governance regarding people’s literacy in the realm of generative AI, shedding light on the avenues for enhanced governance of generative AI.

As generative AI continues to push the boundaries of innovation and ethics, public media becomes a central conduit for conveying the nuances, challenges and potential benefits associated with this technology. Hence, the current article endeavors to explore people’s experiences with generative AI by analyzing news articles from public service broadcasting media, specifically BBC News, as a case study. The choice of BBC News brings both strengths and limitations (see the Methods section), allowing for a focused examination of generative AI perceptions in a reputable and widely accessible media outlet.

Through the lens of public media, the present research aims to uncover the ongoing debate on generative AI, literacy and governance while providing nuanced perspectives on stakeholder involvement and recommendations for effective regulation and utilization of generative AI technologies. The research questions are as follows:

RQ1.

What applications and impacts of generative AI are presented in BBC News articles?

RQ2.

How is people’s literacy on generative AI portrayed in BBC News articles?

RQ3.

What roles do various stakeholders play in the public discourse around generative AI, and in what ways can government authorities contribute to enhancing generative AI literacy?

2. Literature review

2.1 The rise of generative artificial intelligence

The current landscape of AI research has witnessed a profound transformation with the emergence and evolution of generative AI. According to Sætra (2023), generative AI is an umbrella term describing machine learning solutions trained on massive data sets to create output in response to prompts input by users. A report published by the consultancy McKinsey and Company (2023) describes generative AI as a set of algorithms that can be used to produce new content, such as audio, code, images, text, simulations and videos. Scholars such as Bandi et al. (2023) have argued that generative AI is a powerful technology for content creation. In their work, Bandi et al. (2023) summarize various approaches by which generative AI systems produce content, covering text-to-text, text-to-image, text-to-audio/speech, text-to-code, code-to-text and image-to-text.

In the field of this emerging technology, ChatGPT and DALL-E are two representative applications that have opened many people’s horizons toward the era of generative AI (Hirvonen et al., 2023). Developed by OpenAI, ChatGPT is a chatbot that synthesizes answers from its training data in response to users’ questions; similarly, DALL-E 2 is an image generator that creates images from textual input (Brandtzæg et al., 2023; Fui-Hoon Nah et al., 2023; McKinsey and Company, 2023). With advancements in language models, these two applications are undergoing significant evolution. For example, the advanced large multimodal model GPT-4 enables the system to describe trends in, or generate captions for, pictures input by users (Terrasi, 2023); additionally, the latest version of the DALL-E system, DALL-E 3, enhances the quality of the images it outputs (Metz, 2023).

With the flourishing of signature applications such as ChatGPT, the generative AI market is predicted to grow to $1.3tn by 2032 (Catsaros, 2023). In this context, a large amount of existing research has focused on generative AI use in different areas and the impacts of these applications (e.g. Brynjolfsson et al., 2023; Ebert and Louridas, 2023; Michel-Villarreal et al., 2023; Walters and Murcko, 2020). For instance, Brynjolfsson et al. (2023) studied people’s adoption of generative AI tools that offer conversational assistance to customer support agents and found that these tools can increase agents’ productivity by 14%. Michel-Villarreal et al. (2023) used ethnography as the main approach to engage with ChatGPT and identified the opportunities (e.g. 24/7 support and accessibility, personalized learning and tutoring) and challenges (e.g. lack of awareness and understanding, resource constraints) that generative AI brings to higher education. Furthermore, numerous studies have delved into the application of generative AI systems in diverse domains, including medicinal chemistry (Walters and Murcko, 2020), the software industry (Ebert and Louridas, 2023) and beyond.

As generative AI becomes more prevalent, concerns regarding its usage have also escalated. Scholars such as Lambert and Stevens (2023) have demonstrated a series of concerns about ChatGPT in terms of academic integrity, accuracy of information/misinformation, biases, discrimination, stereotypes, misuse/abuse/ethics and privacy/security. Similarly, Fischer (2023) has used ChatGPT as an example to illustrate the harms arising from its usage (such as threats to jobs and misinformation) and from its design (such as human and environmental costs). In addition, Sætra (2023) has discussed the challenges of generative AI at the macro (e.g. environmental costs), meso (e.g. bias and discrimination) and micro (e.g. persuasion and manipulation) levels. Research has shed light on the current issues related to generative AI and sought solutions to tackle these problems.

While confronting these challenges brought about by generative AI, a number of previous studies have focused on combating generative AI-related harms. For example, Alasadi and Baiz (2023, pp. 2969–2970) have listed solutions to address generative AI issues, such as developing “cost-effective AI solutions,” establishing “partnerships and collaborations,” encouraging “open-source AI initiatives” and incorporating “AI into educational funding models.” Other works have concentrated on fighting the negative impacts associated with a specific type of generative AI application, such as deepfakes. For instance, from the technical side, Katarya and Lal (2020) have presented three approaches to deepfake detection: fake image detection, fake video detection and fake audio detection. In addition, scholars from the field of law have explored the capabilities and limitations of current law in properly dealing with problematic deepfake pornography (Gieseke, 2020; Mania, 2024).

Situated in the era of generative AI, the present article examines people’s experiences with generative AI across fields rather than centering on a specific model, product or field. By analyzing the real-life examples presented in news articles, the authors aim to comprehend both the benefits and challenges posed by generative AI. Furthermore, the study aims to build connections between these experiences, individuals’ skills and capabilities in the realm of generative AI and the current state of AI governance by governmental bodies.

2.2 Artificial intelligence literacy

With AI dramatically shaping people’s everyday lives, a comprehensive understanding of AI literacy becomes pivotal for individuals navigating the intricate intersections of technology, information and society. Topics related to AI literacy are not only widely discussed in academic circles but also feature prominently in documents issued by governments or institutions. Many of these documents concern actions to boost people’s AI literacy (e.g. National Artificial Intelligence Advisory Committee, 2023). For example, the US National Artificial Intelligence Advisory Committee (2023) has published a document listing the different contexts that require AI literacy and offering recommendations for enhancing people’s AI literacy in America. Moreover, UNESCO (2022) has released a report mapping the K-12 (from kindergarten to the 12th grade) AI curricula endorsed by national or regional governments around the world, showing a comprehensive landscape of AI education for cultivating young people’s AI literacy.

In terms of academic works, many scholars have attempted to conceptualize or build a framework for evaluating AI literacy (e.g. Long and Magerko, 2020; Ng et al., 2021; Perchik et al., 2023; Yi, 2021). In the field of human–computer interaction studies, Long and Magerko first clarified the definition of AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (Long and Magerko, 2020, p. 2). Subsequently, based on an exploratory literature review, Long and Magerko (2020) identified 17 competencies constituting AI literacy. Following this approach, Ng et al. (2021, p. 4) conceptualized AI literacy as involving four aspects: knowing and understanding AI, using and applying AI, evaluating and creating AI and AI ethics.

Notably, most of the literature addressing AI literacy has focused on young people, including children and students, primarily concentrating on AI education (Casal-Otero et al., 2023; Domínguez Figaredo and Stoyanovich, 2023; Druga et al., 2019; Kong et al., 2023; Lee et al., 2021). For example, Casal-Otero et al. (2023, p. 6) conducted a systematic literature review of AI education in the K-12 curriculum and found that education should integrate AI literacy by helping learners acquire the AI knowledge needed to “recognize artifacts using AI, learning how AI works, learning to live with AI”. By organizing an AI workshop for middle school students, Lee et al. (2021) observed participants’ engagement with AI-related activities and found that young students had fundamental AI literacy.

Kong et al. (2023) paid attention to university students’ conceptual understanding of AI. They designed and tested an AI literacy program based on a conceptual framework to enhance people’s conceptual understanding, literacy, empowerment and ethical awareness. By lowering the barrier to entry for AI literacy, their work has shown the positive effect of such a program, indicating that it can be extended to include more participants, such as senior secondary school students and the general public.

Still, only a few studies have focused on enhancing AI literacy across the public or different educational fields and levels. Going beyond K-12 and technical aspects of AI systems, Domínguez Figaredo and Stoyanovich (2023) have argued for a “stakeholder-first” approach to design educational projects by targeting a broader range of audiences from industry experts to laypeople and focusing more on the ethical, legal and social implications of AI.

Ng et al. (2022) have argued for AI literacy for all. They have underlined that AI literacy should be acquired by all learners, even though there are different contents and approaches for different education levels across K-16 (from kindergarten to the 16th grade). They have highlighted that all citizens, from kindergarteners, primary and secondary students to non-computer science university students, should learn AI to facilitate their living, working and learning to contribute to a better society.

In particular, German statistical consultant Schüller (2022) has suggested a framework for data and AI literacy for everyone in a data-driven society. Not only schools, teacher training and higher education but also extracurricular and vocational training are needed to enable people to gain data-related insights and shape decision-making. The framework aims to facilitate people’s transdisciplinary competence and lifelong learning from three perspectives: the application-oriented perspective, the technical–methodological perspective and the socio-cultural perspective. This requires public statistical institutions to develop and promote training programs, for example, in cooperation with educational actors such as adult education centers or public libraries. She has used the education app “Stadt | Land | Datenfluss” (co-funded by the German Federal Ministry for Economic Affairs and Energy), which aims to enhance Germans’ data literacy, to illustrate how the framework benefits adult education.

In addition, although many studies have focused on the topic of AI literacy, few have specifically delved into the nuances of generative AI literacy. Some existing studies have explored generative AI literacy in the realm of education (e.g. Relmasira et al., 2023; Cao and Dede, 2023). For example, Relmasira et al. (2023) have refined principles for generative AI education by implementing a three-session classroom intervention in an Indonesian school and analyzing students’ reflection papers. Cao and Dede (2023) have made valuable recommendations for educators seeking to enhance students’ understanding of generative AI.

In response to the limited scholarly attention given to generative AI literacy and literacy for the general public, rather than the young and/or students, the current study is designed to bridge this gap. The analysis based on individuals’ experiences with generative AI, as shown in news stories, can lead us to identify some of the key components of generative AI literacy, thus providing insights not just for educators or technologists but also for the government and stakeholders with the power of governance.

2.3 Artificial intelligence governance

With the rapid advancement and increasing application of AI technologies across the private and public sectors, the literature in the field of AI governance is continuously expanding (Lütge et al., 2021). According to Mäntymäki et al., AI governance is defined as follows:

A system of rules, practices, processes, and technological tools that are employed to ensure an organization’s use of AI technologies aligns with the organization’s strategies, objectives, and values; fulfills legal requirements; and meets principles of ethical AI followed by the organization (Mäntymäki et al., 2022, p. 604).

In this sense, AI governance can be performed by not only governments or other institutions in the public sector but also private organizations, which has been indicated by previous research. For instance, the private organization the Data Privacy Group has published strategies for implementing effective AI governance (Borner, 2023). Some governments have also been taking action and have come up with regulations and policies for AI governance, such as the impact assessment of an AI regulation published by the European Commission (2021). In 2023, the European Parliament and Council reached an agreement on the European Union’s Artificial Intelligence Act to protect citizens’ rights (Hainsdorf et al., 2023).

When it comes to academia, the topic of AI governance and regulation has been widely explored. Dafoe (2018, p. 49) published a research agenda on AI governance outlining its principles, including solving problems related to security (AI safety, conditional stabilization) and autonomy (freedom, continuity, sovereignty). In this context, he argued that governance institutions should be able to guarantee the safety of AI technologies and be resilient to changes and challenges (Dafoe, 2018, pp. 50–51).

Other studies on AI governance often provide overviews of current practices based on literature reviews or map out AI governance activities (e.g. Birkstedt et al., 2023; Kuziemski and Misuraca, 2020). For example, drawing on examples of AI governance practices in Canada, Poland and Finland, Kuziemski and Misuraca (2020) outlined current practices from the perspectives of drivers, goals, barriers and risks. Birkstedt et al. (2023) conducted a systematic literature review of articles related to AI governance and organized the literature along four dimensions: technology, stakeholders, context, and regulation and processes.

Remarkably, AI literacy is widely seen as one of the most essential elements of AI governance interventions. As Larsson et al. (2023) have illustrated, in discussions related to AI, advocating for literacy has evolved into a widely accepted normative stance when addressing governance issues. For example, “establishing internal oversight capabilities and literacy” is one of the perspectives in a Government of Canada report for ensuring system quality (Kuziemski and Misuraca, 2020, p. 5). Simultaneously, the literature shows that the government, as the principal actor in the public sector, is one of the most important stakeholders in AI governance. Furthermore, Al Zadjali (2020) has pointed out that the government should be capable of generating value for all stakeholders through various actions. The present research is positioned at the intersection of AI literacy and governmental AI governance, aiming to draw implications for current governmental policymaking from the perspective of AI literacy, especially generative AI literacy.

Furthermore, in considering AI governance, a large number of scholars have opted to examine the subject through the lens of real-life policies or enactments, with limited research on its portrayal in the media. Regarding research on media representation, the literature has leaned more toward elaborating the portrayal of AI technology and its artifacts, such as large language models like ChatGPT or image generators like DALL-E, rather than delving into the discussion of AI governance behind AI issues. For example, Xian et al. (2024) used global news coverage of generative AI from 2021 to 2023 to explore the distribution of topics and sentiment across time and space; they identified key topics spanning business, corporate technological development, regulation and security and education. The business- and corporate-related articles showed a more positive sentiment, while those on regulation and security showed a more reserved, neutral-to-negative sentiment, reflecting major concerns in different fields. This finding echoes current research on generative AI’s benefits and risks (Bandi et al., 2023).

It is imperative to examine AI governance through the lens of media representation. Given that the media serves as a conduit between the public and various societal actors, it bears the responsibility of disseminating information and heightening awareness of pertinent issues. Hence, to address this gap, the present study will take a close look at AI governance based on an analysis of news articles released by BBC News, which is renowned for its authoritative role in public service broadcasting.

3. Methodology and methods

3.1 Case study

The current research was a qualitative study oriented by a case study. According to Bazeley, “qualitative analysis is fundamentally case-oriented” (2013, p. 5). A case study is a valuable approach for researchers to explore in depth a phenomenon in a process, program, event, activity or other context (Baxter and Jack, 2008; Creswell, 2014). This approach allows researchers to apply any method of data collection (Priya, 2021) and underscores the situated interrelatedness of various characteristics and causes of the particular phenomenon (Bazeley, 2013).

Led by this approach, the present study chose generative AI-related online news coverage on BBC News as a case based on two considerations. First, the BBC is one of the most influential public service broadcasters in the world and has consistently embraced the unique values of public service broadcasting (BBC, 2004). The BBC News website ranks first in visits among 50 influential news outlets (Majid, 2024) and can therefore be regarded as representative of news media. Second, public service broadcasting aims to guard citizens’ integrity and interests while informing, educating and entertaining audiences (Banerjee and Seneviratne, 2006, p. 12; Gorham, 1967, p. 221, cited in Grummell, 2009, p. 270). The information published by such media is often perceived as authoritative and offers a good lens through which to observe what is going on in society. Overall, the news coverage published by BBC News is an appropriate case for analysis.

3.2 Sampling

The present study focused on news articles published on the BBC News website. To identify pertinent articles, a keyword search was executed on the BBC website using the search term “generative AI.” The search was conducted on January 24, 2024, and covered all content published on the BBC News website. The search returned 290 pieces of news coverage in various forms, encompassing radio, video, games, images, programs and news articles. In line with Marshall (1996), the sampling from this coverage was based on specific criteria:

  • Because we aimed to analyze news articles, 139 pieces in non-article form were excluded from our selection.

  • The article needed to present content related to generative AI.

This strategy helped us cover as much rich textual material related to generative AI as possible. Led by this strategy, we went through the content of these articles and excluded 71 articles not related to generative AI and two duplicates. Ultimately, 78 news articles (see Appendix) related to generative AI, each marked with a unique ID number, were chosen as the sample.
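To make the screening procedure transparent, the sketch below expresses the same logic in Python. It is purely illustrative: the record fields ("form", "url") and the relevance check are hypothetical stand-ins for the authors’ manual, full-text reading, not tooling used in the study.

```python
# Illustrative sketch of the sampling procedure described above.
# The record fields and the relevance check are hypothetical; in the
# study, relevance was judged manually by reading each article in full.

def is_about_generative_ai(article: dict) -> bool:
    """Stand-in for the manual full-text relevance judgment."""
    return article.get("about_generative_ai", False)

def sample_articles(search_results: list) -> list:
    # Criterion 1: keep only text news articles; this excluded the 139
    # non-article items (radio, video, games, images and programs).
    articles = [r for r in search_results if r["form"] == "article"]

    # Criterion 2: keep only articles whose content relates to
    # generative AI; this excluded 71 unrelated articles.
    relevant = [a for a in articles if is_about_generative_ai(a)]

    # Remove duplicates (two in this search), then assign unique IDs.
    seen, sample = set(), []
    for article in relevant:
        if article["url"] not in seen:
            seen.add(article["url"])
            sample.append(article)
    for number, article in enumerate(sample, start=1):
        article["id"] = number
    return sample  # 290 - 139 - 71 - 2 = 78 articles
```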

3.3 Data analysis

A qualitative content analysis (Schreier, 2012) was conducted on the sample of news articles. The texts were segmented into sentences, each serving as a unit of analysis. The analysis was conducted in an inductive, data-driven way, allowing the categories to emerge from the empirical material (Schreier, 2012, p. 25). By assigning successive parts of the empirical data to categories, we sought to interpret the meaning of this qualitative material in a systematic way (Schreier, 2012, p. 1). The analysis yielded 16 categories: text generation, image generation, audio generation, video generation, integrated generative AI tools, negative impacts, positive impacts, understanding generative AI, using generative AI, evaluating generative AI, combating generative AI harms, government authorities, industry, organizations/institutions, academia and affected individuals/users. Furthermore, four themes were identified from these categories: applications, impacts, generative AI literacy and AI governance practices. For example, the code “Meta has announced a series of new chatbots to be used in its Messenger service” (73) was grouped into the category of “text generation,” contributing to the theme of “applications”; the code “Faked AI images and videos of politicians are also exacerbating the problem of online misinformation” (70) was assigned to the category of “negative impacts,” belonging to the theme of “impacts.” Through this process, the coding was finalized, as shown in Figure 1.
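To make the coding scheme concrete, the short sketch below reproduces, in Python, the category-to-theme mapping reported above and shows how sentence-level codes roll up into themes. The data structures and the function `group_codes_by_theme` are illustrative conveniences for exposition, not software used in the analysis.

```python
from collections import defaultdict

# The 16 categories and 4 themes extracted in this study.
CATEGORY_TO_THEME = {
    "text generation": "applications",
    "image generation": "applications",
    "audio generation": "applications",
    "video generation": "applications",
    "integrated generative AI tools": "applications",
    "negative impacts": "impacts",
    "positive impacts": "impacts",
    "understanding generative AI": "generative AI literacy",
    "using generative AI": "generative AI literacy",
    "evaluating generative AI": "generative AI literacy",
    "combating generative AI harms": "generative AI literacy",
    "government authorities": "AI governance practices",
    "industry": "AI governance practices",
    "organizations/institutions": "AI governance practices",
    "academia": "AI governance practices",
    "affected individuals/users": "AI governance practices",
}

def group_codes_by_theme(coded_units):
    """Roll sentence-level codes up into themes.

    `coded_units` is a list of (article_id, sentence, category) tuples,
    where each category was assigned inductively by the researchers.
    """
    themes = defaultdict(list)
    for article_id, sentence, category in coded_units:
        themes[CATEGORY_TO_THEME[category]].append((article_id, sentence))
    return themes

# The two example codes quoted above:
units = [
    (73, "Meta has announced a series of new chatbots ...", "text generation"),
    (70, "Faked AI images and videos of politicians ...", "negative impacts"),
]
print({theme: len(codes) for theme, codes in group_codes_by_theme(units).items()})
# -> {'applications': 1, 'impacts': 1}
```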

Given our selection of news articles exclusively from the BBC as the sample, it is vital to acknowledge the potential presence of bias in these samples. Attempting to reduce this limitation and improve the reliability of our analysis, we applied strict strategies to systematically select our samples.

4. Results

4.1 Generative artificial intelligence applications

Table 1 shows the diverse applications of generative AI reported in news stories, shedding light on the public discourse surrounding this emerging technology. Here, generative AI is primarily used to generate texts, images, audio and videos. In addition, as generative AI advances, other applications, such as integrated generative AI tools, have also entered people’s everyday lives.

Notably, regarding text generation and image generation, the examples show two classes of applications – text/image generators and conversational AI systems, such as ChatGPT (54) and DALL-E (26). For video generation, generative AI is used not only for creating fake videos, such as deepfakes (24), but also for building characters, such as news presenters (8) and a holographic video of Elvis (74) [1]. In addition, two types of integrated generative AI tools are presented in news stories: AI systems offering a set of services, such as Copilot (19) [2], and AI agent devices, for example, the phone-like device R1 [3] that allows users to circumvent apps (13).

4.2 Impacts of generative artificial intelligence

The impacts of generative AI were mainly portrayed from two perspectives: its risks and benefits. The examples in Table 2 reveal the risks as negative impacts, including misuse and abusive use, mis- and disinformation, copyright infringements, creativity erosion, job displacement, bias, capability obscurity and environmental issues. Remarkably, misuse and abusive use comprise two branches: applications maliciously manipulated by specific users, such as generating naked images of children (41), and misuse arising from the system itself, such as a chatbot making an illegal purchase because of its flawed model (45). Moreover, regarding the risks of mis- and disinformation, generative AI can produce inauthentic information, such as fake news stories (33), and false information, such as ChatGPT-generated false answers to students’ homework (54).

Table 3 shows the examples of the benefits brought by generative AI. The benefits, or the positive impact of generative AI, consist of innovation fostering, inspiration acquisition, skills/knowledge popularization, cost reduction, productivity boost and customer experience enhancement. A few benefits are directly linked to content creation, knowledge dissemination and cost savings, while the others contribute to improving customers’ and people’s working experiences.

4.3 People’s generative artificial intelligence literacy

Table 4 displays people’s generative AI literacy, as indicated in the news articles, including a set of elements – understanding, using, evaluating generative AI and combating generative AI harms. People primarily show their understanding of generative AI by talking about the technology itself, its applications and its impact. They pay particular attention to the applications, for example, the developers of the application (29) and functionality (30). The impact discussed by people covers both negative and positive impacts, which overlaps with the results of the aforementioned section.

Regarding using generative AI, the article quotes come from different groups – professionals (4), practitioners (7) and students (18) – as developers or customers. Usage among students echoes one of the acknowledged benefits of generative AI – skills/knowledge popularization – as discussed in previous sections.

When considering the literacies of using and evaluating AI, both positive and negative aspects are apparent. The evaluation of generative AI significantly revolves around its future, marked by a spectrum of optimistic and pessimistic tones. For example, some interviewees perceive generative AI as valuable for artistic work (7), while others express concerns about the potentially serious consequences of manipulating generative AI (61).

Moving beyond usage and evaluation, the literacy of combating generative AI harms is unveiled from two perspectives: avoiding potential harm and dealing with harm. Individuals make efforts to discern false information or keep specific regulations in mind (24) to avoid being misled (9). Simultaneously, they may take legal action or stand up (70) to protect their rights when harmed by generative AI applications (32).

4.4 Stakeholders and existing artificial intelligence governance practices regarding generative artificial intelligence literacy

The instances in Table 5 are categorized by stakeholder, involving government authorities, industry, organizations/institutions, academia and affected individuals/users. Government authorities have played an essential role in four practices related to generative AI. First, they have tried to comprehend the current status of AI by conducting investigations (21, 37) and collecting opinions from the public (76). Second, government authorities have been dedicated to disseminating knowledge of generative AI by publishing relevant authoritative reports, policies and laws (71) and promoting education programs (46), aiming to draw public attention and facilitate the accessibility of these tools. Third, government authorities have been actively collaborating with other stakeholders related to generative AI. The collaborations span different fields and regions; for example, international representatives from various areas signed the Bletchley Declaration [4] at the AI Safety Summit hosted in the UK (50), working together to build a better environment for developing and utilizing generative AI. Finally, government authorities have striven to make policies and laws to drive the regulation of generative AI (55).

The examples of industry practices reveal five types of practice: offering diverse tools, popularizing knowledge, regulating generative AI, understanding the current status and collaborating with other stakeholders. Notably, the industry acts as a generative AI service provider, offering a multitude of tools to people (7). Moreover, to regulate generative AI use, tech companies not only internally formulate rules to standardize developers’ use of generative AI (68) but also externally make policies to guide users toward positive consumption (21).

Additionally, organizations/institutions and academia also contribute to AI governance regarding generative AI literacy in various ways. Organizations/institutions primarily engage in popularizing knowledge of generative AI, understanding the current status of generative AI, collaborating with other stakeholders and combating generative AI harms. Prominently, some organizations, such as one focused on tackling child sexual abuse (41), help people who have been victimized by generative AI applications. Academia, for its part, underlines the importance of conducting research on generative AI to enhance people’s AI literacy (18) through the practices of understanding the current status and collaborating with other stakeholders.

Finally, the BBC News articles also illustrate how users and affected individuals have made efforts to elevate AI governance regarding AI literacy. For these stakeholders, three approaches form the basis of their practice: combating generative AI harms, understanding the current status and collaborating with other stakeholders. Noticeably, people are finding their own ways to understand generative AI better, such as young people exploring classmates’ use of ChatGPT (54), and to address AI-related issues (41).

5. Discussion

5.1 Four paradoxes of generative artificial intelligence’s impact

We identify four paradoxes regarding the impacts of generative AI from the BBC News articles. First, there is a paradox between creativity reduction and improvement, particularly in the field of art. From the news articles, we find that, on the one hand, generative AI is described as being used to spark creators’ inspiration; on the other hand, people express worries about an erosion of creativity caused by the overuse of the technology.

The second paradox is between beginner-friendly use and misuse and abusive use. Although generative AI applications reduce the technical barriers to content creation, making it accessible to more users, this same accessibility heightens the risks of misuse and abusive use of generative AI.

Third, the results have indicated a paradox between job growth and displacement. For instance, a report from Goldman Sachs [5] in 2023 estimates that AI can replace the equivalent of 300 million full-time jobs; it simultaneously argues that there might be new jobs emerging alongside a boom in productivity (12).

The fourth paradox concerns the tension between energy saving and energy waste. With productivity improvements and job displacement by machines, generative AI may help save energy. However, one of the reported negative impacts is that generative AI applications can use more power than conventional applications (67).

Some of the impacts mentioned above align with the perspectives on the impacts of generative AI outlined by, for example, Bandi et al. (2023), Brynjolfsson et al. (2023), Fischer (2023) and Sætra (2023), particularly regarding aspects such as content creation, productivity enhancement, threats to jobs and environmental costs. Expanding upon the literature, the present study points out paradoxes in terms of the impacts of generative AI as additional nuances, shedding light on the complexity of generative AI use and its consequences.

We posit that these four paradoxes necessitate specific considerations in individuals’ generative AI literacy and AI governance practices. We advocate for all stakeholders to foster an environment conducive to a comprehensive understanding and judicious use of generative AI. Addressing these issues requires the establishment of clearer boundaries differentiating proper use, overuse, misuse and the abusive use of generative AI. This aligns with competencies integral to AI literacy (Long and Magerko, 2020; Ng et al., 2021). In addition, in addressing those concerns related to energy wastage, we recommend that government authorities and industry stakeholders discharge their responsibilities in AI governance, including the formulation of regulations.

5.2 “Combating generative artificial intelligence harms” as a unique element of generative artificial intelligence literacy

With AI literacy being seen as civic competence (Hirvonen et al., 2023), the results of the present study indicate several components of AI literacy pertaining to generative AI. Drawing from the comprehensive model proposed by Long and Magerko (2020), which encompasses 17 competencies and 15 considerations integral to AI literacy, we have constructed a framework of people’s generative AI literacy based on our analysis:

  • understanding the technology, application and impact of generative AI;

  • using generative AI properly and ethically;

  • evaluating the positive and negative sides of generative AI; and

  • avoiding potential harms and dealing with harms of generative AI.

This proposed framework is structured around four essential elements: understanding, using, evaluating generative AI and combating generative AI harms. Within these elements, we have identified and delineated nine competencies and considerations aimed at fostering a nuanced understanding of and responsible engagement with generative AI technology. Our findings align with previous research by Long and Magerko (2020), who also have pointed out perspectives on understanding, using and evaluating AI technology. Building on this, our study extends the literacy framework by underscoring the new aspect of “combating generative AI harms”. Importantly, the emergence of the category “combating generative AI harms” signifies a unique dimension within the generative AI literacy framework, one distinct from traditional AI literacy models. This suggests that generative AI, as portrayed in media and public discourse, is often presented alongside potential challenges that may have adverse effects on individuals’ daily lives. Consequently, individuals have expressed a heightened sense of caution and have taken proactive measures in response to these perceived issues. Notably, some findings on combating generative AI harms echo previous research by, for instance, Mania (2024), who also has discussed legal protection as a significant way to fight generative AI harms.

The inclusion of “combating generative AI harms” as a distinctive element within the generative AI framework underscores the imperative for individuals to not only understand and use this technology but also to actively address and mitigate potential risks. This novel element also accentuates the need for the implementation of robust AI governance practices to safeguard individuals against the potential negative impacts of generative AI.

Remarkably, our framework for enhancing people’s generative AI literacy holds significant practical value. In terms of education, this framework can serve as a basis for developing generative AI curricula in educational institutions, from K-12 schools to universities. Additionally, regarding the ethics and policy development of generative AI, this framework can serve as a compass for policymakers and ethicists in establishing AI ethics guidelines and regulations.

5.3 The role of government authorities in generative artificial intelligence governance

Informed by the literature of Mäntymäki et al. (2022) and Larsson et al. (2023), we adopt a literacy perspective to examine AI governance, emphasizing the practices of various stakeholders involved in the governance of generative AI. Our analysis has underscored the pivotal role assigned to government authorities in this multifaceted ecosystem. Government authorities have been depicted as key connectors, collaborating with industry, academia and organizations to conduct research, institute regulations and provide legal safeguards for individuals. In light of this, we advocate for government authorities to assume the role of coordinators, leveraging resources and power among diverse stakeholders.

Emphasizing the importance of co-governance, we have highlighted the crucial role of government authorities in facilitating collaboration among stakeholders from different sectors, disciplines and regions. A notable example is the AI Safety Summit at Bletchley Park, UK, where representatives from technology industries, government authorities and academia convened to discuss AI-related issues (50). This summit led to the signing of the Bletchley Declaration by 28 countries and the EU, underscoring a commitment to ensuring the safe use of AI. This example of co-governance resonates with the argument proposed by Al Zadjali (2020), who similarly has emphasized the government’s role in creating value for all involved stakeholders. As vividly depicted by BBC News and disseminated to broad audiences, these instances may pique citizens’ interest in AI literacy, potentially fostering increased civic engagement in co-governance initiatives.

Furthermore, our analysis has indicated the need for intensified regulation of generative AI by government authorities. This reflects the fact that government authorities are taking high responsibility for making regulations and policies in this ecosystem, which is in line with the literature published by the European Commission (2021) and Hainsdorf et al. (2023). As shown by the previous sections illuminating the potential negative impacts on individuals, government authorities possessing legislative authority are uniquely positioned to offer robust legal safeguards to protect against harm.

6. Conclusion

In our qualitative content analysis of 78 BBC News articles on generative AI, we examine its diverse applications and profound impacts on daily life. AI technologies applied in content generation can have both negative and positive impacts on people’s everyday lives, as portrayed in the news articles. Our study also unravels four paradoxes intricately linked to generative AI literacy and governance practices regarding creativity, accessibility, work and environment, providing a nuanced perspective on this evolving technology.

A distinctive facet emerged within the generative AI literacy framework: “combating generative AI harms.” This element, absent from traditional AI literacy models, signifies the imperative for individuals not only to understand and use generative AI but also to actively address and mitigate potential risks. This unique dimension underscores the proactive measures individuals take to counteract the potential negative impacts associated with generative AI applications.

Government authorities play a pivotal role as coordinators in the generative AI landscape: They foster collaboration among diverse stakeholders, including industry, academia and organizations, contributing to a cohesive approach to regulating generative AI. As a response to potential harm, we advocate for intensified regulation by government authorities, leveraging their legislative authority to offer robust legal safeguards. Furthermore, we promote increased media responsibility in raising public awareness about generative AI issues and encouraging active civic engagement in AI governance initiatives.

Our findings contribute rich insights into the generative AI research landscape, emphasizing the interconnected nature of literacy and governance through the lens of media discourses. As we conclude, we encourage future research to leverage diverse methodologies, focusing on demographic variations. This will enable a deeper understanding of individuals’ experiences and considerations regarding generative AI literacy and governance, further advancing our comprehension of this transformative technology.

Figures

Figure 1. The coding results of this study

Table 1. Generative AI applications presented in the BBC News articles

Application Example
Text generation Meta has announced a series of new chatbots to be used in its Messenger service (73)
Image generation Children are making indecent images of other children using artificial intelligence (AI) image generators (1)
Audio generation Amp uses AI to generate all kinds of sounds for companies … (78)
Video generation A Kuwaiti media outlet says it has created a virtual news presenter using artificial intelligence (AI) (8)
Integrated generative AI tools Copilot helps users with functions such as searching, writing emails and creating images (19)

Source: Authors’ own creation

Table 2. The risks of generative AI presented in the BBC News articles

Negative impact (risks) Example
Misuse and abusive use In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm (45)
Mis- and disinformation Faked AI images and videos of politicians are also exacerbating the problem of online misinformation (70)
Copyright infringements That is because the software “learns” by analysing a massive amount of data often sourced online and people are concerned it draws on their copyrighted work (72)
Creativity erosion “There is so much potential with AI but it also presents risks to our creative community,” said Recording Academy CEO Harvey Mason Jr, launching the initiative (7)
Job displacement This could lower demand for labour, affecting wages and even eradicating jobs (12)
Bias Some psychologists warn that AI bots may be giving poor advice to patients, or have ingrained biases against race or gender (18)
Capability obscurity But there is a problem with this craze for all things AI: Companies claiming AI capability when really their products don’t actually use machine learning (13)
Environmental issues [Generative AI companies] use far more power than conventional applications, making going online much more energy-intensive (67)

Source: Authors’ own creation

Table 3. The benefits of generative AI presented in the BBC News articles

Positive impact (benefits) Example
Innovation fostering Mr McGuinness founded Layered Reality in 2017 with the objective of combining emerging technology with theatrical story telling to create a new form of entertainment (17)
Inspiration acquisition  “I’ve been using it a little bit in my writing just to help advance ideas,” he says (69)
Skills/knowledge popularization What it can, however, do is democratise the process of filmmaking, he says (24)
Cost reduction “AI can help propel the business forward at a much quicker rate of output without requiring increased resources,” she says (30)
Productivity boost They are being recruited to work on systems to cut waste and improve productivity, he added (14)
Customer experience enhancement “The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That’s not how a human would respond,” she said (18)

Source: Authors’ own creation

Table 4. People’s generative AI literacy presented in the BBC News articles

Generative AI literacy Competency and considerations Example
Understanding generative AI Technology In fact, their understanding of AI is more advanced than most teachers - creating a knowledge gap (1)
Application Gemini appeared to have set a “new standard”, highlighting its ability to learn from sources other than text, such as pictures, according to Chirag Dekate, from analysts Gartner (29)
Impact Prof Becky Allen, the app’s co-founder and chief analyst, said some teachers found it easier to use AI to cut down on work than others (34)
Using generative AI Proper use Jonathan Wharmby, who teaches computer science at Cardinal Heenan Catholic High School in Liverpool, uses AI to help with planning and creating resources, such as multiple choice questions - but said there were issues (54)
Malicious use It comes after the UK’s independent terror legislation reviewer was “recruited” by a chatbot in an experiment (21)
Evaluating generative AI Positive evaluation However, some people in the industry have said AI can be a useful tool for creating music and the technology should be embraced (40)
Negative evaluation In June, the EU’s tech chief Margrethe Vestager told the BBC that AI’s potential to amplify bias or discrimination was a more pressing concern than futuristic fears about an AI takeover (49)
Combating generative AI harms Avoiding potential harms “If you zoom in on the images you can often see inconsistencies such as the number of fingers,” he says (9)
Dealing with harms This year, Bollywood actor Anil Kapoor won a legal battle to protect his likeness, image, name and voice among other elements. Kapoor called the case verdict “very progressive” and good for other actors as well (24)

Source: Authors’ own creation

Table 5. Existing AI governance practices regarding generative AI literacy presented in the BBC News articles

Stakeholder Practice Example
Government authorities Understanding current status Councillor Antony Hook welcomed the AI document, but called for greater participation of younger members of staff whose suggestions could contribute to a “consortium of bright ideas” (76)
Popularizing knowledge  Government guidance published in June suggests officials can use tools such as ChatGPT as part of their research, or to summarise academic or news reports, if they verify the results (71)
Collaborating with other stakeholders The voluntary document was signed by 10 countries and the EU, including the UK, US, Singapore and Canada. China was not a signatory (46)
Regulating generative AI Governments around the world are trying to develop rules or even legislation to contain the possible future risks of AI (29)
Industry Offering diverse tools Several websites already offer fans the ability to create new songs using soundalike voices of pop’s biggest stars (7)
Popularizing knowledge Character AI told the BBC that safety is a “top priority” and that what Mr Hall described was unfortunate and didn’t reflect the kind of platform the firm was trying to build (21)
Regulating generative AI The Google-owned company also said it would allow people to request the removal of videos that use AI to simulate an identifiable person (38)
Understanding current status A Snap spokeswoman said: “We are closely reviewing the ICO’s provisional decision.” (68)
Collaborating with other stakeholders About 100 world leaders, leading AI experts and tech industry bosses will attend the two-day summit at the stately home on the edge of Milton Keynes (51)
Organizations/institutions Popularizing knowledge  “It is crucial for countries to establish comprehensive social safety nets and offer retraining programmes for vulnerable workers,” Ms Georgieva said. “In doing so, we can make the AI transition more inclusive, protecting livelihoods and curbing inequality.” (12)
Understanding current status “This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so,” Apollo Research says in a video showing how the scenario unfolded (45)
Collaborating with other stakeholders The charity wants teachers and parents to work together (1)
Combating generative AI harms The Lucy Faithful Foundation, which works with offenders to tackle child sexual abuse, said it was bracing itself for an “explosion” of child sexual abuse material created by AI (41)
Academia Understanding current status In a statement, the UK government said it would work with the Alan Turing Institute, a research body, to assess possible risks such as the potential for bias and misinformation (46)
Collaborating with other stakeholders Around 100 world leaders, tech bosses and academics are currently gathering at the UK’s first AI safety summit at Bletchley Park, in Buckinghamshire (48)
Affected individuals/users Combating generative AI harms Ms Al Adib said mothers and fathers of those affected in her village had started a group to help support each other and their children (41)
Understanding current status In interviews, most of our friends and classmates had used ChatGPT, an online tool that can answer questions in human-like language. They said it helped them come up with ideas, research and things like structuring and phrasing (54)
Collaborating with other stakeholders It also found teachers were divided over whether it should be the responsibility of parents, schools or governments to teach children about the harms caused by such material (1)

Source: Authors’ own creation

Appendix. The sample of 78 news articles

#ID Author(s) Title Publish date Link Column
1 Tom Gerken and Joe Tidy Children making AI-generated child abuse images, says charity 27 November 2023 www.bbc.com/news/technology-67521226 Technology
2 IWF warning over use of AI-generated abuse images 25 October 2023 www.bbc.com/news/uk-england-cambridgeshire-67145583 Cambridgeshire
3 Guy Hedgecoe AI-generated naked child images shock Spanish town of Almendralejo 24 September 2023 www.bbc.com/news/world-europe-66877718 Europe
4 Mark Savage Grimes says anyone can use her voice for AI-generated songs 25 April 2023 www.bbc.com/news/entertainment-arts-65385382 Entertainment and arts
5 Michael Schumacher: Magazine editor sacked over AI-generated ‘interview’ with seven-time F1 champion 22 April 2023 www.bbc.com/sport/formula1/65361193 Formula 1
6 Michael Schumacher: Seven-time F1 champion’s family plan legal action after AI-generated ‘interview’ 20 April 2023 www.bbc.com/sport/formula1/65333115 Formula 1
7 Mark Savage AI-generated Drake and The Weeknd song goes viral 17 April 2023 www.bbc.com/news/entertainment-arts-65298834 Entertainment and arts
8 Antoinette Radford Kuwait news outlet unveils AI-generated presenter Fedha 11 April 2023 www.bbc.com/news/world-middle-east-65238950 Middle East
9 Kayleen Devlin and Joshua Cheetham Fake Trump arrest photos: How to spot an AI-generated image 24 March 2023 www.bbc.com/news/world-us-canada-65069316 USA and Canada
10 Sports Illustrated in further turmoil after AI scandal 20 January 2024 www.bbc.com/news/business-68035275 Business
11 Zoe Kleinman What happens when you think AI is lying about you? 20 January 2024 www.bbc.com/news/technology-67986611 Technology
12 Annabelle Liang AI to hit 40% of jobs and worsen inequality, IMF says 15 January 2024 www.bbc.com/news/business-67977967 Business
13 James Clayton CES 2024: AI pillows and toothbrushes - is it all getting a bit silly? 13 January 2024 www.bbc.com/news/technology-67959240 Technology
14 Brian Wheeler Minister Alex Burghart seeks to avoid outsourcing AI projects to tech firms 11 January 2024 www.bbc.com/news/uk-politics-67944529 UK politics
15 Tom Gerken Gaming voice actors blindsided by ‘garbage’ union AI deal 11 January 2024 www.bbc.com/news/technology-67922303 Technology
16 Susan Hornik Animators say ‘AI isn’t going to get you an Oscar’ 11 January 2024 www.bbc.com/news/business-67922588 Business
17 Danny Fullbrook and Roberto Perrone Elvis AI show more like time travel than Abba hologram - creator 8 January 2024 www.bbc.com/news/uk-england-beds-bucks-herts-67906106 Beds, Herts and Bucks
18 Joe Tidy Character.ai: Young people turning to AI therapist bots 5 January 2024 www.bbc.com/news/technology-67872693 Technology
19 Imran Rahman-Jones Microsoft announces AI key on Windows 11 PCs 4 January 2024 www.bbc.com/news/technology-67881373 Technology
20 Yasmin Rufo Elvis Evolution: Presley to be brought to life using AI for new immersive show 4 January 2024 www.bbc.com/news/uk-england-london-67871115 Entertainment and arts
21 Chris Vallance and Imran Rahman-Jones Urgent need for terrorism AI laws, warns think tank 3 January 2024 www.bbc.com/news/technology-67872767 Technology
22 Ben Morris Tech Trends 2024: AI and electric vehicle deals 22 December 2023 www.bbc.com/news/business-67273155 Business
23 AI cannot patent inventions, UK Supreme Court confirms 20 December 2023 www.bbc.com/news/technology-67772177 Technology
24 Devang Shah Bollywood: How AI may affect India’s vast film industry 18 December 2023 www.bbc.com/news/world-asia-india-67657873 India
25 Madeline Halpert Sports Illustrated publisher fires CEO Ross Levinsohn after AI scandal 12 December 2023 www.bbc.com/news/world-us-canada-67619015 USA and Canada
26 AI: EU agrees landmark deal on regulation of artificial intelligence 9 December 2023 www.bbc.com/news/world-europe-67668469 Europe
27 Tom Gerken Google admits AI viral video was edited to look better 8 December 2023 www.bbc.com/news/technology-67650807 Technology
28 Shiona McCallum and Zoe Kleinman Google claims new Gemini AI ‘thinks more carefully’ 6 December 2023 www.bbc.com/news/technology-67630454 Technology
29 Suranjana Tewari Nvidia boss Jensen Huang confident about AI safety 6 December 2023 www.bbc.com/news/business-67633980 Business
30 Sooraj Shah What do employers expect staff to know about AI? 5 December 2023 www.bbc.com/news/business-67378441 Business
31 Faisal Islam and Shiona McCallum OpenAI chaos not about AI safety, says Microsoft boss 30 November 2023 www.bbc.com/news/technology-67578656 Technology
32 Amazon latest tech giant to announce AI chatbot 29 November 2023 www.bbc.com/news/technology-67565021 Technology
33 Chloe Kim Sports Illustrated accused of publishing AI-written articles 28 November 2023 www.bbc.com/news/world-us-canada-67560354 USA and Canada
34 Hazel Shearing AI helps out time-strapped teachers, says report 28 November 2023 www.bbc.com/news/education-67433036 Family and education
35 Zoe Kleinman Sam Altman: The extraordinary firing of an AI superstar 18 November 2023 www.bbc.com/news/technology-67461363 Technology
36 Zoe Kleinman AI boss Sam Altman ousted after board loses confidence 18 November 2023 www.bbc.com/news/business-67458603 Business
37 Zoe Kleinman AI chief quits over ‘exploitative’ copyright row 17 November 2023 www.bbc.com/news/technology-67446000 Technology
38 Mark Savage YouTube tests AI tool that clones pop stars’ voices 16 November 2023 www.bbc.com/news/articles/c4n9erzrg93o News
39 Katy Prickett Hallucinate is Cambridge Dictionary AI-inspired word of 2023 15 November 2023 www.bbc.com/news/uk-england-cambridgeshire-67424335 Cambridgeshire
40 Riyah Collins Bad Bunny not happy about AI track using his voice 8 November 2023 www.bbc.com/news/newsbeat-67355245 Newsbeat
41 Gemma Dunstan AI: Fears hundreds of children globally used in naked images 8 November 2023 www.bbc.com/news/uk-wales-67344916 Wales
42 Sean McManus Can AI cut humans out of contract negotiations? 7 November 2023 www.bbc.com/news/business-67238386 Business
43 Lucy Hooker Musk says his new AI chatbot has ‘a little humour’ 5 November 2023 www.bbc.com/news/business-67327060 Business
44 Gordon Corera AI risks are unknown even to GCHQ, Anne Keast-Butler tells BBC 3 November 2023 www.bbc.com/news/uk-67301402 UK
45 Philippa Wain and Imran Rahman-Jones AI bot capable of insider trading and lying, say researchers 3 November 2023 www.bbc.com/news/technology-67302788 Technology
46 Paul Seddon and Becky Morton AI summit: Education will blunt AI risk to jobs, says Rishi Sunak 2 November 2023 www.bbc.com/news/uk-politics-67296825 UK politics
47 MaryLou Costa Why are fewer women using AI than men? 2 November 2023 www.bbc.com/news/business-67217915 Business
48 Tom Gerken and Imran Rahman-Jones Rishi Sunak: AI firms cannot ‘mark their own homework’ 1 November 2023 www.bbc.com/news/technology-67285315 Technology
49 Shiona McCallum, Chris Vallance and Jennifer Clarke What is AI, how does it work and what can it be used for? 1 November 2023 www.bbc.com/news/technology-65855333 Technology
50 Zoe Kleinman and Tom Gerken King Charles: Tackle AI risks with urgency and unity 1 November 2023 www.bbc.com/news/technology-67172229 Technology
51 Danny Fullbrook AI summit brings Elon Musk and world leaders to Bletchley Park 1 November 2023 www.bbc.com/news/uk-england-beds-bucks-herts-67273099 Beds, Herts and Bucks
52 AI named word of the year by Collins Dictionary 1 November 2023 www.bbc.com/news/entertainment-arts-67271252 Entertainment and arts
53 Sara Neill Ocula Technologies: Belfast AI firm to invest £11m in R&D 31 October 2023 www.bbc.com/news/uk-northern-ireland-67264171 Northern Ireland
54 Theo and Ben ‘Most of our friends use AI in schoolwork’ 31 October 2023 www.bbc.com/news/education-67236732 Family and education
55 Shiona McCallum and Zoe Kleinman US announces ‘strongest global action yet’ on AI safety 30 October 2023 www.bbc.com/news/technology-67261284 Technology
56 Zoe Kleinman Can Rishi Sunak’s big summit save us from AI nightmare? 28 October 2023 www.bbc.com/news/technology-67172230 Technology
57 James Gregory and Zoe Kleinman Rishi Sunak says AI has threats and risks - but outlines its potential 26 October 2023 www.bbc.com/news/uk-67225158 UK
58 Chris Vallance AI could worsen cyber-threats, report warns 25 October 2023 www.bbc.com/news/technology-67221117 Technology
59 Bea Swallow National Star College students to gain independence using AI technology 25 October 2023 www.bbc.com/news/uk-england-gloucestershire-67102641 Gloucestershire
60 Joe Tidy Paedophiles using AI to turn singers and film stars into kids 25 October 2023 www.bbc.com/news/technology-67172231 Technology
61 Darren Waters Google Pixel’s face-altering photo tool sparks AI manipulation debate 21 October 2023 www.bbc.com/news/technology-67170014 Technology
62 Mariko Oi Nvidia and iPhone maker Foxconn to build ‘AI factories’ 19 October 2023 www.bbc.com/news/business-67153669 Business
63 Jane Wakefield Is AI about to transform the legal profession? 19 October 2023 www.bbc.com/news/business-67121212 Business
64 Zoe Kleinman Microsoft’s new AI assistant can go to meetings for you 18 October 2023 www.bbc.com/news/technology-67103536 Technology
65 Jessica Parker and Zoe Kleinman German Chancellor Olaf Scholz could snub British AI summit 17 October 2023 www.bbc.com/news/technology-67118264 Technology
66 Perisha Kudhail Could an AI-created profile picture help you get a job? 12 October 2023 www.bbc.com/news/business-67054382 Business
67 Zoe Kleinman and Chris Vallance Warning AI industry could use as much energy as the Netherlands 10 October 2023 www.bbc.com/news/technology-67053139 Technology
68 Shiona McCallum Snapchat: Snap AI chatbot ‘may risk children’s privacy’ 6 October 2023 www.bbc.com/news/technology-67027282 Technology
69 Megan Lawton and Riyah Collins Doja Cat and Jonas Brothers songwriters say AI is not to be feared 3 October 2023 www.bbc.com/news/newsbeat-66993297 Newsbeat
70 Tom Singleton Tom Hanks warns dental plan ad image is AI fake 2 October 2023 www.bbc.com/news/technology-66983194 Technology
71 Paul Seddon AI chatbots do work of civil servants in productivity trial 29 September 2023 www.bbc.com/news/uk-politics-66810006 UK politics
72 Tom Gerken Apple to buck layoff trend by hiring UK AI staff 29 September 2023 www.bbc.com/news/technology-66954267 Technology
73 James Clayton Meta announces AI chatbots with ‘personality’ 27 September 2023 www.bbc.com/news/technology-66941337 Technology
74 Zoe Kleinman Spotify will not ban AI-made music, says boss 26 September 2023 www.bbc.com/news/technology-66882414 Technology
75 Chris Vallance AI risks destabilising world, deputy PM to tell UN 22 September 2023 www.bbc.com/news/technology-66879709 Technology
76 Christian Fuller and Simon Finlay Council to use AI to catch speeding motorists 20 September 2023 www.bbc.com/news/articles/c51jn2yn73zo News
77 James Clayton ‘Overwhelming consensus’ on AI regulation – Musk 14 September 2023 www.bbc.com/news/technology-66804996 Technology
78 Dougal Shaw AI and sound – helping firms build their own ‘sonic identity’ 14 September 2023 www.bbc.com/news/business-66330890 Business

Source: Authors’ own creation

Notes

1. Elvis Aaron Presley is a famous American singer and actor, widely regarded as one of the most influential cultural icons of the twentieth century.

2. Developed by Microsoft, Copilot is an integrated generative AI tool that allows users to perform a range of tasks, such as searching for specific information, generating texts, creating images based on prompts, and so forth.

3. R1 is a standalone AI device, developed by the AI startup Rabbit, that can connect to users’ mobile apps and carry out tasks on their behalf.

4. The Bletchley Declaration, agreed by the countries attending the AI Safety Summit 2023 at Bletchley Park, Buckinghamshire, proclaims a new global effort to unlock the vast benefits of AI while prioritizing its safety.

5. The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm.


References

Al Zadjali, H. (2020), “Building the right AI governance model in Oman”, in Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, pp. 116-119.

Alasadi, E.A. and Baiz, C.R. (2023), “Generative AI in education and research: opportunities, concerns, and solutions”, Journal of Chemical Education, Vol. 100 No. 8, pp. 2965-2971.

Bandi, A., Adapa, P.V.S.R. and Kuchi, Y. (2023), “The power of generative AI: a review of requirements, models, input–output formats, evaluation metrics, and challenges”, Future Internet, Vol. 15 No. 8, p. 260.

Banerjee, I. and Seneviratne, K. (2006), Public Service Broadcasting: A Best Practices Sourcebook, AMIC, UNESCO Publication.

Baxter, P. and Jack, S. (2008), “Qualitative case study methodology: study design and implementation for novice researchers”, The Qualitative Report, Vol. 13 No. 4, pp. 544-559.

Bazeley, P. (2013), Qualitative Data Analysis, Sage Publications, London.

BBC (2004), “Building public value: renewing the BBC for a digital world”, available at: https://downloads.bbc.co.uk/aboutthebbc/policies/pdf/bpv.pdf (accessed 22 January 2024).

Birkstedt, T., Minkkinen, M., Tandon, A. and Mäntymäki, M. (2023), “AI governance: themes, knowledge gaps and future agendas”, Internet Research, Vol. 33 No. 7, pp. 133-167.

Borner, I. (2023), “Implementing effective AI governance”, The Data Privacy Group, available at: https://thedataprivacygroup.com/blog/implementing-effective-ai-governance/ (accessed 10 January 2024).

Brandtzæg, P.B., You, Y., Wang, X. and Lao, Y. (2023), “Good and bad machine agency in the context of human-AI communication: the case of ChatGPT”, International Conference on Human-Computer Interaction, Vol. 23 No. 1, pp. 3-23.

Brynjolfsson, E., Li, D. and Raymond, L.R. (2023), “Generative AI at work”, NBER Working Paper No. w31161, National Bureau of Economic Research.

Cao, L. and Dede, C. (2023), “Navigating a world of generative AI: suggestions for educators”, The Next Level Lab at Harvard Graduate School of Education, Vol. 5 No. 2.

Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B. and Barro, S. (2023), “AI literacy in K-12: a systematic literature review”, International Journal of STEM Education, Vol. 10 No. 1, p. 29.

Catsaros, O. (2023), “Generative AI to become a $1.3 trillion market by 2032, research finds”, available at: www.bloomberg.com/company/press/generative-ai-to-become-a-1-3-trillion-market-by-2032-research-finds/ (accessed 10 January 2024).

Creswell, J.W. (2014), Research Design: Qualitative, Quantitative, and Mixed Method Approaches, 4th ed., SAGE Publications, London.

Dafoe, A. (2018), “AI governance: a research agenda”, Governance of AI Program, Future of Humanity Institute, University of Oxford, Oxford, UK, pp. 1442-1443.

Domínguez Figaredo, D. and Stoyanovich, J. (2023), “Responsible AI literacy: a stakeholder-first approach”, Big Data and Society, Vol. 10 No. 2, pp. 1-15.

Druga, S., Vu, S.T., Likhith, E. and Qiu, T. (2019), “Inclusive AI literacy for kids around the world”, Proceedings of FabLearn 2019, Vol. 5 No. 1, pp. 104-111.

Ebert, C. and Louridas, P. (2023), “Generative AI for software practitioners”, IEEE Software, Vol. 40 No. 4, pp. 30-38.

European Commission (2021), “Impact assessment of the regulation on artificial intelligence”, available at: https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence (accessed 10 January 2024).

Fischer, J.E. (2023), “Generative AI considered harmful”, Proceedings of the 5th International Conference on Conversational User Interfaces, Vol. 5 No. 3, pp. 1-5.

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K. and Chen, L. (2023), “Generative AI and ChatGPT: applications, challenges, and AI-human collaboration”, Journal of Information Technology Case and Application Research, Vol. 25 No. 3, pp. 277-304.

Gieseke, A.P. (2020), “The new weapon of choice: law’s current inability to properly address deepfake pornography”, Vanderbilt Law Review, Vol. 73, p. 1479.

Gorham, M. (1967), Forty Years of Irish Broadcasting, Talbot, Dublin.

Grummell, B. (2009), “The educational character of public service broadcasting: from cultural enrichment to knowledge society”, European Journal of Communication, Vol. 24 No. 3, pp. 267-285.

Hainsdorf, C., Hickman, T., Lorenz, S. and Rennie, J. (2023), “Dawn of the EU’s AI act: political agreement reached on world's first comprehensive horizontal AI regulation”, available at: www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai (accessed 26 January 2024).

Hirvonen, N., Jylhä, V., Lao, Y. and Larsson, S. (2023), “Artificial intelligence in the information ecosystem: affordances for everyday information seeking”, Journal of the Association for Information Science and Technology.

Katarya, R. and Lal, A. (2020), “A study on combating emerging threat of deepfake weaponization”, 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), IEEE, pp. 485-490.

Kong, S.-C., Cheung, W.M.-Y. and Zhang, G. (2023), “Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness”, Educational Technology and Society, Vol. 26 No. 1, pp. 16-30.

Kuziemski, M. and Misuraca, G. (2020), “AI governance in the public sector: three tales from the frontiers of automated decision-making in democratic settings”, Telecommunications Policy, Vol. 44 No. 6, p. 101976.

Lambert, J. and Stevens, M. (2023), “ChatGPT and generative AI technology: a mixed bag of concerns and new opportunities”, Computers in the Schools, pp. 1-25.

Larsson, S., Haresamudram, K., Högberg, C., Lao, Y., Nyström, A., Söderlund, K. and Heintz, F. (2023), “Four facets of AI transparency”, Handbook of Critical Studies of Artificial Intelligence, Edward Elgar Publishing, Cheltenham, pp. 445-455.

Lee, I., Ali, S., Zhang, H., DiPaola, D. and Breazeal, C. (2021), “Developing middle school students’ AI literacy”, Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, Association for Computing Machinery, New York, NY, United States, pp. 191-197.

Long, D. and Magerko, B. (2020), “What is AI literacy? Competencies and design considerations”, Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, United States, pp. 1-16.

Lütge, C., Poszler, F., Acosta, A.J., Danks, D., Gottehrer, G., Mihet-Popa, L. and Naseer, A. (2021), “AI4people: ethical guidelines for the automotive sector-fundamental requirements and practical recommendations”, International Journal of Technoethics, Vol. 12 No. 1, pp. 101-125, doi: 10.4018/IJT.20210101.oa2.

Majid, A. (2024), “Top 50 biggest news websites in the world: December sees traffic slumps at ten biggest for second month in a row”, PressGazette, available at: https://pressgazette.co.uk/media-audience-and-business-data/media_metrics/most-popular-websites-news-world-monthly-2/ (accessed 22 January 2024).

Mania, K. (2024), “Legal protection of revenge and deepfake porn victims in the European Union: findings from a comparative legal study”, Trauma, Violence, and Abuse, Vol. 25 No. 1, pp. 117-129.

Mäntymäki, M., Minkkinen, M., Birkstedt, T. and Viljanen, M. (2022), “Defining organizational AI governance”, AI and Ethics, Vol. 2 No. 4, pp. 603-609.

Marshall, M.N. (1996), “Sampling for qualitative research”, Family Practice, Vol. 13 No. 6, pp. 522-526.

McKinsey and Company (2023), “What is generative AI?”, available at: www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai#/ (accessed 10 January 2024).

Meskys, E., Kalpokiene, J., Jurcys, P. and Liaudanskas, A. (2020), “Regulating deep fakes: legal and ethical considerations”, Journal of Intellectual Property Law and Practice, Vol. 15 No. 1, pp. 24-31.

Metz, R. (2023), “Dall-E 3 is so good it’s stoking a revolt against AI scraping”, The Japan Times, available at: www.japantimes.co.jp/business/2023/11/06/tech/ai-dalle-artist-revolt/ (accessed 10 January 2024).

Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D.E., Thierry-Aguilera, R. and Gerardou, F.S. (2023), “Challenges and opportunities of generative AI for higher education as explained by ChatGPT”, Education Sciences, Vol. 13 No. 9, p. 856.

National Artificial Intelligence Advisory Committee (2023), “Recommendations: enhancing AI literacy for the United States of America”, available at: https://ai.gov/wp-content/uploads/2023/12/Recommendations_Enhancing-Artificial-Intelligence-Literacy-for-the-United-States-of-America.pdf (accessed 10 January 2024).

Ng, D.T.K., Leung, J.K.L., Chu, S.K.W. and Qiao, M.S. (2021), “Conceptualizing AI literacy: an exploratory review”, Computers and Education: Artificial Intelligence, Vol. 2, p. 100041.

Ng, D.T.K., Leung, J.K.L., Su, M.J., Yim, I.H.Y., Qiao, M.S. and Chu, S.K.W. (2022), “AI literacy for all”, in Ng, D.T.K., Leung, J.K.L., Su, M.J., Yim, I.H.Y., Qiao, M.S. and Chu, S.K.W. (Eds), AI Literacy in K-16 Classrooms, Springer International Publishing, Cham, pp. 21-29.

Perchik, J.D., Smith, A.D., Elkassem, A.A., Park, J.M., Rothenberg, S.A., Tanwar, M. and Sotoudeh, H. (2023), “Artificial intelligence literacy: developing a multi-institutional infrastructure for AI education”, Academic Radiology, Vol. 30 No. 7, pp. 1472-1480.

Priya, A. (2021), “Case study methodology of qualitative research: key attributes and navigating the conundrums in its application”, Sociological Bulletin, Vol. 70 No. 1, pp. 94-110.

Relmasira, S.C., Lai, Y.C. and Donaldson, J.P. (2023), “Fostering AI literacy in elementary science, technology, engineering, art, and mathematics (STEAM) education in the age of generative AI”, Sustainability, Vol. 15 No. 18, p. 13595.

Sætra, H.S. (2023), “Generative AI: Here to stay, but for good?”, Technology in Society, Vol. 75, p. 102372.

Schreier, M. (2012), Qualitative Content Analysis in Practice, Sage Publications, Los Angeles, CA.

Schüller, K. (2022), “Data and AI literacy for everyone”, Statistical Journal of the IAOS, Vol. 38 No. 2, pp. 477-490.

Terrasi, V. (2023), “GPT-4: how is it different from GPT-3.5?”, Search Engine Journal, available at: www.searchenginejournal.com/gpt-4-vs-gpt-3-5/482463/#close (accessed 10 January 2024).

UNESCO (2022), “K-12 AI curricula: a mapping of government-endorsed AI curricula”, available at: https://unesdoc.unesco.org/in/documentViewer.xhtml?v=2.1.196&id=p::usmarcdef_0000380602&file=/in/rest/annotationSVC/DownloadWatermarkedAttachment/attach_import_b2ef4ccb-7075-4ecf-8620-1fb450c2223e%3F_%3D380602eng.pdf&locale=en&multi=true&ark=/ark:/48223/pf0000380602/PDF/380602eng.pdf#1223_23%20K-12%20AI%20Curricula_layout_INT.indd%3A.106253%3A1714 (accessed 10 January 2024).

Walters, W.P. and Murcko, M. (2020), “Assessing the impact of generative AI on medicinal chemistry”, Nature Biotechnology, Vol. 38 No. 2, pp. 143-145.

Westerlund, M. (2019), “The emergence of deepfake technology: a review”, Technology Innovation Management Review, Vol. 9 No. 11.

Xian, L., Li, L., Xu, Y., Zhang, B.Z. and Hemphill, L. (2024), “Landscape of generative AI in global news: topics, sentiments, and spatiotemporal analysis”, arXiv preprint arXiv:2401.08899.

Yi, Y. (2021), “Establishing the concept of AI literacy”, JAHR, Vol. 12 No. 2, pp. 353-368.

Further reading

Semeler, A., Pinto, A., Koltay, T., Dias, T., Oliveira, A., González, J. and Rozados, H.B.F. (2024), “Algorithmic literacy: generative artificial intelligence technologies for data librarians”, EAI Endorsed Transactions on Scalable Information Systems, Vol. 11 No. 2.

Corresponding author

Yucong Lao can be contacted at: Yucong.Lao@oulu.fi
