AI and Misinformation on Social Media: Addressing Issues of Bias and Equity across the Research-to-Deployment Process

04/29/2024

Brandon Sepulvado and Amelia Burke-Garcia, NORC at the University of Chicago

Misinformation ranks among the top challenges for public health and for public opinion on health topics. AI-generated misinformation increases the spread of, and exposure to, misleading health and medical information, posing a major challenge to health and well-being and disproportionately impacting historically marginalized and minoritized communities.

At the same time, AI offers the possibility of helping to tackle problems stemming from misinformation. We describe key ways that AI can help combat health-related misinformation on social media, focusing on issues of bias and equity that can arise from the conceptualization of models through their deployment in health communication efforts.

DETECTING & IDENTIFYING MISINFORMATION

One approach to studying misinformation on social media might be called “fact checking.” In this approach, researchers identify statements made within social media posts and then verify whether the statements are true or false based on existing databases of knowledge or factual claims.
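
To make this concrete, here is a minimal sketch of the matching step in a fact checking pipeline, assuming a small illustrative list of verified claims and TF-IDF similarity as a stand-in for a production fact database and matching model:

```python
# Minimal sketch of the fact checking approach: compare statements extracted
# from posts against a small database of verified claims using TF-IDF
# similarity. The claims, statements, and matching method are illustrative;
# a production system would use a curated fact database and a stronger model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

verified_claims = [
    "COVID-19 vaccines are not associated with infertility.",
    "COVID-19 vaccines do not alter human DNA.",
]

post_statements = [
    "The vaccine changes your DNA permanently.",
    "Getting vaccinated causes infertility in women.",
]

vectorizer = TfidfVectorizer().fit(verified_claims + post_statements)
claim_vecs = vectorizer.transform(verified_claims)

for statement in post_statements:
    sims = cosine_similarity(vectorizer.transform([statement]), claim_vecs)[0]
    best = sims.argmax()
    # High similarity means the statement addresses a claim we can check;
    # a human or downstream model then judges agreement or contradiction.
    print(f"{statement!r} -> closest verified claim: {verified_claims[best]!r} "
          f"(similarity={sims[best]:.2f})")
```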

Another approach to studying misinformation—one taken by NORC in an ongoing study investigating misinformation surrounding COVID-19 vaccines—focuses instead on leveraging AI to identify the framing strategies used in social media posts.

Rather than arbitrating whether a statement made within a post is true or false, AI is used to assess the rhetorical strategies the post uses. For example, does a post emphasize uncertainty about the safety of, or knowledge around, COVID-19 vaccines, or does it try to garner legitimacy by referencing an authority, such as a public figure or government agency? Examples of such framing strategies may be found in the US Surgeon General’s Health Disinformation Toolkit, CISA’s disinformation documentation, and the Union of Concerned Scientists’ Disinformation Playbook.
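
As an illustration of how framing strategies might be detected automatically, the sketch below uses an off-the-shelf zero-shot classifier to score a post against candidate frames; the model choice and frame labels are assumptions for illustration rather than the models or codebook used in the NORC study:

```python
# Illustrative sketch of the framing strategy approach: a zero-shot classifier
# scores a post against candidate frames rather than judging truthfulness.
# The model name and frame labels are illustrative assumptions; a real study
# would derive frames from the toolkits referenced above and evaluate the
# model for this task.
from transformers import pipeline

frames = [
    "emphasizes uncertainty about vaccine safety",
    "appeals to an authority such as a public figure or agency",
    "appeals to personal freedom",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

post = "Even top officials admit they don't know what's in these shots."
result = classifier(post, candidate_labels=frames, multi_label=True)

for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
```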

Identifying framing strategies alleviates the burden of determining whether something is accurate or inaccurate. In the space of new and emerging health information, this can be quite valuable because the science may be changing quickly.

It also offers insight into how such misinformation may be shared and, in turn, allows communicators of evidence-based health information to choose different messages or frames for their own message.

This approach can also be complemented by the fact checking approach. The framing strategies have long been recognized by science and health communication researchers, but, by co-opting these frames, misinformation spreaders make it difficult to identify what is and is not misinformation. Combining the identification of framing strategies with a fact checking approach makes it possible to identify misinformation more accurately.

COUNTERING MISINFORMATION

AI might also help actively counteract misinformation.

Some early research suggests that AI models can be used to deliver accurate health information in an empathetic way, thus garnering acceptance of that information from the individuals who interact with the AI models. Building on our understanding of the role of opinion leaders in health promotion programs, health communication professionals might be able to leverage these strengths to build AI agents that convey accurate health information in a trustworthy and credible way.

This approach, called “Health Communication AI” by Burke-Garcia and Soskin Hicks, “blends the authenticity of social media influencers with AI’s technological scale capabilities informed by accurate and up-to-date health- and health communication-related expertise.”[i]

Perhaps most powerful, however, is the possibility that AI can do this at scale. Currently, much of the communication of reliable health information depends on human interaction, whether face-to-face or digital, which does not scale effectively to address the problem of health misinformation. “Health Communication AI” addresses this issue.
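
As a rough illustration only, the sketch below shows how such an agent might be prompted using a general-purpose LLM API; the model, prompt, and example exchange are assumptions for illustration and are not the approach described by Burke-Garcia and Soskin Hicks:

```python
# Minimal sketch of a "Health Communication AI"-style agent, assuming access
# to OpenAI's chat completions API. The model name, system prompt, and user
# message are illustrative assumptions, not the cited authors' method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are a trusted community health communicator. Respond with empathy, "
    "acknowledge the person's concerns, and share only evidence-based guidance "
    "from public health authorities. If you are unsure, say so and point to a "
    "credible source rather than guessing."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I heard the new vaccine wasn't tested properly. Should I be worried?"},
    ],
)

print(reply.choices[0].message.content)
```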

RISK OF BIAS THROUGHOUT THE AI DEVELOPMENT PROCESS

When using AI to study misinformation on social media, bias can arise in many ways. We provide a non-exhaustive overview below. As we hope is evident, these biases can arise at every step of the research process used to build AI tools, as well as in the subsequent use of those tools; without vigilance in identifying and remedying each, the same AI systems that hold great promise to benefit health communication can also produce disproportionate negative impacts on historically marginalized and minoritized communities.

Data Collection. One type of bias can arise during the data collection process and result in systematic differences in how AI systems behave. For example, the fact checking approach tends to focus primarily, if not exclusively, on text data.

For social media platforms like X (formerly Twitter), this strategy might make sense, but for platforms that center images and videos, it neglects an important type of data. Social media posts are often collected on the basis of keyword queries, meaning that due diligence must be performed to ensure that a representative set of posts is returned.
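
One simple form of due diligence, sketched below, is to estimate how much of a small hand-labeled sample of relevant posts a keyword query actually retrieves; the keywords and posts here are illustrative:

```python
# A simple due-diligence check, assuming a small hand-labeled sample of posts
# known to be relevant: estimate what share of them the keyword query would
# actually retrieve. Low recall, especially for particular subgroups or ways
# of talking about the issue, suggests the query misses part of the
# conversation. Keywords and posts are illustrative.
keywords = {"vaccine", "vaccination", "vax"}

relevant_sample = [
    "Got my covid shot yesterday, arm is sore",        # no query keyword at all
    "The vaccine rollout in my county has been slow",  # matched by "vaccine"
    "Anti-vax posts are everywhere on my feed",        # "anti-vax" won't match "vax"
]

def matches(post: str, keywords: set[str]) -> bool:
    tokens = set(post.lower().split())
    return bool(tokens & keywords)

hits = [post for post in relevant_sample if matches(post, keywords)]
recall = len(hits) / len(relevant_sample)
print(f"Estimated recall of keyword query: {recall:.0%}")
for post in relevant_sample:
    if not matches(post, keywords):
        print("Missed:", post)
```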

Data deletion is another issue for social media platforms, as some people may be more likely to delete their posts, for instance after a major event or public outcry.

Data Labeling. Similarly, the process of annotating data from which AI systems are built can bias the performance of the systems.

In the framing strategy approach, the different strategies labeled by researchers might not be inclusive of all possible outcomes or strategies, or they might systematically ignore communication strategies used by certain groups.

When researchers create training data from which AI tools can learn, it is also important that all annotators understand the rhetorical strategies in the same way. Ambiguity in the underlying communication strategies being investigated can result in instability and bias in AI systems built to monitor public health communication.
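
A common safeguard is to measure agreement between annotators before training on their labels; the sketch below computes Cohen's kappa on a small illustrative example:

```python
# A quick check, assuming two annotators have labeled the same posts with
# framing strategies: low agreement (Cohen's kappa) signals that the strategy
# definitions are ambiguous and should be refined before training an AI system.
# The labels below are illustrative.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["uncertainty", "authority", "authority", "uncertainty", "none"]
annotator_b = ["uncertainty", "uncertainty", "authority", "uncertainty", "none"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # low values warrant revisiting the codebook
```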

Fact checking approaches rely upon a set of “facts” against which statements are compared. If the database of facts is out of date or incomplete, or otherwise skewed in some way, the results will be biased.

Models. Some models used within AI systems might also contribute to biased performance. A burgeoning line of research shows that large language models (LLMs), such as ChatGPT, often exhibit biases along racial, gender, and national lines, and similar biases have been found in the computer vision models needed to harness the vast image and video data on social media platforms.
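
One simple way to surface such biases is to probe a model with prompts that differ only in a demographic term and compare the completions; the sketch below does this with a masked language model (the model choice and templates are illustrative assumptions):

```python
# Minimal probe of one kind of model bias, assuming a masked language model
# from the Hugging Face hub: compare top completions for templates that differ
# only in a demographic term. Systematic differences in the completions are
# one signal of encoded bias; a real audit would use validated benchmarks.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for subject in ("He", "She"):
    completions = fill(f"{subject} worked as a [MASK].", top_k=5)
    tokens = [c["token_str"] for c in completions]
    print(subject, "->", tokens)
```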

Use of AI Tools. After AI tools are developed, there may still be ways in which such tools result in disproportionate negative effects, whether intended or unintended. For example, engaging with AI tools requires access to the internet and to technologies and devices capable of running relevant software.

Relatedly, many developers from historically marginalized and minoritized communities—who are best poised to recognize the needs of and develop solutions for their communities—might not have financial, computing, or other resources to develop AI solutions that compete with those from large corporations.

LOOKING FORWARD: HEALTH EQUITY AND AI

With the ubiquity of misinformation and disinformation online, understanding the flow of these types of information and countering them with reliable information is vital to ensuring public health.

AI offers the possibility of helping to tackle these challenges, and equity must remain a cornerstone around which AI development revolves. The field of AI is evolving rapidly, and we already see well-known AI tools adopting the forms and perspectives of the privileged.

Without having all communities at the table when researching, developing, and deploying AI, we risk irrevocably eroding trust. Historically marginalized and minoritized communities must be part of the process of building the scientific agenda and engaging communities to do the work of developing AI tools for countering misinformation and disinformation.

[i] Burke-Garcia, A., & Soskin Hicks, R. (in press). Scaling the Idea of Opinion Leadership to Address Health Misinformation: The Case for “Health Communication AI”. Journal of Health Communication.