Can Generative AI Replace Market Research? Not Yet

11/25/2024

Jacob Nelson, Harris Poll

The past few years have seen an explosion of interest in generative AI models, and here at The Harris Poll, we’ve been diving deep into how these tools can fit into the market research world. Large Language Models (LLMs) are undeniably impressive research assistants, but can they replace traditional research methods?

We set out to answer that question by testing an ambitious idea: could an LLM generate meaningful market research insights entirely “ex nihilo”—that is, without collecting any actual data? Spoiler alert: the answer is no. But the process of conducting this research offered interesting lessons about both the promise and the limitations of these AI tools.

A Case Study in Pricing Research

The inspiration for our experiment came from a real client project we worked on in 2023. We conducted traditional pricing research for a premium consumer packaged goods (CPG) brand sold at warehouse clubs like Costco and Sam’s Club. This was during a time of rising inflation, and the client needed guidance on how to price both their current products and new pack sizes to stay competitive and profitable.

For this study, we surveyed 523 participants recruited from online panels and used tried-and-true methods: a simplified Van Westendorp (VW) price sensitivity analysis and a store-shelf conjoint study. These approaches have reliably helped brands like this one understand price perceptions and demand elasticity.
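At its core, a Van Westendorp analysis reduces to finding where cumulative price-threshold curves cross. As an illustration only (not the study's actual code, and with a function name and grid resolution of our own choosing), here is a minimal sketch of one VW landmark, the Optimal Price Point, where the "too cheap" and "too expensive" curves intersect:

```python
import numpy as np

def van_westendorp_opp(too_cheap, too_expensive):
    """Estimate the Optimal Price Point (OPP) from two VW survey questions.

    too_cheap: each respondent's price below which they would doubt quality.
    too_expensive: each respondent's price above which they would not buy.
    The OPP is where the share calling a price 'too cheap' (threshold >= p,
    a falling curve) equals the share calling it 'too expensive'
    (threshold <= p, a rising curve).
    """
    too_cheap = np.asarray(too_cheap, dtype=float)
    too_expensive = np.asarray(too_expensive, dtype=float)
    lo = min(too_cheap.min(), too_expensive.min())
    hi = max(too_cheap.max(), too_expensive.max())
    grid = np.linspace(lo, hi, 500)  # candidate price points
    pct_cheap = np.array([(too_cheap >= p).mean() for p in grid])
    pct_expensive = np.array([(too_expensive <= p).mean() for p in grid])
    # OPP: the grid price where the two curves are closest (ideally crossing)
    return grid[np.argmin(np.abs(pct_cheap - pct_expensive))]
```

With uniformly spread thresholds of, say, $10–$29 for "too cheap" and $20–$39 for "too expensive", the curves cross between $24 and $25, which is where the sketch places the OPP.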

Then came the fun part: seeing how six different LLMs stacked up against these traditional methods. Using carefully crafted prompts, we asked the models to estimate optimal prices, price perceptions, and price elasticity.

The Results: Not Ready for Prime Time

We were skeptical from the start, and unfortunately, the results confirmed our doubts. Here’s what we found:

  1. Overestimating Price Levels. The LLMs consistently overshot on price. In 81% of cases, their “optimal” price predictions were higher than the survey-derived figures, often by more than 25%. That’s a huge miss when pricing precision is critical for our clients.
  2. Logical Inconsistencies. When asked for a good “bargain” price versus a good “high-end” price, the models often returned identical values. This lack of nuance exposed a fundamental gap in their ability to think like, or understand, a consumer.
  3. Struggles with Context. We suspect the models were biased by their training data, which likely skews toward online prices. Warehouse-club prices are often lower, so the predictions were wildly off base. Even supplementing the prompts with real-world price data from online sources made the outputs more inconsistent, not less.
  4. Flawed Elasticity Estimates. The LLMs’ attempts at estimating price elasticity were simplistic. Some merely mirrored the input prompts, suggesting a 10% price hike would lead to a 10% drop in sales, ignoring the complex dynamics of real-world markets.

Where LLMs Excel (and Where They Don’t)

Despite these challenges, LLMs aren’t useless in market research. Far from it! We believe that, properly employed, they excel at tasks like summarizing survey results, generating initial hypotheses or prior distributions, identifying competitors, and analyzing open-ended survey responses.

But when it comes to generating research insights “ex nihilo,” especially in areas requiring measurement precision, they fall short.

The Takeaway: Complement, Don’t Replace

Generative AI is exciting, and its cost-efficiency is tempting, but it’s not ready to stand on its own in market research. Instead, we continue to see it as a complementary tool, a way to make traditional research methods faster, richer, and more efficient. These models can enhance our work, but they are still far from ready to take the lead.