
Brennan Held

AI in Marketing Research: Savvy Researcher or Summary Specialist?

AI is rapidly advancing and shaking things up across a variety of industries, and marketing research is no exception. It’s clear that AI holds immense potential for analytic use in this field, yet we wanted to dig in ourselves to see how it stacks up against the claims. To get a true picture of AI’s capabilities, MarketVision has been actively testing different AI analysis tools within our qualitative research practice to identify both their limits and their potential to deliver value.

UNLOCKING AI STRENGTHS

From our experiences to date, AI-powered text summaries tend to accurately capture high-level themes and directional insights from qualitative data. Acting as an impartial third party, AI can efficiently aggregate data points across sources into cohesive narratives. For certain types of research, like concept testing, using AI to churn through the raw data first can surface initial takeaways that serve as a springboard for deeper analysis.

NAVIGATING AI LIMITS 

However, AI isn’t a silver bullet. For all its power, we’ve run into some clear limits:

  • Surface vs. Synthesis: While adept at identifying patterns and themes in data, AI falls short of providing deeper strategic implications and recommendations, the crucial “so whats?” that drive actionable insights.

  • Quality In Drives Quality Out: AI’s effectiveness hinges on the quality of the data it’s trained on and the inputs provided. Flaws, ambiguities, or nuances in the source material can lead to skewed or confounded outputs. In one study, AI struggled to differentiate concepts from their alternate versions, leading to muddled outputs. A careful human eye is required to verify AI’s claims and findings.

  • Accuracy and Hallucinations: While infrequent, we did run into a few cases where AI either generated false claims or cited fake verbatims to match its analysis. Whether these issues resulted from human error (e.g., flawed prompting) or a wrench in the algorithm, they give us pause about the risk of blindly trusting AI outputs. As a result, we caution users to vet claims, particularly those that don’t mesh with the researcher’s experience and intuition.

  • Lacks Nuance: Human communication is rife with nuances like sarcasm, metaphors, and even regional dialects that AI often fails to interpret properly. Communication relies heavily on the way things are said, or the things left unsaid, and AI’s current proclivity for taking statements at face value often leads to skewed interpretations. Most human analysts would catch these cues intuitively; AI is routinely thwarted by them.

MAXIMIZING AI POTENTIAL 

Yet just because AI is limited doesn’t mean it’s useless. The most effective approach is to leverage AI as a collaboration partner alongside directive human intelligence. And as in any successful partnership, we have found that preparation is key.
 
Here are a few simple prep steps we’ve found helpful: 

  • Verbal Cues: If you’re doing concept testing, make sure your concept labels are mentioned frequently in interviews to give the AI context to associate with respondent evaluations. It’s similar to live research when a client sitting in the backroom can’t see which concept the respondent is looking at, so the moderator overtly calls out the appropriate code to keep everyone tracking the conversation. AI needs this too!

  • Keyword List: Additionally, while interviews are underway, create a keyword list, either respondent-specific or project-wide, to focus the AI’s analysis. We’ve also found this meaningfully improves transcription accuracy.

  • Smart Queries: Beyond just providing the AI with data, the way queries are structured is crucial. Specific questions rather than overly broad ones (e.g., “Describe respondents’ reactions to the headline of Concept R” versus “How did respondents feel about Concept R as a whole?”) tend to yield more valuable information, and single-part questions beat multi-part ones. The sketch below shows one way to put this into practice.
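To make the Smart Queries tip concrete, here is a minimal sketch in Python, assuming an OpenAI-style chat completions API. The model name, transcript file, and “Concept R” label are illustrative assumptions, not a recommendation of any particular tool:

    # Minimal sketch: a broad query versus a specific, single-part query.
    # Assumptions (not from the article): an OpenAI-style API, a local
    # transcript file, and the illustrative model name "gpt-4o-mini".
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Hypothetical transcript from a concept-testing interview.
    with open("interview_transcript.txt") as f:
        transcript = f.read()

    queries = [
        # Broad: tends to return a generic, less actionable summary.
        "How did respondents feel about Concept R as a whole?",
        # Specific and single-part: grounds the model in one element.
        "Describe respondents' reactions to the headline of Concept R.",
    ]

    for query in queries:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You analyze qualitative research transcripts."},
                {"role": "user",
                 "content": f"Transcript:\n{transcript}\n\nQuestion: {query}"},
            ],
        )
        print(f"Q: {query}\nA: {response.choices[0].message.content}\n")

The second query gives the model one concrete element to anchor on, which is why this style of question tends to return sharper, more usable detail.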

WHERE WE’VE LANDED

In short, AI outputs cannot be taken at face value. Human analysts remain essential to interpret AI outputs and understand their real-world business impact. Embracing AI as a ‘Summary Specialist’ can be a smart complement to an Insights Team. It’s akin to the novice researcher who effectively summarizes the facts but doesn’t yet have the research chops to synthesize and unlock deep insights. That’s where we human analysts come into play. Leveraging AI frees up our mental bandwidth to do what we ‘Savvy Researchers’ do best: dig for gold (i.e., elicit strategic insights that drive meaningful action).

STAY TUNED for an update on our experiences with AI: Next up, we will explore a Human vs. AI head-to-head match-up on exploratory research topics.

About the Author:

Brennan Held

Brennan Held is a Senior Research Associate on our Qualitative Team.
