AI Is Great. Until It Isn’t. Where AI-Generated Qual Falls Short.

We’ve all heard the clarion call of AI. The promises of greater efficiencies. Streamlined operations. Faster innovation. Cost savings. For those of us who live and breathe qualitative research, proponents tell us that AI is the inevitable future of our industry. For clients, AI seems to hold the key for delivering the insights they need faster, and for less.

And there’s some truth to these claims. AI does offer significant potential for boosting researcher efficiency and the speed of qual. AI tools are indeed adept at quickly analyzing unstructured feedback and interview transcripts to surface themes and patterns. AI can rapidly sift through enormous qualitative datasets, analyzing not just respondent sentiments but online reviews, social media posts, videos, and all manner of content out there in the ether.

What takes human analysts days or weeks to complete, AI can do in minutes or even seconds. It can cluster data to identify patterns and themes on the fly. Machine learning algorithms can analyze traits and preferences embedded within qualitative data to uncover new audience segments that lead to more targeted messaging and campaigns. Taking this vast amount of information, AI makes quick work of condensing it into digestible findings that secure stakeholder buy-in.

By automating the data crunching and manual processes, AI frees human moderators and analysts to focus on what they do best: delving deeper with respondents to uncover rich, nuanced insight that drives smarter business decisions. For more on this symbiotic relationship, read our last blog post.

Yes, AI is great! That is, until it isn’t.

AI Buyer’s Remorse

Businesses appear to have gone all-in with AI. In 2024, US corporate investment in AI initiatives reached $252.3 billion—up 12.1% over the previous year. Industry analysts predict that US companies are likely to spend upwards of $320 billion on AI in 2025. Recent reports, however, indicate that a growing number of companies are beginning to experience AI buyer’s remorse.

After investing millions in Watson Health, and its flagship Watson for Oncology AI, IBM was forced to sell off the division for a fraction of its investment. The reason: hospitals found the AI system frequently delivered unsafe or incorrect medical advice, often missing clinical nuances or misapplying therapies based on limited training data. The whole debacle is seen as one of the most notorious AI failures in healthcare.

In another infamous case, Amazon ended up scrapping its AI-based recruitment system developed to streamline hiring, due to bias against women. Trained on resumes from a mostly male-dominated tech workforce, the algorithm favored male applicants and penalized resumes that included references to women’s colleges or achievements.

Then there’s Zillow’s AI Zestimate tool, which relied on predictive models to set home prices, and ended up overestimating the value of tens of thousands of properties. As a result, Zillow had to resell homes for less than their purchase price, resulting in significant financial losses. The company ultimately closed its home-buying division, blaming AI inaccuracies.

More recently, the DoNotPay online subscription service promoted its chatbot as “the world’s first robot lawyer,” claiming it could handle complex legal tasks. When the bot failed to provide reliable answers, the company was fined by the FTC, exposing the limitations and risks of AI when handling critical services.

Turns out, AI isn’t always a good substitute for humans. Just ask Swedish FinTech giant Klarna. After slashing 40% of its staff to make room for AI, the company ended up rehiring the replaced human employees, citing a decline in performance and service.

And while businesses may be excited by AI, their customers are often less than enthused. When Coca-Cola released an AI-generated ad campaign, for instance, the company was widely criticized for substituting AI for real human creativity, causing significant damage to the brand’s reputation. After language app Duolingo announced itself as an AI-first company earlier this year, with plans to phase out the use of human contract workers, the company reported lower-than-expected user growth, attributed to public backlash.

These companies represent a broader trend in 2025: businesses that initially poured millions of dollars into AI are now pulling back or recalibrating after realizing the cost efficiencies and financial returns have fallen below expectations. In fact, a growing number of experts (including OpenAI CEO Sam Altman) are predicting an AI bubble that’s ready to burst, leading to a crash similar to the dot-com bust in the early 2000s.

AI in Qual: Is It Working?

So what of AI use specifically in market research? Companies appear to be embracing AI on that front as well. Microsoft Teams, for example, uses natural language processing (NLP) to analyze millions of user comments to quickly pinpoint issues with its interface so developers can make updates in days instead of months.

Unilever replaced traditional focus groups with AI-enabled facial expression analysis of video ads to see where their audience was disengaging. The company credits the system for improving the ads and boosting brand recall by 20%. PepsiCo also turned to AI to mine consumer behavior and social media sentiment, then used the findings to refine product messaging for a line of sparkling water.

They’re not alone. According to Qualtrics’ 2025 Market Research Trends Report, 89% of researchers are already using AI tools and 83% reported that their organizations plan to significantly increase AI investment in 2025. What’s more, 71% believe that the majority of market research will be done using synthetic responses within three years.

But is AI really all that when it comes to qual? Recent research suggests otherwise.

Imprecise and Superficial

A study evaluating ChatGPT for qualitative analysis of interview transcripts identified several shortcomings, including imprecise approximations and major errors. The study’s authors determined that ChatGPT had difficulty detecting nuanced or implicit content, failed to verify coding consistency, and tended to generate only surface-level summaries. Their conclusion: although AI can provide a starting point, it frequently falls short in the depth and precision necessary for robust qualitative research.

Lack of Human Judgment, Bias, and Missing Context

Another study revealed that overreliance on AI can come at the expense of human judgment essential for understanding context and emotion—things AI often has trouble recognizing. AI systems can adopt biases from the data they were trained on, which may distort the results. And many AI tools work like a “black box,” making it hard to know how they arrived at their conclusions. Because of these challenges, researchers emphasize the importance of combining AI-generated findings with human review. This way, the insights remain accurate, meaningful, and ethically responsible.

Deskilled Researchers

Another concern uncovered by researchers is the risk of de-skilling. As AI takes over tasks like coding data or spotting patterns, researchers may lose opportunities to build important analytical skills and develop a deeper understanding of the qual field.

Uniform & Generic Findings

There’s also a bigger issue: AI might make research outcomes too uniform. Because AI works by learning patterns, it often reproduces what already exists instead of generating truly new or challenging ideas. This could limit creativity and diversity of thought, which are essential for keeping qualitative research vibrant and intellectually rich.

Data Privacy Concerns

And then there’s the problem of data privacy. AI tools often handle personal data without making it clear how that information is being used or stored. Because of this, issues like privacy and consent become especially important.

What AI Lacks, Human Researchers Have

For all its promises around automated efficiencies, faster turnaround, and lower costs, AI alone just isn’t enough to get the rich, nuanced, meaningful insight businesses need to make decisions that actually move the needle. And here’s why.

  • AI can’t interpret subtle human nuances. It misses the context in open-ended responses and misreads colloquial language, metaphors, cultural references, sarcasm, humor, and other human qualities.
  • When trained on faulty or biased data, AI models can perpetuate errors and mislead decisions with skewed results. This can reinforce stereotypes or lead to discriminatory outcomes.
  • Mass collection and processing of qualitative data raises privacy and regulatory questions, especially when sensitive consumer information is involved.
  • Organizations that rely solely on AI run the risk of losing the human judgment and creative reasoning essential to qualitative market research.
  • AI frequently hallucinates—making up nonexistent insights—or fails to grasp the complexities of cultural, emotional, or social factors that drive real human behavior.

How to Avoid AI Failure

When it comes to AI in qual, it doesn’t have to be all or nothing. At Thinkpiece, we’ve found the most effective qual strategy blends AI’s efficiency and analytical power with human creativity, intuition, and critical thinking. Our experienced human researchers lead our studies, ensuring that the data, patterns, and themes identified by AI are accurate, interpreted correctly, and contextualized, avoiding missteps that arise from algorithmic blind spots.

AI is a great tool for qualitative market research—so long as we understand, acknowledge, and respect its limitations and rely on human experts and judgment to keep it in check. For qual researchers to survive and thrive, the question isn’t whether to use AI or not. Rather, we should be asking ourselves: How will we use AI? What’s the wisest application for this powerful technology?

For us, the answer can be found in our new quick-turn qual solution, ThinkFast—expert-led, human-to-human qual insights delivered with AI efficiency and speed in five to 14 days. It’s AI done right, and qual research done better.