AI Buyer’s Remorse
Businesses appear to have gone all-in on AI. In 2024, US corporate investment in AI initiatives reached $252.3 billion, up 12.1% over the previous year, and industry analysts expect US companies to spend upwards of $320 billion on AI in 2025. Recent reports, however, indicate that a growing number of companies are beginning to experience AI buyer’s remorse.
After investing billions in Watson Health and its flagship Watson for Oncology AI, IBM was forced to sell off the division for a fraction of its investment. The reason: hospitals found that the system frequently delivered unsafe or incorrect treatment recommendations, often missing clinical nuances or misapplying therapies because of limited training data. The debacle is widely regarded as one of the most notorious AI failures in healthcare.
In another infamous case, Amazon scrapped an AI-based recruitment system developed to streamline hiring after discovering it was biased against women. Trained on resumes from a male-dominated tech workforce, the algorithm favored male applicants and penalized resumes that included references to women’s colleges or achievements.
Then there’s Zillow’s AI-driven Zestimate tool, which relied on predictive models to set home prices and ended up overestimating the value of tens of thousands of properties. Zillow had to resell homes for less than it paid, taking significant financial losses, and the company ultimately closed its home-buying division, blaming the model’s inaccuracies.
More recently, the online subscription service DoNotPay promoted its chatbot as “the world’s first robot lawyer,” claiming it could handle complex legal tasks. When the bot failed to deliver reliable answers, the FTC fined the company for making deceptive claims, exposing the limitations and risks of relying on AI for high-stakes services.
Turns out, AI isn’t always a good substitute for humans. Just ask Swedish fintech giant Klarna. After slashing 40% of its staff to make room for AI, the company ended up bringing human agents back, citing a decline in performance and service quality.
And while businesses may be excited about AI, their customers are often less than enthused. When Coca-Cola released an AI-generated ad campaign, for instance, the company was widely criticized for substituting machine output for real human creativity, significantly damaging the brand’s reputation. And after language app Duolingo announced earlier this year that it was becoming an AI-first company, with plans to phase out human contract workers, it reported lower-than-expected user growth, which it attributed to public backlash.
These cases reflect a broader trend in 2025: businesses that initially poured millions of dollars into AI are now pulling back or recalibrating after realizing that the cost efficiencies and financial returns have fallen below expectations. In fact, a growing number of experts (including OpenAI CEO Sam Altman) are predicting an AI bubble that’s ready to burst, leading to a crash similar to the dot-com bust of the early 2000s.
AI in Qual: Is It Working?
So what of AI use specifically in market research? Companies appear to be embracing AI on that front as well. Microsoft, for example, uses natural language processing (NLP) to analyze millions of user comments about Teams, quickly pinpointing interface issues so developers can ship updates in days instead of months.
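For a sense of what that kind of analysis involves, here is a minimal sketch of NLP-based feedback triage in Python, assuming the Hugging Face transformers library; the sample comments and the simple keyword tally are illustrative, not Microsoft’s actual pipeline.

```python
# Minimal sketch: triage user feedback with off-the-shelf NLP.
# Assumes the Hugging Face `transformers` library; the comments and
# keyword tally are illustrative, not Microsoft's actual pipeline.
from collections import Counter
from transformers import pipeline

comments = [
    "The new sidebar hides my pinned channels.",
    "Love the dark mode update!",
    "Search is painfully slow since the last release.",
    "Notifications stopped working after the update.",
]

# Off-the-shelf sentiment classifier (downloads a default model on first run).
sentiment = pipeline("sentiment-analysis")

# Flag negative comments, then tally recurring terms to surface hot spots.
negative = [c for c, r in zip(comments, sentiment(comments)) if r["label"] == "NEGATIVE"]
terms = Counter(w.lower().strip(".,!") for c in negative for w in c.split() if len(w) > 4)

print(f"{len(negative)}/{len(comments)} comments negative")
print("Recurring terms:", terms.most_common(3))
```

At scale, the same two steps, classify and aggregate, are what let a team sift millions of comments down to a ranked list of problem areas.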
Unilever replaced traditional focus groups with AI-enabled facial expression analysis of viewers watching its video ads to see where the audience was disengaging. The company credits the system with improving the ads and boosting brand recall by 20%. PepsiCo likewise turned to AI to mine consumer behavior and social media sentiment, then used the findings to refine product messaging for a line of sparkling water.
They’re not alone. According to Qualtrics’ 2025 Market Research Trends Report, 89% of researchers are already using AI tools, and 83% report that their organizations plan to significantly increase AI investment in 2025. What’s more, nearly three in four (71%) believe that the majority of market research will be done using synthetic responses within three years.
But is AI really all that when it comes to qual? Recent research suggests otherwise.
Imprecise and Superficial
A study evaluating ChatGPT for qualitative analysis of interview transcripts identified several shortcomings, including imprecise approximations and major errors. The study’s authors determined that ChatGPT had difficulty detecting nuanced or implicit content, failed to verify coding consistency, and tended to generate only surface-level summaries. Their conclusion: although AI can provide a starting point, it frequently falls short in the depth and precision necessary for robust qualitative research.
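One of those gaps, coding consistency, is something researchers can audit directly. Below is a minimal sketch of comparing AI-assigned codes against a human coder using Cohen’s kappa via scikit-learn; the excerpts and code labels are hypothetical, and the cited study does not prescribe this particular method.

```python
# Minimal sketch: audit AI-assigned qualitative codes against a human
# coder using Cohen's kappa. The code labels below are hypothetical;
# this is not the cited study's methodology.
from sklearn.metrics import cohen_kappa_score

# One code per interview excerpt, assigned independently by each coder.
human_codes = ["trust", "cost", "trust", "usability", "cost", "trust"]
ai_codes    = ["trust", "cost", "usability", "usability", "trust", "trust"]

kappa = cohen_kappa_score(human_codes, ai_codes)
print(f"Cohen's kappa: {kappa:.2f}")  # values below ~0.6 signal weak agreement
```

A low kappa flags exactly the kind of inconsistency the study describes, before the AI’s codes ever make it into the final analysis.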