Recruitment Perspectives

AI respondents in qual research.

In the final installment of our Perspectives series focused on recruitment challenges, we’re digging into the potential risk AI poses for qual researchers, recruiters, and clients. CEO Bonnie Dibling, field manager Cori Bussetti, and technology moderator Chris Dethloff discuss whether there’s cause for concern about fake respondents using AI to get into studies.

The Question
Should recruiters and qual researchers be worried about fake respondents using AI?
The Consensus

Fraudsters using AI tools and bots to get into studies are not currently a big problem for qual researchers and recruiters, particularly when it comes to complex B2B studies that require respondents with specialized knowledge and specific experiences. But it may become a concern as AI tools advance.

CEO Perspective
“If someone using ChatGPT managed to fake their way through a screener, the chances are extremely high that they’d be identified as a fraud almost immediately.” — Bonnie Dibling, CEO

For healthcare studies, it’s usually pretty clear — pretty quickly — whether respondents know what they’re talking about once we start interviewing them.

I can potentially see a fraudster using AI to get through a screener, and I do know it’s happening in quant. Quant already has a problem with fraudulent survey submissions from people who just want to get the incentive. AI and the proliferation of bots are likely only making that problem worse. It’s clearly an issue for quant researchers because it degrades the sample quality and therefore the data.

That’s where qual has an advantage. If a bot or someone using a tool like ChatGPT managed to fake their way through a screener, the chances are extremely high that they’d be identified as a fraud in the interview or focus group almost immediately, especially in complex areas of study like technology and healthcare.

You may see more of an issue with AI in research methods like bulletin boards, which are moderated, anonymous online discussion forums where participants are incentivized. But at Thinkpiece, we’re not seeing AI fraudsters showing up in our studies.

Field Manager Perspective
“I don’t think AI fraud is a big concern in the qual world. It’s just too easy to verify the respondent. But I do think it might get worse as AI tools get smarter.” — Cori Bussetti, Field Manager

I’ve seen a few instances where an open-ended screener may have drawn some AI-generated responses. In cases like that, where red flags are raised, all you have to do is touch base with the respondent by phone to validate that they are: a) a real person, and b) who they claim to be. You can also do a quick search on LinkedIn to vet their credentials.

So I don’t think AI fraud is a big concern in the qual world yet. It’s just too easy to verify the respondent and find out if they’re faking or a bot. But I do think it might get worse as AI tools get smarter. I could see a scenario, for example, where someone uses AI to fake being a patient so they can get into a study for access to a prescription medication.

It might also end up as a tool for corporate espionage, with companies using AI to fake their way into their competitors’ studies to get confidential information or sabotage and skew results.

So, it will become even more important for qual recruiters and researchers to do their due diligence, create strong screeners that weed out the fraudsters, and verify that respondents are really who they claim to be.

Tech Moderator Perspective
“For qual, right now there’s probably a zero-percent chance that someone using AI is going to get into the study even if they get past the screener — especially not in technology.” — Chris Dethloff, Tech Moderator

For now, fake respondents using AI are more of an issue for quant than for qual. I know that quant recruiters and researchers have done a good job of identifying when it’s an AI responding to their surveys — if the answers are too clean or too grammatically correct, for example.

For qual, right now there’s probably a zero-percent chance that someone using AI is going to get into a study even if they get through the screener — especially not in technology studies. People may try to fake it to get paid, but as soon as I interview them face-to-face, I can tell and end the interview.

Conceivably, someone could try to use AI to make it through a screener. It’s the recruiter’s job to keep the AI fakers out. That’s also why it’s so important to have a good screener that will be able to identify anyone who doesn’t really know what they’re talking about or who is trying to impersonate someone with specific technology knowledge or experiences.

For now, I don’t see AI as a threat to qual research recruitment. Of course, as with all things AI-related, this could change. But it would take a tremendous amount of work for someone to use AI to finagle their way into a study, and for what purpose? Whatever they ended up getting out of it would probably not be worth the time and effort.