Writing the Perfect Screener

The Hidden Challenge in B2B Tech Research

John Dibling
COO, Technology & Finance Lead
February 7, 2025

There’s no getting around it: B2B tech research is hard. Especially when the technology landscape is constantly shifting right under your feet. Understanding complex technical material is hard. Writing the discussion guide with the right, relevant prompts is hard. Having deeply technical conversations with experts who seem to speak another language (often of the programming variety) is hard. Writing reports that translate tech-talk into actionable strategic business insight is hard.

But the hardest part about B2B tech research? Writing the screener. It’s not even close.

Consider just how important the screener is to a successful B2B tech study. These projects often require finding and talking to highly specialized, rarefied respondents with extremely specific experiences and skillsets. If the screener doesn’t do its job, your study may end up with participants who don’t meet the criteria, leading to insights that aren’t accurate, meaningful, or usable. And that’s a huge waste of time and money for everyone involved. Getting the screener just right is imperative.

So why is it so hard? A few reasons.

Meaningless Tech Titles

One of the first questions on a screener is typically, “What is your job title?” In B2B tech, that doesn’t always work because people’s responsibilities change as quickly as the technology does. So titles often don’t tell you what they actually do.

For example, on a recent study, one respondent had the title “Senior Machine Learning Engineer.” That should mean they develop, train, experiment with, refine, and optimize ML models. But their answers to our open-ended questions made it clear that they were actually responsible for deploying ML models into production and providing ongoing monitoring and maintenance. That job is usually called MLOps Engineer, and it’s a completely different job from Machine Learning Engineer. The two probably go to all the same meetings, but they have completely different responsibilities and perspectives. If we had recruited that respondent based on the “Senior Machine Learning Engineer” title, expecting them to do that job, it would have been a huge waste of money and time.

On the very same study, another respondent had the title “Managing Director and Global Head of Cloud Native Core Banking Integration.” Quite a mouthful, but you get the idea: he should be responsible for implementing cloud integrations in certain applications. His responses, however, revealed that he was actually the company’s Chief AI Officer; they just hadn’t changed his title! Again, a job that could not be any more different from what his title would suggest.

Simply asking for a title isn’t enough, particularly in a rapidly changing and complex field like technology. Unlike healthcare, where titles typically tell you exactly what someone does, tech titles might get you in the ballpark — or they might land you in the football field on the other side of town. As such, the screener needs to go beyond what’s on the business card to reveal the respondent’s actual skills, knowledge, and experiences to make sure they’re the right fit for the study.

Misinterpretation Kills

Complex tech studies often require respondents with highly specific technical experience: the kind that can’t be faked, approximated, or easily found. Here’s a specific example.

For another recent study, we needed to interview key opinion leaders in Confidential Computing, a technology that uses hardware-based Trusted Execution Environments to keep data protected even while it’s being processed. Sounds super niche, right? It is.

We estimated there were maybe 250 people globally who could legitimately participate in our research, and we needed to find 25 of them. So how does one go about writing a screener for that tall order? You can’t screen on title alone because, as we established earlier, titles are virtually meaningless. You can’t come right out and ask respondents whether they have experience in Confidential Computing, because almost nobody does. And yet almost everyone will say they do, mistaking this very specific technology for the more general notion of data confidentiality that’s top of mind for everyone these days. But Confidential Computing is not data confidentiality. Not the same thing at all.

But rather than dwelling on what can’t be done, let’s focus on what works. In our years of specializing in B2B tech research, we’ve found there are three “secrets” to successful screening.

Secret 1: Don’t screen people in. Screen people out.

Typically, recruiters and researchers use screeners to screen people in, finding respondents who meet the study’s criteria. Instead, think of the screener as a tool for screening people out: identifying respondents who don’t have the specific, relevant skills, experiences, and knowledge you need, and crossing them off the list. To do this, however, the screener’s questions must leave zero room for misinterpretation, such as confusing Confidential Computing technology with general data confidentiality.

This intolerance for misinterpretation leads us right into the second secret.

Secret 2: Lean heavily on carefully crafted open-ended questions.

I’d bet that when you read the words “Confidential Computing,” you had an instinctive, general idea of what I meant. If you’re like almost everybody else on the planet, you probably thought about website security, or maybe GDPR, or some other confidentiality-related thing. That’s misinterpretation, and it happens constantly in B2B tech screeners.

We’ve found that the best way to avoid deadly misinterpretation is to let respondents use their own vernacular, rather than expecting them to figure out what we mean by ours. That shifts the responsibility of interpreting their answers onto us, but that’s OK. We’re moderators, after all. That’s our job.

In screeners for highly technical projects, one or two open-ended questions can be an effective way to identify respondents who will add the most value to the study.

Here’s a general guideline for what an open-ended question might look like: “We are looking for individuals with deep and highly technical hands-on expertise regarding [some constraint] with [some technology] within [some domain]. In a few sentences, please share the unique qualifications you would bring to this conversation.”

Here’s another example from a screener we’re running right now: “In the previous question, you mentioned that you use [a specific computer component] for [a type of application]. In a few sentences, describe what the application that uses that component does. In other words, what does your app do?”

This approach works for a number of reasons. First, it gives respondents the opportunity to use their own technical language and jargon, which clearly demonstrate their skills, knowledge, and areas of expertise. Second, it creates boundaries that keep respondents from going off the rails and down rabbit holes, while still allowing for meaningful exploration. Last, and most important, it puts the interpretive burden on the researcher rather than on the respondent. In other words, we’re not relying on the respondent to correctly interpret the term Confidential Computing; instead, based on their answers, we interpret for ourselves whether the respondent has experience with this technology.

So what’s the catch?

Secret 3: Expertise matters. A lot.

Not every researcher has the technical competencies and chops to find the right respondents for complex B2B technology research projects. Interpreting open-ended responses on a screener requires lived experience in the technical world.

Remember that Senior Machine Learning Engineer example? When that respondent describes their day-to-day work in an open-ended question, the research team must be able to recognize immediately that they are actually doing MLOps work and are therefore not the right fit. Without deep technical experience, that crucial distinction might get missed, and it could make or break the quality of the research.

And what about that Managing Director and Global Head of Cloud Native Core Banking Integration? If the study actually calls for Chief AI Officers, a role that barely exists yet in many organizations and almost never carries that exact title, a team screening on titles alone might have disqualified this respondent outright. But when he described his actual responsibilities in the open-ended response, researchers with technical expertise could recognize that he was exactly the right respondent for the study. His organization simply hadn’t caught up with a formal title change.

The bottom line is that this kind of expertise can’t be faked. And it can’t be gained via osmosis from talking with other experts in the field. This peer-to-peer expertise comes from having lived and breathed it, working in the trenches of technology for years, if not decades. Only then can you read between the lines of technical jargon, recognize the real roles behind the titles, and identify the perfect respondents, even when their titles suggest otherwise.