Meaningless Tech Titles
One of the first questions on a screener is typically, “What is your job title?” In B2B tech, that doesn’t always work because people’s responsibilities change as quickly as the technology does. So titles often don’t tell you what they actually do.
For example, on a recent study, one respondent had the title “Senior Machine Learning Engineer.” That should mean they develop, train, experiment with, refine, and optimize ML models. But their answers to our open-ended questions made it clear that they were actually responsible for deploying ML models into production and providing ongoing monitoring and maintenance. That job is often called MLOps Engineer, and it’s a completely different job from Machine Learning Engineer. The two probably go to all the same meetings, but they have completely different responsibilities and perspectives.
If we had recruited the respondent with the title “Senior Machine Learning Engineer,” expecting them to have that job, it would have been a huge waste of money and time.
On the very same study, another respondent had the title “Managing Director and Global Head of Cloud Native Core Banking Integration.” Quite a mouthful, but I get the idea: he should be responsible for implementing cloud integrations in certain banking applications. His responses, however, revealed that he was actually his company’s Chief AI Officer; they just hadn’t changed his title! Again, a job that could hardly be more different from what his title would suggest.
Simply asking for a title isn’t enough, particularly in a rapidly changing and complex field like technology. Unlike healthcare, where titles typically tell you exactly what someone does, tech titles might get you in the ballpark — or they might land you in the football field on the other side of town. As such, the screener needs to go beyond what’s on the business card to reveal the respondent’s actual skills, knowledge, and experiences to make sure they’re the right fit for the study.
Misinterpretation Kills
Complex tech studies often require respondents with highly specific technical experience — the kind that can’t be faked, approximated, or easily found. Here’s a specific example.
For another recent study, we needed to interview key opinion leaders in Confidential Computing — a specific technology that creates Trusted Execution Environments where encrypted data can be processed without decryption. Sounds super niche, right? It is.
We estimated there were maybe 250 people globally who could legitimately participate in our research, and we needed to find 25 of them. So how does one go about writing a screener to achieve that tall task? You can’t screen on title alone because, as we established earlier, titles are virtually meaningless. You can’t come right out and ask respondents if they have experience in Confidential Computing, because almost nobody does. And yet almost everyone will say they do, mistaking this very specific technology for a more general familiarity with data confidentiality, which is top of mind for everyone these days. But Confidential Computing is not data confidentiality. Not the same thing at all.
But rather than dwelling on what can’t be done, let’s focus on what we’ve found works when it comes to writing screeners for tech research. In our years of specializing in B2B tech research, we’ve identified three “secrets” to successful screening. Here they are.
Secret 1: Don’t screen people in. Screen people out.
Typically, recruiters and researchers use screeners to screen people in: finding respondents who meet the study’s criteria. Instead, think of the screener as a tool for screening people out: identifying respondents who don’t have the specific, relevant skills, experience, and knowledge you need, and crossing them off the list. To do this, however, the screener’s questions must leave zero room for misinterpretation, such as confusing Confidential Computing technology with general data confidentiality.
This intolerance for misinterpretation leads us straight to the second secret.