9 Tips for Using AI Safely in Market Research

There’s been plenty of hype around Artificial Intelligence (AI) and how it’s going to transform our world — including the world of market research. What’s lacking is practical guidance for using AI in our daily work as researchers. For the next series of Thinkpiece blog posts, we’re focusing on less hype-ful and more helpful AI insight.

For the first post in this series, we’re sharing our top tips for using AI safely. True, AI offers the potential to help make research more efficient and effective. But AI technology also comes with a host of potential risks, particularly when it comes to data protection and privacy. After all, AI apps are designed to collect as much information (including sensitive info) as possible to keep learning and becoming “smarter.”

Before you invest in that shiny new AI app or tool, here are nine basic safety practices to keep your own data, as well as your clients’ and respondents’ data, secure and protected.

1. Choose Your AI Apps Wisely

Before you start using any AI app or tool, take some time to research the company behind it. You’ll want to make sure the developer is legitimate, offers good customer support, and has a solid reputation, particularly around security.

Avoid fly-by-night and sketchy apps, and do some digging into the tools you’re considering. Check out the company’s portfolio to see what they’ve done before and the quality of those products or services, keeping a lookout for any bad press or reviews. Make sure the company is well-versed in the latest AI and machine learning frameworks and is clearly an expert in the field. Last but not least, scrutinize their commitment to security (see tip 3).

2. Don’t Overshare

Assume that any data you share with an AI app will be used to improve its machine learning and is at the mercy of the app developer, which means you should also assume this data is vulnerable to breaches.

To that end, avoid sharing any personal, confidential, or sensitive data — especially your clients’ or respondents’ info. For example, any proprietary client information, such as a new product that’s being held under wraps, should definitely not be shared with any AI app or tool. A general rule of thumb we follow: if you don’t want it blasted on social media, don’t share it with an AI app. And if you come across AI-generated content that requests sensitive information, run the other way.

You might also want to invest in additional security tools designed to prevent oversharing of confidential and sensitive data. LLM Shield, for instance, is designed to stop sensitive information, such as your clients’ or respondents’ personal data, from being passed to large language models (LLMs) like ChatGPT.
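To make the idea concrete, here’s a minimal Python sketch of a pre-send filter in the same spirit (our own illustration, not how LLM Shield actually works): a few regular expressions scrub common PII patterns, such as emails, phone numbers, and Social Security numbers, from a prompt before it ever reaches an AI app.

```python
import re

# Illustrative only: a minimal pre-send filter that redacts common PII
# patterns before a prompt is sent to an AI app. These regexes are
# simplified assumptions for demonstration, not a production-grade
# PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

# Example: the respondent's email and phone number never leave your machine.
raw = "Respondent jane.doe@example.com (555-867-5309) disliked concept B."
print(redact(raw))
# -> "Respondent [REDACTED EMAIL] ([REDACTED PHONE]) disliked concept B."
```

The key design point is that the scrubbing happens on your side, before anything is transmitted, so the AI app only ever sees the redacted version.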

3. Peruse the Privacy Policy

Reading the fine print may be a pain, but it’s important to know what you’re signing up for. So be sure to peruse the privacy policy and terms and conditions for any AI app you’re thinking about using. This will tell you how the app plans to collect, store, protect, and use any data you share with it.

As previously mentioned, most apps use the data you share to make themselves “smarter.” But the app’s developers may also be exploiting data for other purposes, such as selling personal information to third parties who want to completely freak you out with personalized ads. The app’s privacy policy should give you confidence that the developers won’t do anything dubious with your data and are taking thorough measures to protect your info from hackers.

4. Customize Your Security Settings

After carefully reviewing the app’s privacy policy and deciding you’re comfortable with it, don’t stop there. Go into the app’s privacy and security controls and change any settings to meet your preferences.

For example, the app may give you the option to have data automatically erased after a certain amount of time, or you may be able to choose to delete the data yourself manually. You might also be able to review and erase search histories and clear conversations to delete anything you and the AI app might have “chatted” about.

5. Train Your Team

It’s a good idea to formalize your own security policies and practices for using AI apps and tools, as well. Guidelines can cover a wide range of topics, including how the app should be used in alignment with your or your clients’ values; complying with laws related to data privacy; identifying and addressing potential AI biases; reducing risks of exposing or mishandling sensitive data; the role of humans in ensuring AI app safety; and steps to take in the event of AI-related errors or disputes.

Train your team and anyone who will be using the AI app on these policies, practices, and guidelines to make sure they’re understood and followed.

6. Keep Up with Compliance

It’s a good idea to familiarize yourself and any other users on your team with current compliance and regulatory requirements around the use of AI. The Health Insurance Portability and Accountability Act (HIPAA), for example, imposes strict rules around protecting patient information and privacy, which could easily be violated when using AI. If you’re conducting research in Europe, you’ll want to be aware of any potential General Data Protection Regulation (GDPR) issues when using AI tools.

Compliance and regulations around the use of AI are ever evolving, so it’s important to stay on top of these changes to protect yourself as well as your clients and respondents.

7. Follow Good IT Security Practices

The growing prevalence of AI makes robust IT security protocols and hyper-vigilance even more critical, now that the attack surface is much larger. Keep following the usual security best practices, such as creating strong passwords for all apps and websites, keeping all your software up to date with the latest versions, and using reputable anti-malware and anti-virus software.

This last point is especially important since it’s now possible to create malware that observes what you enter into an LLM like ChatGPT and then sends that information to a malicious actor for the purpose of stealing sensitive data. It’s also possible to use prompts engineered to “hypnotize” or trick an AI tool into doing things it normally wouldn’t, such as compromising the user’s data or producing incorrect or malicious responses.

These hypnotized AI apps can be used in phishing attacks to steal data from a user who thinks they’re interacting with a reputable source. Following good IT security practices will help protect you against this new crop of AI-enabled hacking.
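One practical habit that follows: treat AI-generated text as untrusted input, just like email from a stranger. As a rough illustration (the allowlist and the simple URL matching here are our own assumptions, not a standard defense), this Python sketch flags any link in an AI response that points outside the domains you trust, before anyone clicks it.

```python
import re
from urllib.parse import urlparse

# Illustrative only: treat AI output as untrusted input. This sketch flags
# any URL in a response whose domain is not on an allowlist. The allowlist
# is a made-up example, and the URL regex is deliberately simplistic.
TRUSTED_DOMAINS = {"example-insights.com", "gov.uk"}

URL_RE = re.compile(r"https?://\S+")

def suspicious_links(ai_response: str) -> list[str]:
    """Return URLs in the response whose domain is not on the allowlist."""
    flagged = []
    for url in URL_RE.findall(ai_response):
        domain = urlparse(url).netloc.lower().split(":")[0]  # drop any port
        # Flag anything that isn't a trusted domain or one of its subdomains.
        if not any(domain == d or domain.endswith("." + d) for d in TRUSTED_DOMAINS):
            flagged.append(url)
    return flagged

response = "Download the full report at http://evil-example.net/report.pdf"
print(suspicious_links(response))  # -> ['http://evil-example.net/report.pdf']
```

A check like this won’t catch a cleverly hypnotized model on its own, but it’s the same layered thinking behind the rest of your IT security stack: never act on AI output without validating it first.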

8. Remember: AI Isn’t Perfect

Far from it, in fact. AI algorithms are subject to bias and prone to “hallucinating” false information, which can lead to inaccurate, unfair, and even offensive results, all of which we definitely want to avoid in market research. Case in point: you might have read about the New York-based law firm that used ChatGPT for legal research and submitted a filing that referenced six completely fabricated legal cases, complete with bogus decisions, quotes, and internal citations, putting the firm in legal jeopardy as a result.

Which is why it’s critical that you don’t rely on or make decisions based on AI alone. Have an actual human review, vet, and confirm any content or results generated by an AI app, paying close attention to potential inaccuracies, prejudices, or outright lies.
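What that review step looks like will vary by team, but here’s a minimal, purely illustrative Python sketch of the principle: AI-generated content is blocked from release until a named human has signed off. The Draft class and its fields are our own invention, not any particular tool’s API.

```python
from dataclasses import dataclass

# Illustrative only: a tiny human-in-the-loop gate. Nothing marked as
# AI-generated can be released until a named human reviewer signs off.
@dataclass
class Draft:
    content: str
    source: str = "ai-generated"
    approved_by: str | None = None

def release(draft: Draft) -> str:
    """Refuse to release unvetted AI-generated content."""
    if draft.source == "ai-generated" and not draft.approved_by:
        raise ValueError("AI-generated content requires human sign-off")
    return draft.content

summary = Draft(content="Segment A prefers concept 2 by a wide margin.")
summary.approved_by = "j.researcher"  # a human vetted the claim first
print(release(summary))
```

Even a lightweight gate like this makes the human review step a required part of the workflow rather than an optional afterthought.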

9. Just Keep Learning

One lesson we can glean from AI: the more we learn, the smarter we get. If you’re interested in leveraging AI for market research, we recommend learning as much as you can about the subject before you invest in and start using any AI tools or apps.

One good source of knowledge is Amazon, which recently launched its “AI Ready” initiative with free courses that teach you how to use generative AI (GenAI) to create text, images, and other media. Think ChatGPT and DALL-E. We just completed Amazon’s Generative AI Learning Plan for Decision Makers course, and recommend it for anyone interested in understanding more about using GenAI in their organizations.

You can also schedule a no-cost tutorial session with our in-house AI expert and Director of Technology Research, John Dibling, and download our free resource guides, including “Understanding AI & Why It Matters for Market Research” and “8 Tips for Using ChatGPT in Market Research,” here.

While one of the aims of AI is to simplify our lives, in reality it’s added a lot of complexity. And that complexity is constantly changing. We’re here to help you safely leverage AI as an effective, collaborative market research tool — human to human. Let us know if you have questions or are looking for more guidance.