AI x Customer Research - July '24
AI for Analysis, the big LLMs' privacy policies, and a tool for comparing LLMs yourself
Read time: < 15 minutes
Welcome back!
As promised, it was finally time for a big topic this month: AI Analysis. But there are too many research methods to test analysis on all at once, so I started with qualitative analysis this month.
Part Two with Quantitative AI Analysis is coming soon!
Lately, my inbox has been packed with AI data privacy questions, like, “How are LLMs using the data I upload and input?” We’ll talk about that this month, too.
Then there’s a way for you to find the best LLM for your needs, faster, and a few news highlights.
I hope you can learn quickly from this edition (and then have some vacation!). 🏖️
Let’s do this!
In this edition:
🧐 AI Analysis Part I (Qualitative): Is AI analysis accurate and faster, or does it create more time-consuming checkup work?
⚖️ Figure out which LLM is best for you by testing models simultaneously. A tool that lets you try prompts and compare LLM responses side by side.
🗂️ How the big LLMs are using your data. I dug through their legal pages to figure out how they’re using your prompts and uploads.
📰 News we should know for customer research.
Plus links to a few supplementary tools on my radar… 👇
WORKFLOW UPGRADES
🧐 AI Analysis: Is it worth it?
As a researcher, I’m protective of the analysis process. At least a handful of my research colleagues have told me lately that they feel like AI analysis tools are stepping on their turf. But…is it worth using for the speed, or do we compromise depth and accuracy?
My verdict: It’s an almost-guaranteed speed hack, if you choose the right tailor-made research analysis tools (and not just ChatGPT).
I tested 10 platforms in total, but only 6 analysis tools made the cut this time; many of the rest are still in beta and not quite ready for reliable use at work.
The platforms: Reveal, Dovetail, Notably, GreatQuestion, User Evaluation, and Collectif.
Just like the note-takers tested last month, each tool here offers different output formats for different uses, plus different levels of depth.

Thinking of choosing an AI analysis tool? The time-saver functions I think you should look for:
Automatic conversion of session notes into sticky notes (helps with doing your own analysis)
Auto-tagging and clustering notes
Ability to toggle the level of theme detail to get the right level for your specific work
Chatting with the data to ask specific, targeted follow-up questions that you may not have explicitly asked in the session, but for which the transcripts may hold evidence
Chatting with the data to quickly pull out quotes for a specific topic
Logging notes/highlights in a table that you can export for use in other ways
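To make the auto-tagging and clustering idea concrete, here is a minimal, hypothetical sketch of how a tool might group interview notes by theme. The tag names and keyword rules are invented for illustration; real analysis tools infer themes with LLMs and embeddings rather than keyword matching:

```python
from collections import defaultdict

# Hypothetical tag rules for illustration only.
TAG_KEYWORDS = {
    "pricing": ["price", "cost", "expensive", "budget"],
    "onboarding": ["signup", "setup", "tutorial", "first use"],
    "support": ["help", "support", "ticket", "docs"],
}

def auto_tag(note: str) -> list[str]:
    """Return every theme tag whose keywords appear in the note."""
    text = note.lower()
    return [tag for tag, words in TAG_KEYWORDS.items()
            if any(w in text for w in words)]

def cluster_notes(notes: list[str]) -> dict[str, list[str]]:
    """Group notes under each tag (a note can land in several clusters)."""
    clusters = defaultdict(list)
    for note in notes:
        for tag in auto_tag(note) or ["untagged"]:
            clusters[tag].append(note)
    return dict(clusters)

notes = [
    "The setup tutorial was confusing on first use",
    "Too expensive for our budget",
    "Support docs answered my question quickly",
]
print(cluster_notes(notes))
```

The exportable table of notes and highlights mentioned above is essentially this same structure flattened into rows of (note, tag) pairs.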
And yes, I do have a favorite so far.
Despite not being the most visually appealing, Reveal saved me the most time. It offered accuracy, depth, reliable matching of participant answers to research questions, and the ability to chat with the data to dig deeper into specifics and gather proof points fast.
PROMPTING PLUS
⚖️ Compare prompts and outputs to find the best LLM
I’ve been using Nailed It to compare the outputs that various LLMs generate based on a single prompt.
When to use this: You’re not a prompting master yet, but you’ve written a fairly clear, conversational “project brief” that should be usable in an LLM. You just don’t know which LLM could give you the kind of results you’re hoping for, and you don’t want to sign up for all of them at the same time.
Note: prompting is a skill, and learning to prompt well has a big effect on whether your prompts will generate something worthwhile in any LLM. But regardless of skill level, you can test simple conversations and requests side by side in Nailed It to see how LLM responses differ.

A view of the prompt/uploaded data (left) and two LLMs’ outputs in response.
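Under the hood, a side-by-side comparison tool essentially fans one prompt out to several models and lines up the responses. A minimal sketch of that pattern, with placeholder functions standing in for real LLM API clients (none of the names below are Nailed It's actual API):

```python
from typing import Callable

# Placeholder "clients": in a real setup each would call an LLM API.
def call_model_a(prompt: str) -> str:
    return f"[model-a] summary of: {prompt}"

def call_model_b(prompt: str) -> str:
    return f"[model-b] bullets for: {prompt}"

def compare(prompt: str, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Send one prompt to every model and collect responses keyed by name."""
    return {name: call(prompt) for name, call in models.items()}

results = compare(
    "Summarize the top 3 onboarding pain points from these interviews.",
    {"model-a": call_model_a, "model-b": call_model_b},
)
for name, reply in results.items():
    print(f"--- {name} ---\n{reply}\n")
```

The value of a tool like this is that it handles the accounts, API keys, and layout for you, so you only write the prompt once.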
DATA PRIVACY
🙅 How LLMs are using your input
I dug through the T&Cs and privacy policies of the major LLM providers to figure out what they say about using your input. (And yes, I did this manually, then got AI’s “perspective”.)
Here’s the summary:
Both Anthropic (Claude) and Google (Gemini) seem more committed to data privacy and safety than OpenAI (ChatGPT): their use of your data is more restrictive and more transparent, and they train on it in fewer cases.
Anthropic and Google specify that they save your prompts and data in cases of safety and security issues (ex: hate speech).
Google is the only one to clarify its handling of your voice data: they say they do not store voice data on Google servers.
Microsoft’s Azure OpenAI Service is clearly the most private option, allowing teams to run OpenAI models in their own instance and ensuring that none of your private/confidential data is sent back to OpenAI.
They’re all somewhat vague on how long they store your data.
See how they compare on a few more privacy checks.

AI NEWS
Trend: Desk research seems to be getting some major upgrades across the board…
〰️
⏱️ OpenAI releases SearchGPT prototype for real-time data and more accurate answers
OpenAI is testing SearchGPT, a prototype that boosts search with AI. It gives quick answers from the web, pairing conversational AI with up-to-date information so users get clear, relevant replies.
Why this matters: SearchGPT could greatly improve desk research by providing accurate answers with live web data. Unlike ChatGPT, it offers real source citations and up-to-date information.
〰️
🔀 Gemini gets an upgrade with high speed and “related links”
Gemini's free plan now uses Gemini 1.5 Flash - making the model faster than ever - and provides related links to your queries.
Why this matters: You'll see "related links" with your results to help you learn more about a topic or fact-check the results more easily.
Coming soon: they’re adding the ability to connect Google Drive files for immediate document analysis and visualization from data.
Plus…
I’m playing with Email Whisperer to speed up writing personal responses in back-and-forth emails with research participants
Jelled.ai is on my list, too, for writing “informed emails”, plus providing insights from my inbox
Mermaid AI might replace my (still mostly manual) diagramming tasks where I plot customer journeys or steps to complete tasks as reported in research…
WHAT’S COMING NEXT?
Here’s what I’ll share in the next few editions -
AI Analysis Part II: Quantitative is in the works 📊
CustomGPTs for research: a tutorial for creating your own
My process for anonymizing everything I put into LLMs when using a personal account
Moving AI moderators to the end of the year, but still testing in the background
…and more!
As always, thanks for being here.
If you have specific questions you want me to cover, you can ask them personally or anonymously at any time here.
Until next time,
Caitlin