AI x Customer Research - August '24
Build a Custom GPT, 6 Low-risk Data Sources for AI testing, and more
Read time: ≈11 minutes
Welcome back!
There’s one thing I’ve been doing for ≈1 year that I haven’t done justice in this newsletter yet.
Solid prompting skills are certainly foundational, but on an average weekday, custom GPTs have done more to streamline my research workflows.
If you haven’t already created Custom GPTs to handle repetitive research tasks for you on almost-autopilot, I hope this will be a big help. I made a few videos walking you through how I created and use my “Bias Hunter” GPT so you can follow the steps to make your own in 1.5 hours (or less).
Plus, I share a favorite hack for repetitive prompting in ChatGPT, so you’ll never have to copy-paste prompt pieces again.
Lastly, I mentioned last time that my inbox is full of AI data privacy questions. Many of them come from people who aren’t testing much on their own for fear of LLMs abusing private data. I created a guide to help you keep testing AI with less data risk, so fear doesn’t keep you from learning.
Here’s the August edition -
In this edition:
🦾 Creating a Custom GPT (Tutorial). Key steps and tips for creating your first (or better-than-ever) custom GPT for research tasks.
💫 Stop manually writing the same prompts all the time. The tool I use to automatically fill in parts of prompts I use daily/weekly.
💆 6 Worry-free data sources for AI testing. If you’re struggling to learn faster because you don’t know which data to use, this is for you.
📰 News: LLM model updates that can make AI more secure, and more.
Keep reading! 👇
WORKFLOW UPGRADES
🦾 Creating a Custom GPT
It took me just 1 hour and 10 minutes from start to finish to create a Custom GPT I called Bias Hunter.
To do the full process yourself, I estimate you’ll need a little more time (≈1.5 hours) to make your own, even if it’s your first time.
Start off right: Choose a task where a custom GPT could save you many hours per week: something you do often, that’s time-consuming, or that takes mental energy. (But obviously, don’t offload your biggest, riskiest decisions to AI.)
The videos: This is a tutorial where I go through -
How my GPT works
The prep work you should do to set up for success
Writing instructions for your custom bot
Conditional instructions for handling multiple different cases or protocols in one set of prompts
Testing and revising your instructions for better results
Plus bonus tips you might not have tried, even if you’ve done this before

Access the Tutorial videos HERE.
You can also copy my exact GPT instructions there to start testing Bias Hunter yourself.
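If you’re wondering what conditional instructions look like in practice, here’s a simplified, hypothetical example (not my exact Bias Hunter instructions) of how one GPT can cover multiple cases:
If the user pastes an interview transcript, scan it for leading questions and flag each one with a short explanation.
If the user pastes draft survey questions, check each question for loaded wording and suggest a neutral rewrite.
If the input doesn’t clearly match either case, ask what the user wants reviewed before doing anything else.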
PROMPTING PLUS
⚡️ Skip the copy-pasting of frequently-used prompt parts
I reuse a lot of full prompts, and even more prompt pieces - contexts like the type of project I’m working on, or a description of my audience profile. So I started using TextBlaze to fill those in for me.
When to use this: You’re used to adding the same essential prompt pieces to your prompts all the time. You’re constantly copy-pasting. If you regularly use the same types of information (the same role, the same audience description…), this will save you some sore fingers.
As an Arc browser user, I find it a bit annoying that TextBlaze is a Chrome browser plugin - but I still switch to Chrome just to use it.
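If you’d rather script the idea than rely on a browser plugin, here’s a minimal sketch of the same concept in Python. The role, audience, and task below are made-up placeholders, and this isn’t how TextBlaze works under the hood; it’s just the reusable-pieces idea in code form:

```python
from string import Template

# Hypothetical reusable prompt pieces -- swap in your own role, audience, and project context.
ROLE = "You are an experienced customer research analyst."
AUDIENCE = "B2B SaaS product managers at companies with 50-500 employees"

PROMPT = Template(
    "$role\n"
    "Our audience: $audience\n"
    "Task: $task"
)

# Reuse the same pieces for any new task without retyping them.
print(PROMPT.substitute(
    role=ROLE,
    audience=AUDIENCE,
    task="Draft five non-leading interview questions about onboarding.",
))
```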

DATA PRIVACY
💆 6 Worry-free data sources for AI testing
I get a lot of messages from people in one of two groups. They’re desperate to test AI more than they have so far, but they’re stuck, because -
A. Their employer still doesn’t allow them to use AI for many tasks at work, or
B. They’re too scared to use their team’s real data, for fear that releasing the information into the AI void might result in misuse: making it publicly accessible to competitors, using it to train the AI… 🙅‍♀️
As a skeptical optimist, I believe we need to make room for AI experimentation, just not at the cost of data abuse. I’ve been putting together sources of safer data we can use to run worry-free tests of AI tools and processes.
Here’s the quick overview of data sources:
Publicly available data sets
Podcast transcripts to simulate interviews
Social media comments and reviews
Data from online communities and forums
Internal meetings for simulated interviews
Anonymized survey results
See my guide for how to get data from these sources, how to test with them, and which tools to use for the process.
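On that last point (anonymized survey results), here’s a rough sketch in Python of the kind of pre-processing I mean. The column names are hypothetical, so adjust them to match your own survey export:

```python
import csv

# Hypothetical identifier columns -- edit this set to match your survey export.
PII_COLUMNS = {"name", "email", "phone", "ip_address"}

def anonymize_survey(in_path: str, out_path: str) -> None:
    """Copy a survey CSV, dropping columns that look like direct identifiers."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c.lower() not in PII_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept)
        writer.writeheader()
        for row in reader:
            writer.writerow({c: row[c] for c in kept})

anonymize_survey("survey_raw.csv", "survey_anonymized.csv")
```

(Dropping obvious identifier columns isn’t full anonymization: free-text answers can still contain names or details, so treat this as a starting point rather than a guarantee.)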
AI NEWS
Trend: Efficient, smaller models are driving a shift to on-device AI…
📱 Gemma 2 2B: A Lightweight AI Powerhouse
Gemma 2 2B was released this month with just 2.6 billion parameters (relatively small), yet it outperforms larger models like GPT-3.5 on key benchmarks. It’s a strong signal from Google that small models are becoming efficient enough to run AI directly on devices.
Why this matters: On-device AI enhances data privacy and security, allowing sensitive information to be processed locally without cloud dependency. That matters for customer research, or any work tasks where we’re using customer data we’re responsible for protecting and respecting.
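If you want a feel for what running a small model locally looks like, here’s a minimal sketch using the Hugging Face transformers library, assuming you’ve installed transformers and torch, accepted the Gemma license on Hugging Face, and have a machine that can comfortably hold a ~2.6-billion-parameter model:

```python
from transformers import pipeline

# Loads Gemma 2 2B (instruction-tuned) locally; prompts and data stay on your machine.
generator = pipeline("text-generation", model="google/gemma-2-2b-it")

prompt = "List three themes you'd expect in customer feedback about a confusing onboarding flow."
result = generator(prompt, max_new_tokens=200)

# The pipeline returns the prompt plus the model's continuation.
print(result[0]["generated_text"])
```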

Photo credit: Google
〰️
🛠️ Meta Launches AI Studio for U.S. Creators
Meta‘s AI Studio now allows U.S.-based users to build personalized AI chatbots.
Why this matters: This tool could be leveraged to create customized research assistants (just like the one I created in the Custom GPTs tutorial above) to streamline your repetitive research tasks. It’s nice to have another option, especially if OpenAI/ChatGPT isn’t the LLM company you want to work with (see this edition for more on that).
〰️
🔬 Sakana's AI Scientist: Automating Scientific Research
Sakana AI introduced the world’s first AI Scientist capable of designing, conducting, and reporting experiments autonomously. The AI Scientist was even behind multiple published research studies (see one of them here).
Why this matters: Something like the AI Scientist could be applied to corporate customer research in the future. If it can automate the design of more complex experiments and data analysis, it has real potential to give teams without an experienced researcher the ability to run studies they used to need an expert for. (Yikes?) Sounds scary right now, but it’s quickly becoming possible.
🛠️ Artifacts Now Available for All Claude.ai Users
Anthropic has rolled out Artifacts for all Claude.ai users (even on the Free plan), enabling users to create and collaborate on projects across both web and mobile apps. When using Artifacts, you can organize your Claude work into clearer projects and interact with it in a split view: your prompts on the left, and the output (code, text, or visuals) updating in response to your feedback on the right.
Why this matters: Artifacts enhance the collaborative potential of Claude.ai in customer research by allowing teams to quickly generate, iterate, and share work outputs like diagrams, dashboards and (perhaps most interestingly) prototypes for research.