AI x Customer Research - September '25

Testing an AI-embedded browser for research uses, a 4-month roadmap to adopt AI meaningfully, and more...

Read time: 16 minutes

Adopting AI isn’t easy - even a few years in.

This edition was going to be focused entirely on the new Dia browser and its potential uses in customer research.

So there’s that - but then I heard this over and over in August and September:

"We've been using AI for a while, but everyone in the team still uses it differently - and we’re still not sure what works best."

I knew I had to cover that.

There’s a bit of a trend happening right now: teams that have been actively dabbling in AI - well past the beginner stage - are still not seeing the magic happen.

If you’re leading or even just quietly championing more effective AI use across your research, product, or design team — and you’re noticing wildly different adoption patterns — I have a roadmap for you.

Let’s dive in.

In this edition:

  1. 🖥️ 5 research tasks I found easier in the new Dia browser. Many tests led to a few use cases I preferred in Dia vs. an LLM interface (+ warnings).

  2. 🗺️ My 4-month roadmap to solid AI adoption. The steps to find AI workflows that work for your team and the use cases where you’ll get the most from it.

  3. 📰 AI News: More memory across chats could get messy for customer insights work. Memory is cool until it feeds your analysis with another project’s context. What to know.

WORKFLOW UPGRADES

📝 5 research tasks I found easier in the new Dia browser

I got early access to The Browser Company’s new browser Dia (with built-in AI), and as a long-time Arc fan, I was excited to see what they’d do with AI.

I tested a whole bunch of customer research tasks and found 5 that were actually easier in Dia — mostly because of how Dia’s browser-native setup reduces switching costs by letting you prompt while pulling from open tabs.

Below the 5 tasks that worked for me, you’ll find some additional warnings + notes worth knowing about Dia’s AI setup.

1. Improving presentations

If you run internal workshops or present findings in slide decks, you can use Dia’s AI functions to quickly pull your slide deck tab into the prompt chat and get improvements in seconds.

Quick intro: There are two ways to chat with Dia AI - (1) in a new tab where your “search” bar is also a prompt input field, and (2) in a sidebar within the tab where the content you’re working with is. You’ll see both in this clip 👇

2. Fact-checking claims

Got a stakeholder saying “users just want X”? Drop their quote into a new Dia tab. It can search across your open tabs of data, documents and insights presentations and return a clearer summary or validation much faster than dragging everything into a new GPT session.

3. Building a prompt library (with slash commands)

Dia lets you save reusable prompts as custom shortcuts (called “Skills”). For example:

  • Create a skill called /review-mining

  • Store your best prompt for mining customer reviews

  • Use it anytime, on any page or open tab, just by typing /review-mining

It’s like TextBlaze (I mentioned that tool ages ago), but native to the browser and easier to manage.

4. Auto-summarizing your work

Maybe you need to prove to your manager that you were productive this week. Or just want to double-check progress toward your own targets, like me. Dia can pull from your browser activity and summarize it any way you tell it to. I used it to create a weekly update that:

  • Mapped to my research to-do list

  • Created a 1-paragraph summary

  • Noted weekly targets checked off in bullets

Inspired by this Dia Skill: Daily Wrap

5. Comparing tools side-by-side

If you’re testing a few tools for research, and you’ve opened a bunch of landing pages in your browser, Dia helps you quickly compare -

  • Pull in info from all open tabs (no copy-pasting info or links into an LLM)

  • Ask for a TL;DR comparison

  • Zero back and forth between landing pages, context explanation and results

⚠️ Tip: If you’re used to writing long, elaborate prompts in LLMs for a comparison task like this one, they won’t typically work here. Short wins. See the prompt I used in the video (8 different longer variations all generated errors!).

〰️

Which model is Dia using?

Dia refers to GPT-5 and GPT-Thinking within your Skills library, where you can choose which model to use for each Skill (e.g. prompt template) you save.

A few warnings…

  • Dia’s memory is ON by default. It clearly states that it stores your site visits, chats, and preferences on its servers. Proceed with caution (especially with PII or sensitive research material).

  • No visual generation. It won’t create charts or visuals the way ChatGPT or Claude can — so where you need to create graphs, mockups, or images, you’ll want to stick to your LLM interface.

〰️

Want to try Dia yourself?
👉 I’ve got invites for a few of you to skip the waitlist here

AI FUNDAMENTALS

🗺️ My 4-month roadmap to solid AI adoption

I’ve spent 2024-2025 supporting clients with training and longer-term adoption support, so I’ve seen the full range of issues teams hit when getting AI on board and trying to get real value from it.

But most of the blockers come from the same problematic patterns - things that can be easily fixed if you have the right steps in mind.

I created this roadmap to help research and design teams go from scattered experimentation to consistent, reliable AI workflows. (It’s based on working with all of those teams over the last 2 years.)

The Roadmap Overview -

  • Audit where AI can actually matter

  • Make prompting a priority (really - learn how to do it well)

  • Track things until you see consistency

  • Define human checkpoints

  • Run “bake-offs” on real work

  • Create shared playbooks

  • Build evidence trails people can audit

  • Score consistency with a simple rubric

  • Create a forum for comparison and discussion

This takes you from “we’re trying a few things” to “we have AI-backed systems that make us faster and better and the proof to back it up”.
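If you want to make the rubric step concrete, here’s a minimal sketch of what scoring AI output consistency could look like in practice. Everything in it - the criteria, the point scales, and the passing threshold - is a hypothetical example, not something prescribed in the roadmap; swap in whatever dimensions your team actually cares about.

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions - adjust criteria and scales to your team's needs.
@dataclass
class RubricScore:
    accuracy: int      # 0-2: output matched the source data
    completeness: int  # 0-2: covered the themes a human reviewer found
    format: int        # 0-1: followed the requested output structure

    def total(self) -> int:
        return self.accuracy + self.completeness + self.format

def consistency(scores: list[RubricScore], threshold: int = 4) -> float:
    """Share of AI runs that meet the rubric threshold."""
    passing = sum(1 for s in scores if s.total() >= threshold)
    return passing / len(scores)

# Three runs of the same task, scored by a reviewer:
runs = [RubricScore(2, 2, 1), RubricScore(1, 2, 1), RubricScore(2, 1, 0)]
print(round(consistency(runs), 2))  # 2 of 3 runs pass -> 0.67
```

Tracking a number like this over a few weeks is what turns “we’re trying a few things” into evidence you can actually show stakeholders.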

📍 Get the full roadmap + details here:
Adopting AI That Works for Customer Research (in 4 Months)

AI NEWS

📰 More memory across chats could get messy for customer insights work.

Both Claude and ChatGPT have recently rolled out expanded memory capabilities:

  • Claude’s memory is now live for Team and Enterprise users. It remembers past conversations, adapts to your projects over time, and can carry context across chats automatically. You can see what it remembers, edit it, and turn on “incognito mode” if you don’t want it storing a conversation. (Source)

  • ChatGPT’s updated memory (from August’s GPT-5 release) works similarly — gradually learning preferences, remembering your name or goals, and influencing future chats even if you don’t explicitly reference past ones. It can be toggled on/off in Settings > Personalization (and you can even see all the specific context it has saved from your chats).

Why it matters

We want AI to know what we’re working on. But in research, we often need tight boundaries between projects: different data sets, hypotheses, and stakeholder inputs all need to be kept separate.

With memory, context can bleed across chats in ways that are hard to spot.

〰️

How memory works (in both ChatGPT and Claude)

  • Memory is stored at the user level, not per chat

  • It silently influences new chats, even if you didn’t reference past ones

  • It uses implicit memory — it may remember something you didn’t ask it to remember

  • You can turn it off globally or for one chat (use “incognito”)

  • You can view and edit what it remembers in your settings (Settings > Personalization > click the “Manage” button)

  • Deleting a chat does not delete memory it saved from that chat

〰️

What we can do

Some quick ways to stay in control:

  • Use incognito chats if keeping things separate is more important than shared continuous knowledge

  • Regularly delete or review memory before starting a new research task

  • Label projects clearly in prompts, so you can spot when context is leaking

  • Verify outputs — especially if something sounds too familiar

Memory is helpful, until it isn't. If you’re seeing signs of spillover from previous studies, data, or context you’ve provided, consider that memory might be the culprit.

Pssst - you always get a course discount!

The November cohort of my AI Analysis course is open for enrollment.

So if you’re -

  • wishing you could cut analysis time measurably, but…

  • feeling like anything coming from AI could be hallucinated…

  • and spending more time fixing AI outputs than you save by using it…

We fix all of that in the course (also, it’s rated 4.8/5 on Maven).

Plus, the best things about it:

  • You get 1:1 help from me privately, on your own use-case-specific challenges

  • There’s so much content - it will continue guiding you for months after our cohort

  • …and you’ll have updated content for 6 months!

More details here 👇

It’s almost Q4! See you next month.

-Caitlin