Keep Thinking
The AI you reach for says more about you than you think
During this year’s Super Bowl, Anthropic took a swipe at ads in OpenAI’s ChatGPT with a clean, simple message:
“Ads are coming to AI. But not to Claude.”
“Keep thinking.”
An elegant, almost philosophical reflection of Anthropic’s ethos (and, ironically, an ad) that suggests depth. Patience. Deliberation. A model designed for reasoning rather than rapid output. In a crowded AI landscape with plenty of slop, it’s a claim worth questioning. Because if you think enough about how people actually use these tools, a slightly different picture emerges.
More than their stated purpose, the tools themselves evoke a behavior, a state of mind, an intent. The most popular tools don’t operate in identical spaces; they’re tasked with different problems. And while they all change and grow along with the ecosystem, they do so irregularly, and faster than most of us can keep up with.
The defining difference between AI platforms comes down to when people reach for them. Each tool has begun to capture a particular moment in our thinking — the point where someone decides they need something. The same person can work through a strategy problem first thing in the morning and ask a random medical question at 2 p.m. The personas aren’t necessarily user types. They might be states of mind that tools are optimizing to serve. It’s not just “who do you want to be” but “which version of yourself are you reaching for right now?”
“Keep thinking” is lofty precisely because, when we turn to AI, we frequently aren’t thinking. The ad isn’t describing behavior. It’s making an identity play.
ChatGPT: The Everyman
The ChatGPT user is in motion, physically and mentally. They’re juggling tasks, finishing something, then starting something else.
ChatGPT pushes things forward. A paragraph becomes clearer, a research article becomes a summary, a messy idea becomes a slide outline. Prompts cluster heavily around writing assistance, information retrieval, and practical guidance. It’s the behemoth on the scene, despite the occasional scandal or disappointing release.
It’s someone trying to get things done. In meeting that need, ChatGPT is often painted as a yes-man: complimentary and performatively helpful, even when the results don’t meet expectations. The company recently announced it would retire models known for “teaser-style phrasing” and overly cautious responses, describing a version of ChatGPT “designed for adults” that treats users “like adults.” The reputation has traveled all the way back to the company’s own all-hands meetings. Whether that recalibration lands is another question.
Meanwhile, OpenAI is preparing for a potential IPO, consolidating its browser, ChatGPT app, and Codex coding tool into a single desktop superapp. The move makes the positioning explicit: it’s no longer just a chatbot. It’s a productivity platform. OpenAI reportedly now has over 900 million weekly active users and is “orienting aggressively” toward high-compute enterprise use cases. ChatGPT still has an entrenched position as the default AI tool for millions of people — whether it’s the best at any one task or not — because that moment, when something half-finished needs help, occurs constantly.
Then there’s the ad. The Super Bowl spot was Anthropic’s joke to make, but OpenAI supplied the punchline: ads are now actually rolling out inside ChatGPT, in a pilot with three of the world’s largest advertising agencies. The program is currently live for roughly 5% of mobile users, with analysts projecting it could generate more than $30 billion in ad revenue by 2030.
Claude: The Deliberator
The Claude user sits down with a different kind of problem. They’re wrestling with something complicated: a strategic decision, an argument that doesn’t quite hold up, a question with too many variables.
Common Claude prompts reflect that. The model has gradually attracted users who value that more deliberate style of interaction, and the recent advertising campaign makes the positioning explicit. For many, it’s exactly what they want to hear: some users genuinely want a model that behaves less like a writing assistant and more like a reasoning partner.
What that framing flatters, though, is worth examining. The Deliberator persona may be the one users most want to believe they embody. Whether they’re actually sending those prompts — or pasting in a half-written email at 11 p.m. and asking Claude to “make it better” — is a different question. The identity on offer is prestigious; the actual use is probably messier.
There’s also a more complicated story running alongside the campaign. Anthropic has refused the Pentagon’s demand to make Claude available for “all lawful purposes,” drawing a line specifically at mass surveillance and fully autonomous weapons. The Defense Department responded by labeling Anthropic a “supply chain risk” — normally reserved for foreign adversaries — and setting a six-month deadline to remove Claude from military systems. xAI’s Grok, which agreed to the “all lawful use” standard without hesitation, has since been integrated into classified Defense systems. It says something about Claude that the ads don’t: the model with the most refined reasoning is also the one with the most specific restrictions on how that reasoning can be used. Claude’s territory is narrower. That can be a feature and a constraint.
Perplexity: The Skeptic
They’re reading an article, and something feels off. The claim sounds too confident. The statistic seems oddly specific. The author didn’t link to the study they’re referencing.
“What sources support this claim?”
Perplexity feels different from most chatbots. The interface pushes users toward verification. Every answer arrives with citations. That small design decision shapes behavior. People don’t paste paragraphs asking for rewrites. They want sources, comparisons, and summaries of reporting. They want receipts. Perplexity made its own foray into advertising last year to drive the point home: “When you need to get it right, ask Perplexity.”
Gemini and Copilot: The Admin
Despite what the AI evangelizers would have you believe, many people rarely, if ever, intentionally open an AI tool at all, yet they work in AI-enabled environments every day.
Maybe they’re in Gmail or Outlook, editing a document, or reviewing a spreadsheet. The AI is already there: flashy new barnacles on an old interface offering all kinds of helpful improvements.
These users aren’t seeking out AI. It has simply appeared inside the tools they already rely on. That distribution strategy explains why Gemini and Copilot usage surged once the assistants were embedded across Google’s and Microsoft’s ecosystems. The big, exciting number is self-engineered. The quality of the features, in many cases, is secondary.
The persona here isn’t philosophical; it’s forced. It’s the accidental solution to bullshit jobs. It’s the drudgery and busywork usually left to the under- or unpaid rank-and-file. Most people never chose this tool. It was chosen for them.
Grok: The Reactor
Grok users come from a different direction. They’re not writing emails or analyzing strategy. They’re watching people argue — or jumping into the fray themselves. Grok lives inside X, right next to the discourse, and responds publicly.
Its most common job is explaining what’s happening in a conversation. It has also been weaponized by those looking for evidence to bolster an argument, whether that evidence exists or not. The platform’s troubles have graduated from unfortunate to severe: what began as Grok generating nonconsensual explicit images of adult women has since escalated into lawsuits from minors alleging the tool was used to produce child sexual abuse material from their photos. A bipartisan coalition of 35 attorneys general demanded answers. The Pentagon integrated Grok into classified systems the same week the lawsuits were filed.
xAI is moving fast in every direction — Pentagon contracts, a $20 billion Series E, SpaceX acquisition, Wall Street hiring to train Grok in finance. The user who wants chaos explained quickly is getting a tool that increasingly operates at an institutional scale. Whether those two things are coherent is a question xAI seems uninterested in asking.
Who said what? What does this mean? Is this true? The Reactor is trying to make sense of chaos quickly, whether through curiosity, interrogation, or confirmation. The tool keeps delivering. So does the chaos.
DeepSeek: The Optimizer
Whatever they’re developing, they’ve already used three other models for the job. They know what good output looks like, and they’re impatient with the overhead. They want the function generated, the algorithm explained, and the bug fixed. They did the math on cost per token and made a decision most people wouldn’t bother with.
The controversy didn’t particularly slow them down. If anything, it sorted the room.
Yet, there’s new competition they may not have anticipated: open-source agentic tools like OpenClaw have emerged as an alternative to both proprietary and Chinese models, popular with developers who want full control and no dependence on big AI infrastructure. The room keeps sorting.
Among these big players, there’s been little movement beyond the occasional blip. So how often do people actually switch? Casual users stick. Someone who learned to paste things into ChatGPT at work isn’t running benchmarks at home. Inertia is real, and for the majority of users — the accidental ones, the occasional ones — it’s probably the dominant force.
The people most likely to have read this far aren’t those users. Heavy users, the ones who actually form opinions about these tools, tend to accumulate rather than commit. They develop a small stable of models, each assigned its own lane.
Migration, when it happens, isn’t gradual. It’s triggered: by a quality cliff, a task the current tool visibly can’t handle, or a peer recommendation that arrives at exactly the right moment of frustration. People don’t leave on a Tuesday. They leave when something fails them on something that matters.
The tools don’t just attract identities; they reinforce them. The Reactor gets more reactive. The Deliberator grows more comfortable mistaking AI-assisted reasoning for their own. The Admin surrenders more of their workflow without ever having chosen to. The grooves deepen.
When user behavior is so sticky, and the personas are so defined, is a campaign like Anthropic’s revolutionary or self-reinforcing? As platforms and their user bases self-segment, our loyalties become tied to different aspects of ourselves: some noble, some far less so.
That still leaves us with a choice. When you look out at what these different platforms attract, encourage, and promote — which version of yourself are you reaching for?


