Is ChatGPT Shopping Research Really About Products?
- Nick Gray

“Why ChatGPT recommendations will reward explainable brands and destroy meaningful ones.”
Through the general conversations I have with friends, family, brands and retailers, there's a consensus: most people assume artificial intelligence for retail will just be a much faster version of Google.
A new interface.
A shorter shortcut.
A much smarter recommendation engine that saves you from scrolling.
But when OpenAI launched Shopping Research inside ChatGPT in late 2025, the ground shifted in a way that most still haven’t processed. The announcement itself was striking for how modest, almost understated, it was: “ChatGPT will help you research products and build a thoughtful guide to help you decide.” There were no grand claims about revolution, no flashy demos, none of the Silicon Valley swagger I’ve loved seeing over the years. And yet for me, what it quietly enabled is far more consequential than anything Amazon or Google has done in the last decade.
For the first time ever, millions of consumers are being handed a decision-making model rather than a list of links, and that will change how people shop, how people learn, and eventually, how people think.
So what does ChatGPT actually do when you research products?
When people think of ChatGPT, most assume it is simply summarising the internet for us. We picture an algorithm scraping blog posts, extracting bullets, and serving them back to us in a polite tone. That misunderstanding is rooted in how we’ve all been conditioned by search engines.
ChatGPT is not a search index. It is a reasoning engine.
What that means is that when you ask for product recommendations, it performs three processes at the same time, and understanding these is essential to understanding the future of retail.
1. Pattern recognition
It doesn’t read one review the way a human does. It sees millions of them. It sees the arguments people make for and against purchases, the words they use to justify price, the phrases that appear when someone is disappointed, the sentiment patterns that correlate with returns, and the emotional language that correlates with loyalty.
So instead of thinking:
“This is a running shoe. It is made of X materials.”
It thinks:
“This is a running shoe frequently chosen by people who live in hot climates, run on urban terrain, and value stability over cushioning.”
It understands why people choose, not just what they buy. Super clever.
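To make that concrete, here’s a toy sketch in Python of what that kind of pattern aggregation could look like. Everything here, the reviews, the signal lists, the labels, is invented for illustration; OpenAI hasn’t published how Shopping Research actually works under the hood.

```python
# A toy illustration (not OpenAI's actual method): aggregating review
# language to infer *why* people choose a product, not just its specs.
from collections import Counter

# Hypothetical review snippets for one running shoe
reviews = [
    "great stability for daily runs on hot city pavement",
    "stable ride, breathable in the heat, not much cushioning",
    "perfect for urban running in summer, very supportive",
]

# Context signals we might look for (invented for this sketch)
signals = {
    "hot climate": ["hot", "heat", "summer", "breathable"],
    "urban terrain": ["city", "pavement", "urban", "concrete"],
    "stability over cushioning": ["stability", "stable", "supportive"],
}

profile = Counter()
for review in reviews:
    for label, keywords in signals.items():
        if any(word in review for word in keywords):
            profile[label] += 1

# The output is a "why people choose it" profile, not a spec sheet
for label, count in profile.most_common():
    print(f"{label}: mentioned in {count} of {len(reviews)} reviews")
```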
2. Constraint filtering
As humans we often make decisions by eliminating possibilities we already know we don’t want. At Nike we would do this with certain colours of footwear, knowing it quietly guides the customer to the one we want them to buy. ChatGPT does the same thing, except it does it at industrial scale.
So here's how it works. You set boundaries:
Budget: “Under $300.”
Use case: “Daily run, mostly concrete.”
Longevity: “Must last at least two to three years.”
Identity: “Premium but not flashy.”
Instead of making you scroll through 40 browser tabs, it narrows the universe to a small group of plausible candidates. The result is not more information. It is relevance, defined by the boundaries you set.
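Here’s a minimal sketch of that elimination step in code. The shoes, fields and thresholds are all hypothetical; the point is simply that every boundary you state becomes a hard filter on the candidate pool.

```python
# A minimal sketch of constraint filtering (all product data is invented)
candidates = [
    {"name": "Shoe A", "price": 280, "surface": "concrete",
     "lifespan_years": 3, "style": "understated"},
    {"name": "Shoe B", "price": 340, "surface": "trail",
     "lifespan_years": 4, "style": "flashy"},
    {"name": "Shoe C", "price": 150, "surface": "concrete",
     "lifespan_years": 1, "style": "understated"},
]

# The boundaries the shopper sets in conversation
constraints = {
    "max_price": 300,          # "Under $300"
    "surface": "concrete",     # "Daily run, mostly concrete"
    "min_lifespan_years": 2,   # "Must last at least two to three years"
    "style": "understated",    # "Premium but not flashy"
}

def passes(shoe: dict) -> bool:
    """Eliminate anything that violates a boundary, as a buyer would."""
    return (shoe["price"] <= constraints["max_price"]
            and shoe["surface"] == constraints["surface"]
            and shoe["lifespan_years"] >= constraints["min_lifespan_years"]
            and shoe["style"] == constraints["style"])

plausible = [shoe["name"] for shoe in candidates if passes(shoe)]
print(plausible)  # -> ['Shoe A']
```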
3. Decision modelling
Ok, so brace yourself, because here’s the part most people don’t see: ChatGPT is not just comparing attributes, it is balancing the emotional trade-offs that humans make subconsciously.
Humans do this emotionally:
“I don’t want to look like I’m trying too hard.”
“I don’t want to regret paying an extra $80.”
“I want something that makes me feel competent.”
And when ChatGPT does it, it does it logically:
performance vs comfort
price vs longevity
identity signalling vs practicality
credibility vs trend
So basically it isn't replacing intelligence. It is compressing it.
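One crude way to picture that compression is to collapse the competing trade-offs into weighted scores and rank what’s left. The weights and numbers below are entirely made up; this is a sketch of the shape of the logic, not of what the model actually computes.

```python
# A hedged sketch of decision modelling as weighted trade-offs.
# Weights and scores are invented; the point is the structure.

# How much this particular shopper cares about each axis (sums to 1.0)
weights = {
    "performance": 0.30,
    "comfort": 0.20,
    "longevity": 0.25,
    "identity": 0.15,   # "premium but not flashy"
    "price": 0.10,      # lower weight: budget was already filtered
}

# Normalised 0-1 scores per candidate on each axis (illustrative only)
candidates = {
    "Shoe A": {"performance": 0.9, "comfort": 0.6, "longevity": 0.8,
               "identity": 0.7, "price": 0.5},
    "Shoe C": {"performance": 0.6, "comfort": 0.8, "longevity": 0.4,
               "identity": 0.6, "price": 0.9},
}

def utility(scores: dict) -> float:
    """Collapse competing trade-offs into a single comparable number."""
    return sum(weights[axis] * value for axis, value in scores.items())

ranked = sorted(candidates, key=lambda name: utility(candidates[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {utility(candidates[name]):.2f}")
```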
When you use it properly, you are no longer collecting raw inputs, you are shaping the entire decision environment. I can still feel your excitement as you read that. The part that most people don’t notice, though, is the emotional cost of convenience. When a model hands you the “right” answer every time, you slowly stop noticing how products make you feel. You stop trying things that are unfamiliar, and you stop discovering.
You have now outsourced your intuition.
And once intuition goes, taste follows, because taste is born from exposure, discomfort, experimentation and, often, regret. No machine can feel regret on your behalf.
The problem is that a lot of brands won't see any of this. They will assume the challenge is informational, that the model needs clearer data, more features, more comparisons, or stronger SEO signals. They will sit in a boardroom and someone will eventually say, “We need to optimise our product pages for ChatGPT” (I've literally heard this), and everyone will nod because it sounds strategic, it sounds modern, and it feels like the sort of thing a responsible business should be doing. And in doing so,
they will walk themselves right into the trap.
Here’s the thing: the moment you start optimising for the model, you also start sanding the edges off your brand. You make your brand language more generic, more predictable, more categorical. You simplify tone, trim narrative, flatten meaning, and intentionally bring every sentence closer to the centre of the bell curve. You do this because you think it will make you easier to understand, when really what you're doing is becoming easier to replace.
ChatGPT does not reward originality. It rewards consensus.
It does not reward myth. It rewards clarity. It does not reward the strange, dangerous edges of culture. It rewards the documented, the referenced, the statistically safe. And this is where I see high-performing brands making catastrophic mistakes: they will attempt to out-compete the model at its own game. They will rewrite product descriptions, standardise tone, and ask their marketing teams to make their “value propositions more intelligible to AI.” Agencies will build prompts to “optimise emotional language,” and the crazy bit is that founders will genuinely believe they are innovating.
What they won’t realise is that they are killing the very thing that makes them economically interesting, all because ChatGPT is not ranking brands on desirability. It is ranking them on explainability.
In my opinion, the brands that are easiest to explain are the ones that will die first.
Let me show you the difference.
A $120 mass-market shoe with a simple proposition, durable, versatile, wide fit, slots into the machine seamlessly.
A $380 shoe built for people who run at 5am in silence because it’s the only time their head stops screaming? The model has nowhere to put that.
All it will do is try its best to summarise it into something legible:
“A premium performance trainer designed for neutral runners.”
And in that moment (I close my eyes and shake my head), the brand’s entire soul evaporates. This is the part that keeps me up at night, because when every purchase becomes an answer instead of a journey, people stop walking the path. We stop browsing, stop experimenting, stop comparing, and we learn to trust the output more than our own instincts. And once people stop exercising instinct, they slowly lose the muscle of judgment. That is the silent cost of relying on AI to shop for you. This is the massive mistake I worry a lot of retailers will make, simply because they will treat AI as a funnel, an exciting tool to smooth friction, calm uncertainty and accelerate choice.
"If you assume the future is about convenience. It is not. The future is about meaning."
So here's what happens when everything becomes efficient: the only thing left with value is the feeling that cannot be automated. When every product is rationalised, the only thing that matters is the emotional stakes the buyer cannot articulate. And when every brand begins to speak in the same voice, clear, concise, perfectly structured, the brands that whisper instead of explain are going to own culture.
I can hear you all wanting to know what this means in a physical store. This is exciting for me personally, because it means the staff who used to be “salespeople” now become translators of insecurity. It means merchandising is no longer about visibility; it is about narrative. It means the playlist matters. It means the way someone holds a product before trying it on matters. It means silence, yes that's right, silence, can close a sale when a customer is confronting themselves.
AI can tell you which shoes people buy when they’re ashamed of their performance, but it cannot look you in the eye when you’re lacing them up and say, without words, “You’re not alone.” And that is where retail will split in two.
Let's not get it twisted: there will definitely be brands who chase the model, who optimise to be selected or try to be the best recommendation, basically turning identity into an index. And there will be the brands who realise that the only defensible position in an AI-shaped market is the one the model cannot rationalise. Brands who realise that their job is not to be clear, but to be consequential.
It's easy really. A shoe that makes sense is a product. A shoe that makes you feel like you’re becoming someone is an absolute ritual. Rituals don’t live in bullet points but in memory, in discomfort, in the quiet negotiations people have with themselves when no-one is watching. They live in the emotion long before the language and in the desire before the explanation.
That is exactly where IGU Global operates.
Not on the surface where the model scrapes. Not in the content layer where SEO agencies play. Not in the performance dashboards where marketers panic. We work in the emotional defaults that shape human behaviour and in the symbolic meaning products carry long after the transaction. We work in the narrative identity people are trying to construct when they buy something they don't technically need. We work in cultural positioning, the way a product whispers allegiance, not affordability. And above all, we work in taste. Taste is the boundary AI cannot cross. Taste is exclusion. Taste is restraint.
Taste is saying: “This is not for me yet.” Taste is the emotional scarcity that makes belonging feel earned.
So AI rewards the majority and taste rewards the initiated.
So we will see many in the industry run toward optimisation, I have no doubt, because that's the language they know. Agencies will package “AI-ready” brand frameworks, founders will obsess over prompt templates and semantic SEO, and retailers will inevitably turn their stores into physical mirrors of machine logic: efficient, legible, predictable. And personally, I see nothing malicious in that response. It comes from a sincere desire to stay relevant, and from the assumption that clarity and structure will make them easier for the model to recommend.
But there is another path, and it sits in the space the model cannot reach. While some brands focus on being selected, others will focus on being remembered; while some chase speed, others will protect depth; instead of refining their answers, others will refine their meaning. Convenience ends at the checkout; meaning begins once the product enters your life. So while AI can collapse options into clarity, it cannot walk you through the parts of yourself that only surface in uncertainty: insecurity, aspiration, self-respect, belonging. It's important we understand that these moments are not inefficiencies; they are critical to forming our identity as people.
At the end of the day, ChatGPT will absolutely help people decide what to buy, but it will not help them decide who they are becoming.
As the transaction becomes more frictionless, the identity attached to the purchase becomes the only thing that still matters.
— Nick Gray
Founder & CEO, IGU Global