
Trust as a ranking factor: why AI agents recommend some brands and not others

Gecko Studio · Updated Apr 2026
[Image: a professional handshake in natural light, symbolising trust as a ranking factor]

Picture this scene, which is no longer hypothetical: a potential client opens ChatGPT and types "I need an SEO agency for a hotel in Ibiza, recommend me three". The model replies with three names, short descriptions and, if the user wants, links. That response has decided, in seconds, who gets the commercial conversation and who doesn't. What's interesting is that this decision wasn't made by a link algorithm or a keyword dashboard: it was made by a model that optimised for trust rather than visibility. Trust, not position. And that's probably the most important piece of SEO news of the last two years, even if almost nobody is framing it that way. This article is about how AI agents decide who to recommend, why trust has become the new ranking factor, and what you need to do to be on that side.

What search engines did for 20 years versus what models do now

For two decades, search engines made an implicit promise to the user: "I'll give you many options, you pick the one that convinces you". That's the logic behind 10 organic results. The user saw a list, evaluated titles, descriptions, snippets, sometimes clicked through to two or three, and decided. The engine didn't choose; the user did. The SEO's job was to get on that list as high as possible and make the description more attractive than the rest.

AI agents have broken that contract. When a user asks a model "recommend me three agencies", the model no longer serves 10 neutral options: it picks the ones it considers best and presents them as its recommendations. That's a huge qualitative difference. The user delegates the decision to the model, and the model takes on the responsibility of not getting it wrong. Getting it wrong, for a model, doesn't mean "giving an incorrect answer"; it means "recommending someone who turns out to be a disaster". Models are trained (and increasingly fine-tuned) to minimise that risk. And that completely reconfigures who gets recommended and who doesn't.

What exactly does a model understand by "trust"?

Trust in this context isn't a single metric: it's a combination of signals that, together, make the model consider that recommending you carries low reputational risk for it. It's helpful to think about it from the model's point of view, not the business's: if I were an AI assistant tasked with recommending, which signals would I look at to decide who to put first? These are the ones that matter, based on what we've seen in tests and in the labs' own literature:

1. Information consistency over time and across sources

A model looks at whether the information about your business is consistent between what your site says, what external media say, what directories say, what reviews say. If everything points in the same direction, trust rises. If there are contradictions (the site says you're 8 people, LinkedIn shows 2, Google Business lists one address and Facebook another), the model rules you out before even evaluating your capabilities. Not because you're bad, but because recommending someone whose signals aren't clear is riskier.
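To make that concrete, here's a toy sketch (our illustration, not anything a model actually runs) of what a consistency check across sources looks like: normalise each record, then flag the fields where sources disagree. The field names and data are hypothetical.

```python
# Illustrative only: a toy consistency check over business records
# gathered from different sources. Sources, fields and data are hypothetical.

def normalise(value: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't count."""
    return " ".join(value.lower().split())

def consistency_report(records: dict[str, dict[str, str]]) -> dict[str, set[str]]:
    """Return, per field, the distinct values found, keeping only fields where sources disagree."""
    fields: dict[str, set[str]] = {}
    for source, record in records.items():
        for field, value in record.items():
            fields.setdefault(field, set()).add(normalise(value))
    return {field: values for field, values in fields.items() if len(values) > 1}

records = {
    "website":  {"name": "Gecko Studio", "team_size": "8", "address": "Carrer Example 1, Ibiza"},
    "linkedin": {"name": "Gecko Studio", "team_size": "2", "address": "Carrer Example 1, Ibiza"},
    "google":   {"name": "Gecko Studio", "team_size": "8", "address": "Av. Different 99, Ibiza"},
}

for field, values in consistency_report(records).items():
    print(f"Inconsistent {field!r}: {sorted(values)}")
# Flags 'team_size' and 'address': exactly the kind of contradiction
# that makes recommending you riskier than recommending someone else.
```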

2. Verifiable human authorship signals

Content signed by identifiable people, with their own history, with external profiles backing it up, carries much more weight. Not because it's easier to index, but because the model can locate a chain of responsibility: "this content was written by X, X has these profiles, these profiles have been active for years, X doesn't look like a ghost". Content with that clear chain is considerably more citable than the same content without a signature or with a vague one.

3. External reputation (what others say about you, not you)

This is probably the one that weighs most and the one almost no one actively works on. Mentions in sector media, detailed reviews on recognised platforms, presence in relevant directories, third-party articles about you, case studies where you're named, collaborations with other known brands — all of that builds a layer of external evidence. A model understands that authority is something granted from outside, not something you declare about yourself.

4. Operational transparency

Signals like an "about us" page that tells who the people are, a clear privacy policy, accessible terms of service, visible prices (or at least ranges), opening hours, real contact details — all of that adds up. These are the things an experienced human eye looks at when landing on a site to decide whether to trust it. Models have learned to watch the same cues.

5. Longevity and continuity

A site that's been publishing regularly for years carries more trust than one that appeared a month ago, even if it's published better content in the last month. Continuity is a proxy for being a stable business. It isn't decisive, but it's a signal models integrate. That's why losing publishing continuity for 6 months on an active blog has an invisible but real cost in citability.

6. Absence of negative signals

Sometimes trust isn't built — it's preserved. If there are very visible negative mentions (public complaints, controversies, legal issues, massive bad reviews, appearances in dubious-reputation listings), the model prefers not to risk it. Not because it's decided you're bad, but because other candidates carry less reputational risk. Reputational hygiene matters a lot more than most people think.

Why models penalise brands that technically do everything right

There's a pattern that confuses many people: businesses with an impeccable website, clean technical SEO, abundant content — that never get recommended by any model. Why? Because technique doesn't make up for the absence of external signals. It's perfectly possible to have a 10/10 website on a technical checklist and a 2/10 on trust in a model's eyes. And that 2/10 is a knockout.

The typical things we've seen in these cases are:

  • Technically impeccable site, but no authorship in the content. All articles signed by "Team" or unsigned. For the model, there's no chain of responsibility.
  • Site with plenty of content, but not a single external mention of the business. If no one talks about you outside your own site, the model has no way to validate anything.
  • Site with a generic or empty "about us" page. No names, no trajectory, no photos, no transparency. Big negative signal.
  • Site with a broad category and no clear specialisation. "Full-service digital marketing for every kind of company". Impossible to cite as a specific recommendation.
  • Site with a short track record. Launched a few months ago, no history. The model prefers candidates with continuity.

None of these excludes you on its own; several at once do. And it's surprisingly common to see sites hitting 4 or 5 of these negative signals at once, then not understanding why "AI won't recommend me".

Experiments you can run today to see where you stand

The good news is that you can test your own trust position in AI models in half an hour. No special tool needed. It's as simple as it is tedious: sit down, ask the models questions, and note what they reply.

Experiment 1: The direct test

Open ChatGPT, Perplexity, Gemini and Claude. Ask all four the same question your ideal client would ask: "recommend me three [your sector] in [your area]" or "what's the best option for [problem you solve]". Note what each returns. If you show up in any, great; if you don't show up in any, you have trust work to do. If you show up with incorrect information, your priority is fixing the source of the error before anything else.

Experiment 2: The lateral test

Ask for specific names of your competitors ("tell me about [competitor]") and observe what information the model gives: how long they've existed, what their speciality is, which notable projects they have. If the model gives rich detail about your competitor and when you ask about you it responds with generalities or "I don't have enough information", that gap is exactly what you need to close.

Experiment 3: The contradiction test

Ask questions with incorrect premises about your business to see if the model corrects you. "Is it true that [your brand] only works with e-commerce?" If your brand is strong in trust, the model will correct the false premise with correct data. If it replies "I don't have enough information to confirm", you know your presence in external sources is insufficient.
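If you'd rather script these checks than paste prompts by hand, here's a minimal sketch of the three experiments as a prompt battery. It uses the OpenAI Python client as one example; you'd repeat the same prompts against Perplexity, Gemini and Claude through their own clients or interfaces. The brand name and prompts are placeholders, and this is one way to do it, not an official measurement method.

```python
# Minimal sketch: run the three experiments as a prompt battery against
# one model. Prompts and brand name are placeholders; repeat with the
# other providers (or by hand) to compare across the four assistants.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

BRAND = "Your Brand"  # placeholder
PROMPTS = {
    "direct":        "Recommend me three SEO agencies for a hotel in Ibiza.",
    "lateral":       f"Tell me about {BRAND}: how long have they existed and what is their speciality?",
    "contradiction": f"Is it true that {BRAND} only works with e-commerce?",
}

for name, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # pick whichever model you want to test
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    print(f"--- {name} ---\n{answer}\n")
    # For your log: does the brand appear, is the information correct,
    # and does the model correct the false premise?
```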

Infographic · Building trust: a 90-day timeline to earn citations in AI agents

  • Days 1-15, your own house: "about us" page, authorship with bios, Organization + Person schema, consistent NAP.
  • Days 15-45, external evidence: 2-3 guest posts, specialist forums, directories, one interview in sector media.
  • Days 45-75, citable content: 2-3 dense pieces (your own data, an exhaustive guide, a case study with figures).
  • Days 75-90, measure and adjust: repeat the experiments on ChatGPT, Perplexity, Gemini and Claude and compare with the starting point.

How to build trust in 90 days realistically

Trust can't be bought or sped up with hacks, but it can be moved with concentrated work. In 90 days you can shift enough signals for a model to start considering you recommendable in queries within your niche. This is what we'd apply for a new client starting from scratch.

Days 1-15: Put your own house in order

Before asking for external attention, fix what's internal. Complete "about us" page with names, photos, trajectory, years of experience. Articles signed by specific people, not by "team". Author bios on each profile with links to LinkedIn and external professional profiles. Consistent contact data on site, GBP, socials, directories. Clear privacy policy and terms. Organization and Person schema done well. It's the cheapest and most profitable move: many candidates fail at this first step.
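As one concrete reading of "Organization and Person schema done well", here's a minimal sketch that generates the JSON-LD with Python. Every name, URL and profile is a placeholder, and schema.org offers many more optional properties (address, logo, foundingDate) than shown; treat this as a starting point, not a complete markup.

```python
# Minimal sketch: Organization + Person JSON-LD for the site's <head>.
# All names and URLs are placeholders; schema.org defines many more
# optional properties than the handful used here.
import json

schema = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Your Brand",
            "url": "https://example.com",
            "sameAs": ["https://www.linkedin.com/company/your-brand"],
        },
        {
            "@type": "Person",
            "name": "Jane Doe",  # a real, identifiable team member
            "jobTitle": "SEO Consultant",
            "worksFor": {"@id": "https://example.com/#org"},
            "sameAs": ["https://www.linkedin.com/in/jane-doe"],
        },
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(schema, indent=2))
print("</script>")
```

Note how the Person links back to the Organization via `worksFor` and out to external profiles via `sameAs`: that's the machine-readable version of the chain of responsibility described earlier.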

Days 15-45: Generate external evidence

Time to go beyond your own site. Guest posts on 2 or 3 sector media. Active participation in specialist forums, answering questions with quality. Mentions in relevant professional directories (well chosen, not spam). Complete profiles on platforms where your audience looks for you (LinkedIn, sector profile, topical communities). Try to secure at least one interview in specialist media, even a small one. The goal isn't volume: it's for there to be 5-10 external sources talking about you with consistent data.

Days 45-75: Own content with citable value

Now it's time to produce 2-3 substantial pieces of content that can be cited: an analysis with your own data, a complete guide on a sub-topic of your speciality, an anonymised case study with real figures. Not 15 generic posts: 3 pieces that don't exist elsewhere. This is the content models use as a source when someone asks something very specific, and it's what turns a neutral site into a "reference" site.

Days 75-90: Repeat the experiments and measure

On day 90, rerun the ChatGPT, Perplexity, Gemini and Claude tests. Compare with the starting point. Don't expect a miracle: models take weeks to refresh recent signals. But if you've done the work well, there should be visible changes: more complete information when they ask about you, some appearance in recommendations, correction of false premises. If nothing has changed, review what wasn't executed properly.

What doesn't work (and what doesn't deserve your time)

A lot of things get sold as "essential for AEO" that don't actually move trust in models' eyes. To save you money:

  • Buying mentions on dubious-reputation sites. It doesn't build trust — it degrades it. Models integrate source quality, not just quantity.
  • Generating hundreds of AI posts to "feed" models. The only thing you're feeding is noise. Models detect valueless AI-generated content and discount it.
  • Faking knowledge panels. Fake Wikipedia panels and similar are easily detectable and can trigger visible penalties.
  • Obsessing over backlink volume. A link from a serious publication is worth more than fifty from irrelevant blogs. The focus should be on quality, not quantity.
  • Tracking keyword positions without looking at citations. You can be losing trust while keyword positions hold. Don't let a single KPI blind you; a minimal citation-log sketch follows this list.
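On that last point, here's a sketch of what "looking at citations" can mean in practice: log each model answer from the experiments above and track how often your brand is mentioned over time. The file format and field names are our own convention, purely illustrative.

```python
# Sketch: log model answers over time and count brand citations, so trust
# can be tracked alongside (not instead of) keyword positions.
# File format and field names are our own convention, not a standard.
import json
import re
from datetime import date

LOG_FILE = "citation_log.jsonl"
BRAND = re.compile(r"your brand", re.IGNORECASE)  # placeholder pattern

def log_answer(model: str, prompt: str, answer: str) -> None:
    """Append one dated record noting whether the brand was cited."""
    entry = {
        "date": date.today().isoformat(),
        "model": model,
        "prompt": prompt,
        "cited": bool(BRAND.search(answer)),
    }
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def citation_rate() -> float:
    """Share of logged answers that mention the brand."""
    with open(LOG_FILE) as f:
        entries = [json.loads(line) for line in f]
    return sum(e["cited"] for e in entries) / len(entries) if entries else 0.0
```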

Where all this fits in a serious strategy

Trust isn't a section of the SEO plan: it's the axis of the SEO plan from 2026 onwards. Every tactical action (content, on-page optimisation, links, structured data, authorship) should align towards a common goal: building an entity that models and search engines consider recommendable. When we at Gecko Studio plan 6-12 months of work with a client, this axis is always part of the structure: which trust signals exist today, which are missing, which can be built in 90 days, which take 12 months. Without that axis, you end up producing a lot of tactical noise without real results.

Building trust as a ranking factor in the next 90 days

If you accept that trust is the new ranking factor, the calendar for the next 90 days simplifies. You spend the first 15 days putting the internal side in order: a real "about us" page with names and trajectory, articles signed by people with linked bios, contact data consistent across channels and Organization + Person schema done well. From days 15 to 45, generating external evidence: 2-3 guest posts on sector media, participation in specialist forums, complete professional profiles and at least one interview in a publication, however small. From days 45 to 75, producing 2-3 pieces of content with citable value of your own (data, exhaustive guide, case study with figures). And on day 90, you repeat the ChatGPT, Perplexity, Gemini and Claude experiments and compare with the starting point.

This plan doesn't depend on budget: it depends on consistent execution. What we see fail most often isn't the strategy; it's giving up at four weeks, before results are visible. Trust is built on the same timeline in which models refresh signals and your mentions start appearing in their responses, and no tool or shortcut speeds that up. Whoever starts today arrives in three months; whoever keeps waiting until it's clear how models work will arrive late, when a competitor has already figured it out.


Want to apply this to your business?

We analyse your case and propose a tailored SEO strategy. Free consultation.

Talk to an expert
