As individuals increasingly turn to Generative AI search tools like ChatGPT instead of Google for product searches, businesses must ponder: how do we secure a top position in these responses?

Unveiling the Technical Landscape

In this article, I look at the technical details. The field is still full of uncertainty, but traditional branding methods appear to remain effective.

The Shift Toward Generative AI Search Tools

Generative AI search applications are changing how people find information. For quick answers and query resolution, platforms like ChatGPT and Gemini pose a serious challenge to search engines such as Google.

Key Distinctions to Consider

We must distinguish between:

  1. The potential decline in website clicks from search engine results (SERPs),
  2. A possible decrease in overall search engine usage or query frequency.

A Gartner study forecasts a 25% drop in search engine usage by 2026 in favor of AI chatbots and virtual assistants.

Projections and Personal Insights

I am skeptical about a major shift by 2026, but I believe future generations will increasingly rely on AI chatbots for information and product research. A 25% shift is plausible over a five to ten-year period rather than two. This change will be gradual due to users’ ingrained habits. I also anticipate a more rapid decline in search engine traffic to websites with the rise of AI Overviews (formerly SGE). I predict a 20% reduction in search engine traffic in the first two years of its introduction, with varying impacts based on search intent, and an increase in “no-click searches” as generative AI provides comprehensive answers.

This shift will shorten research and customer journeys, transforming the entire exploration process.

image source: https://www.kopp-online-marketing.com/llmo-how-do-you-optimize-for-the-answers-of-generative-ai-systems

To build awareness along the user journey, it is unwise to focus solely on search engine rankings, clicks, or website traffic.

If you ask ChatGPT today about a car that meets specific criteria, it recommends particular models:

Similarly, if you pose the same question to Gemini, it will also suggest particular car models, complete with images.

Interestingly, the example above recommends different car models depending on the intended use.

ChatGPT’s recommendations include:

  • Tesla Model Y
  • Toyota Highlander Hybrid
  • Hyundai Ioniq 5
  • Volvo XC90 Recharge
  • Ford Mustang Mach-E
  • Honda CR-V Hybrid

Gemini’s recommendations feature:

  • Chrysler Pacifica Hybrid
  • Toyota Sienna
  • Mid-size SUVs in general
  • Toyota RAV4 Hybrid
  • Three-row SUVs
  • Toyota Highlander Hybrid

This divergence shows that the recommendations vary depending on the underlying large language model (LLM) and the AI system built on top of it.

Going forward, it will be critical for companies to appear in such recommendations in order to be part of the relevant consideration set of potential solutions.

But what precisely underlies these model suggestions by generative AI?

To unravel this, we must delve deeper into the technological mechanics of generative AI and LLMs.


Excursus: The Mechanics of LLMs

Modern transformer-based LLMs like GPT or Gemini are built on statistical analysis of how often tokens or words co-occur.

For this purpose, texts and datasets are broken down into tokens for machine processing and placed in semantic spaces using vectors. Vectors can represent entire words (Word2Vec), entities (Node2Vec), and attributes.
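To make the tokenization step concrete, here is a minimal sketch. The vocabulary and sentence are made up for illustration; real LLMs use learned subword tokenizers (such as byte-pair encoding) with vocabularies of tens of thousands of tokens, but the principle is the same: text becomes a sequence of token IDs that index into a learned embedding table.

```python
import re

# Toy vocabulary mapping words to token IDs (illustrative only).
vocab = {"<unk>": 0, "the": 1, "tesla": 2, "model": 3, "y": 4,
         "is": 5, "an": 6, "electric": 7, "suv": 8}

def tokenize(text):
    """Split text into lowercase words and map each to a token ID;
    unknown words fall back to the <unk> token."""
    return [vocab.get(w, vocab["<unk>"]) for w in re.findall(r"[a-z]+", text.lower())]

print(tokenize("The Tesla Model Y is an electric SUV"))
# → [1, 2, 3, 4, 5, 6, 7, 8]
```

Each ID would then be looked up in an embedding table to obtain the vector representation discussed below.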

In semantics, such a semantic space is also referred to as an ontology. Because LLMs rely on statistics rather than true semantics, they do not constitute ontologies. Nonetheless, AI approximates semantic understanding due to the vast volumes of data it can process.

Semantic proximity can be measured via Euclidean distance or cosine similarity within this vector space.
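A small sketch of both measures, using made-up four-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the values here are invented purely to illustrate the math):

```python
from math import sqrt

# Hypothetical toy embeddings for three terms (illustrative values only).
vectors = {
    "electric_car": [0.9, 0.8, 0.1, 0.2],
    "hybrid_suv":   [0.7, 0.9, 0.2, 0.3],
    "toaster":      [0.1, 0.0, 0.9, 0.8],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # 1.0 means identical direction, 0.0 means unrelated (orthogonal).
    return dot(a, b) / (sqrt(dot(a, a)) * sqrt(dot(b, b)))

def euclidean_distance(a, b):
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Semantically close terms score high on cosine similarity and
# low on Euclidean distance; unrelated terms do the opposite.
print(cosine_similarity(vectors["electric_car"], vectors["hybrid_suv"]))
print(cosine_similarity(vectors["electric_car"], vectors["toaster"]))
```

Cosine similarity is usually preferred for text embeddings because it compares direction rather than magnitude.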

This is how connections between products and attributes can be identified. LLMs use natural language processing to encode text into tokens, which they can then classify into entities and attributes.

The more often certain tokens appear together, the more likely they are to be connected.
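The co-occurrence idea can be sketched with a simple windowed count over a toy corpus (three invented sentences; real models process billions of pages):

```python
from collections import Counter

# Tiny invented corpus for illustration.
corpus = [
    "the tesla model y is a popular electric family suv",
    "the toyota highlander hybrid is a spacious family suv",
    "this toaster browns bread evenly",
]

def cooccurrences(sentences, window=4):
    """Count how often two tokens appear within `window` words of each other."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            for other in tokens[i + 1 : i + 1 + window]:
                counts[tuple(sorted((token, other)))] += 1
    return counts

counts = cooccurrences(corpus)
print(counts[("family", "suv")])   # → 2 (co-occur in two sentences)
print(counts[("suv", "toaster")])  # → 0 (never co-occur)
```

Statistics like these (in far more sophisticated form) are what let a model associate "family SUV" with specific products: pairs that co-occur frequently end up close together in the semantic space.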

AI developers first pre-train LLMs on massive text corpora and then refine them with human-labelled data. This data comes from various sources, including internet crawls, databases, books, and Wikipedia. In fact, most cutting-edge LLMs are pre-trained on publicly available internet text, like the massive “Common Crawl” dataset with information from over three billion web pages.

However, the exact sources of this initial data are often unknown.

To reduce hallucinations and give LLMs deeper knowledge of specific subjects, their answers can be enriched with content retrieved from relevant sources at query time. This process is known as Retrieval-Augmented Generation (RAG).

Graph databases such as the Google Knowledge Graph or Shopping Graph can also be used within a RAG setup to develop a deeper semantic understanding.
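A minimal sketch of the RAG flow: retrieve the most relevant document for a query, then prepend it to the prompt so the model answers from grounded context. Everything here is simplified and hypothetical; a real system would use vector embeddings for retrieval and an actual LLM for generation, with simple keyword overlap standing in for both.

```python
import re

# Hypothetical document store (a real system would index many more).
documents = [
    "The Tesla Model Y is an all-electric SUV with up to 330 miles of range.",
    "The Toyota Sienna is a hybrid minivan seating up to eight passengers.",
]

def words(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs):
    """Return the document sharing the most words with the query
    (stand-in for embedding-based nearest-neighbor search)."""
    return max(docs, key=lambda d: len(words(query) & words(d)))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What range does the electric SUV have?", documents)
print(prompt)
```

The augmented prompt grounds the model's answer in retrieved facts, which is precisely why being present in the sources a RAG system draws from matters for visibility.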


LLMO, GEO, GAIO: Emerging Disciplines for Influencing Generative AI

Companies face the significant challenge of asserting their presence not only in traditional search engines but also in the outputs of language models. They can achieve this by including source references, such as links, or by ensuring mentions of their brand(s) and products.

The field of influencing the output of generative AI remains largely uncharted. Several approaches have been proposed under names such as Large Language Model Optimization (LLMO), Generative Engine Optimization (GEO), and Generative AI Optimization (GAIO).

Reliable evidence for practical optimization approaches is currently sparse. Therefore, researchers mainly derive their insights from a technological understanding of LLMs.

Establishing Credibility as a Thematically Trustworthy Source for Non-Commercial Prompts

For non-commercial prompts, the goal is to be cited as the original source, with a hyperlink to your website.

It stands to reason that AI systems with direct search engine access would refer to the highest-ranking content when formulating an answer.

For instance, consider the following prompt: “google core update March 2024.”

The sources referenced in Copilot include:

  • searchengineland.com
  • coalitiontechnologies.com
  • seroundtable.com

The rankings in the standard Bing results for the corresponding search query, excluding videos and news, are as follows:

  1. searchengineland.com
  2. blog.google
  3. searchenginejournal.com
  4. yoast.com
  5. developers.google.com
  6. semrush.com …

Some sources overlap with the search results, but not all.

For the same prompt, ChatGPT lists the following sources:

Google’s Gemini mentions the following sources:

In addition to relevance, other quality criteria seem to influence source selection, likely akin to Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) framework.

Studies on Google’s Search Generative Experience (SGE) indicate a high correlation with well-known brands. For instance, Peak Ace’s study in the tourism sector and another study by Authoritas highlight this trend.

Peak Ace analyzed the SGE to identify which travel-related domains are most frequently referenced. Their analysis revealed a trend: links tend to favor well-established brands and authoritative sources. This suggests that building credibility and trust as a reliable source is crucial for success in both traditional search engines and generative AI outputs.

Authoritas has investigated which domains are generally linked from the Search Generative Experience (SGE):

A connection between brand strength and the selection of sources for SGE can be inferred.

Digital Brand and Product Positioning for Commercially Driven Prompts

For purchase-oriented prompts, the primary objective is to have the AI directly recommend a brand or product within shopping grids or outputs.

But how can this be achieved?

A logical starting point is to focus on the user and their prompts. As is frequently the case, understanding the user and their needs is fundamental.

Prompts can offer more context than the few terms typically found in a standard search query. 

Companies should aim to position their brands and products within specific user contexts.

Brands and products should be positioned based on frequently requested attribute classes in the market and prompts (e.g., condition, usage, number of users) as initial reference points.

Where Should This Positioning Take Place?

Understanding which training data an LLM utilizes is crucial, and this depends on the specific LLM:

  • If an LLM has access to a search engine, highly ranked content within that search engine could be a valuable source.
  • Renowned industry directories, product databases, or other authoritative sources on relevant topics can be optimized for better positioning.
  • Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) concept can also play a significant role, particularly for models like Gemini or the SGE, in identifying reliable sources as well as trustworthy brands and products.

By focusing on these areas, companies can enhance their visibility and influence within AI-generated outputs.

Conclusion

The viability of Large Language Model Optimization (LLMO) or Generative AI Optimization (GAIO) as legitimate strategies for influencing LLMs towards specific goals remains uncertain. While skepticism exists within the data science community, others advocate for this approach.

To achieve practical application of LLMO or GAIO, actively pursue these goals:

  • Recognize Owned Media as a Trustworthy Training Data Source via E-E-A-T: Ensure your media is reliable and authoritative by following Google’s E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) guidelines.
  • Generate Brand and Product Mentions in Qualified Media: Actively seek mentions of your brand and products in reputable sources.
  • Create Co-Occurrences with Relevant Entities and Attributes in Qualified Media: Strive to consistently associate your brand with important attributes and entities within credible sources.
  • Integrate into Established Graph Databases: Proactively include your brand in databases like Knowledge Graph or Shopping Graph.

LLM optimization success depends on the market size: Establishing a brand in niche markets is easier within the relevant thematic context. In smaller markets, brands need fewer co-occurrences to get recognized by LLMs. In larger markets, with numerous players wielding substantial PR and marketing resources, this becomes more challenging.

GAIO or LLMO requires significantly more resources than traditional SEO to shape public perception. Nevertheless, maintaining strong SEO practices is essential, as well-ranked content continues to play a crucial role in training LLMs and achieving visibility.

Digital Authority Management

To develop a comprehensive strategy, consider the concept of Digital Authority Management. For a deeper understanding, read the article “Authority Management: A New Discipline in the Age of SGE and E-E-A-T.”

If LLM optimization is effective, major brands with extensive PR and marketing capabilities will likely gain significant advantages in search engine positioning and generative AI results.

However, traditional SEO strategies remain viable. AI developers also use well-ranked content to train LLMs, so focus on co-occurrences between brands/products and relevant attributes or entities.

We will know for sure whether SEO will lean more towards LLM optimization or continue on its current path only when SGE is fully introduced.

