
Atomic Media

Archive for the ‘seo news’ Category


Interactive CTV ads boost engagement, fall short on purchases

Friday, August 16th, 2024

A new study reveals the promise and limitations of interactive TV advertising. The key findings:

Why it matters. With CTV ad spend projected to reach $33 billion by 2025, interactive formats could become a major player in the space.

By the numbers:

Why we care. While not yet driving direct sales, interactive CTV ads offer a wealth of benefits that can significantly impact brand awareness, consumer understanding and overall marketing effectiveness.

What they’re saying:

The big picture. Interactive CTV ads represent a shift from passive viewing to active engagement, offering new opportunities for brands to connect with audiences.

What to watch. Development of add-to-cart functionality and potential for direct purchasing through streaming accounts.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google Ads API v15 to sunset Sept. 25

Thursday, August 15th, 2024


Google announced the upcoming sunset of Google Ads API version 15, urging developers to migrate to newer versions to maintain uninterrupted API access.

Why we care. Failure to upgrade could result in API requests failing, potentially disrupting advertising operations for businesses.

Key details:

How to prepare:

Between the lines. Regular API version sunsets are part of Google’s strategy to maintain up-to-date and secure systems while encouraging developers to adopt newer features and improvements.

What’s next. Developers should prioritize migration efforts to ensure smooth transitions before the September deadline.
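In practice, migration often amounts to bumping the pinned API version wherever your application declares it. Here is a minimal, hedged sketch using the official google-ads Python client library; the "v17" target is an assumption, so check the client library's changelog for currently supported versions:

```python
# Sketch: pinning a newer Google Ads API version in the official
# google-ads Python client library. The "v17" target is illustrative,
# not a recommendation from Google's announcement.
from google.ads.googleads.client import GoogleAdsClient

# load_from_storage() reads credentials from google-ads.yaml; the
# `version` argument pins every service request to that API version,
# so nothing silently keeps calling the sunsetting v15.
client = GoogleAdsClient.load_from_storage(version="v17")

# Services created from this client now issue requests against v17.
googleads_service = client.get_service("GoogleAdsService")
```

Auditing your codebase for hardcoded "v15" strings and version-specific imports is a quick way to confirm nothing is left behind.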

Bottom line. Proactive migration is crucial to avoid potential disruptions in Google Ads API functionality.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Publishers report ‘negligible’ traffic impact of Google AI Overviews

Thursday, August 15th, 2024

Two large publishers – Dotdash Meredith and Ziff Davis – say Google AI Overviews haven’t significantly impacted their traffic.

What Dotdash Meredith is saying. Here’s what IAC (owner of Dotdash Meredith) wrote in its Q2 2024 shareholder letter:

What Ziff Davis is saying. Here’s what Ziff Davis CEO Vivek Shah said on the company’s earnings call:

But. While this all makes sense, it’s important to remember that the rollout of Google AI Overviews has been fairly limited and volatile so far. Overall, AI Overviews appeared for 7% of queries, as of the end of July, according to BrightEdge research. The presence of AI Overviews peaked at 15% in May.

While Ziff Davis may not see AI Overviews as a “significant change” to Google’s search experience, I would add the word “yet” to the end of that sentence, because Google has clearly stated that search is evolving toward AI Overviews.

Why we care. Generative AI has just begun reshaping search and AI will continue to do so over the next decade. Granted, these two reports form a line, not a trend. But AI Overviews (and Search Generative Experience before it) caused a lot of anxiety for SEOs, publishers and content creators. So it’s helpful to get some insight into how publishers are being impacted in these early days of AI Overviews.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google August 2024 core update rolling out now

Thursday, August 15th, 2024

Google released the August 2024 core update today. It will take about a month to fully roll out.

This update is not just a normal core update. The August 2024 core update takes into account the feedback Google heard since the September 2023 helpful content update that seemed to have a negative impact on many small and independent publishers.

What Google is saying. John Mueller, Search Advocate at Google, wrote:

Google said this update aims to promote useful content from small and independent publishers, after Google listened to feedback it received since the release of the March 2024 core update. Mueller added:

This August 2024 core update “aims to better capture improvements that sites may have made, so we can continue to surface the best of the web,” Mueller added.

Guidance updated. Google posted several updates to its help page about core updates, including more in-depth guidance for those who may see changes after an update.

More details. Google had told us to expect a core update soon, after many publishers became concerned and anxious about the next update.

Since then we have seen a tremendous amount of Google search ranking volatility without a confirmation from Google on a core update or any update of its kind. In fact, this morning, I posted about even more intense Google Search ranking volatility on Search Engine Roundtable.

What to do if you are hit. In the past, Google has given advice on what to consider if you are negatively impacted by a core update. It has not offered much new advice here.

In short, write helpful content for people and not to rank in search engines.

Previous core updates. The previous core update – the March 2024 core update – was the largest core update, according to Google. It started March 5 and completed 45 days later on April 19.

Here’s a timeline and our coverage of recent core updates:

Other updates. We did have a spam update between the last core update and this core update. It was the June 2024 spam update that started on June 20 and took 7 days to finish rolling out, completing on June 27.

Why we care. Many sites have been hoping to see improvements ever since the September 2023 helpful content update rolled out. Most, if not all, of the sites hit in September did not see recoveries. They hoped the March 2024 core update would bring recoveries, but it did not.

Now, with this August 2024 core update, many of those sites hit by previous updates will be watching closely to see if their sites recover over the next few weeks.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google AI Overviews now show for signed-out users in the US

Thursday, August 15th, 2024

Google AI Overviews will now appear for all users in the United States, even if they are not signed into their Google account. Google has confirmed with Search Engine Land that AI Overviews are available for signed-out users in the US after we noticed Google testing it this morning.

What it looks like. I just conducted a search while not signed into Google using Chrome and Google showed me an AI Overview at the top of the search results.

Here is a screenshot:

More details. As a reminder, Google launched AI Overviews to US searchers back in May, after the Google I/O event.

Today, Google expanded AI Overviews to six new countries, added new links to the AI Overviews and added new Search Labs experimental tests for AI Overviews.

Jon Henshaw posted on LinkedIn yesterday that he was seeing these while signed out, so I dug in, and Google confirmed it is 100% rolled out to all US searchers. I should note that I personally do not see AI Overviews when searching while logged in to my Google Workspace account, but Google did tell me that Workspace accounts can see AI Overviews.

Why we care. As more and more searchers see AI Overviews, it might lead to different click behavior from the Google Search results. Google touts how AI Overviews should lead to more traffic to publishers, but publishers have good reason to be doubtful of such statements, especially since Google is not showing publishers distinct impression and click data from Search Console on AI Overviews.

This also means that SEO tools will be able to better track these AI Overviews, as Lily Ray pointed out. Mark Traphagen from seoClarity also confirmed that his toolset is seeing AI Overviews in incognito mode.

AI Overviews seem to be here for the foreseeable future and it is our job to ensure publishers and content creators get traffic from Google Search, even with AI Overviews being found at the top of those results.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google AI Overviews volatility continues: Rise, fall, repeat

Tuesday, August 13th, 2024

The visibility of AI Overviews in Google’s search results increased to 12% in July, only to fall back to 7% by the end of the month, according to new data from enterprise SEO platform BrightEdge.

While AI Overviews now have a “stronger presence” compared to June, Google’s AI-generated answers previously appeared for 15% of queries in May.

Why we care. Google’s AI Overviews continue to be a huge area of interest and concern among SEOs and content creators. Google is continuing to test and refine formats, citations and the amount of real estate AI Overviews occupy in the search results. The continued volatility clearly indicates AI Overviews are an unsolved problem for Google.

Decreases. AI Overviews no longer appear for the travel and entertainment queries BrightEdge is tracking using its Generative Parser. AI Overviews also now occupy 12.5% less vertical space in Google’s search results.

In July, AI Overviews were less likely to appear for:

Increases. Salary-related queries (e.g., “nurse salary,” “human resources manager salary”) saw a significant increase in AI Overviews – jumping from 7% to 85% in July. AI Overviews were also more likely to be shown for:

Other growth and volatility. Overlap between AI Overviews and Google’s organic search results increased overall:

The sources cited by AI Overviews also experienced significant fluctuation, according to BrightEdge. Some websites losing citations included Wikipedia (-5%), CDC (-15%), USA Today (-60%) and Forbes (-30%).

The latest data also found that AI Overviews showed for:

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google Ads API streamlines conversion adjustment uploads

Tuesday, August 13th, 2024

Google will soon make significant changes to its Google Ads API, simplifying the process of uploading conversion adjustments.

Why we care. This update removes a key friction point for advertisers and API users, allowing for more immediate data adjustments and potentially more accurate campaign optimization.

Driving the news. Starting Sept. 9, users can upload conversion adjustments immediately after the original conversion is recorded, eliminating the previous 24-hour waiting period.

Key changes:

What to watch:

  1. API users should remove logic that enforces waiting periods for adjustment uploads (see the sketch after this list).
  2. Applications should be modified to no longer track the removed error codes.
  3. Users relying on successful or failed event count metrics may need to adjust their logic, especially for v17.
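To put the change in concrete terms, here is a hedged sketch of an immediate conversion adjustment upload with the official google-ads Python client library; all IDs, timestamps and values are placeholders, and the pinned version string is an assumption:

```python
# Sketch: uploading a conversion restatement right after the original
# conversion is recorded, with no client-side waiting period.
# Customer ID, conversion action, order ID and values are placeholders.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage(version="v17")
service = client.get_service("ConversionAdjustmentUploadService")

adjustment = client.get_type("ConversionAdjustment")
adjustment.conversion_action = "customers/1234567890/conversionActions/987654321"
adjustment.order_id = "order-123"  # identifies the original conversion
adjustment.adjustment_type = client.enums.ConversionAdjustmentTypeEnum.RESTATEMENT
adjustment.adjustment_date_time = "2024-09-10 12:00:00+00:00"
adjustment.restatement_value.adjusted_value = 99.0

request = client.get_type("UploadConversionAdjustmentsRequest")
request.customer_id = "1234567890"
request.conversion_adjustments.append(adjustment)
request.partial_failure = True  # report per-row errors without failing the batch

response = service.upload_conversion_adjustments(request=request)
```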

Between the lines. This update streamlines the conversion adjustment process, potentially leading to more timely and accurate campaign data for advertisers.

What’s next. Google advised users to update their applications and processes ahead of Sept. 9.

Bottom line. While this change simplifies the upload process, it may require some adjustments to existing workflows and applications for Google Ads API users.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




Google extends deadline for Hotel Ads commission bidding sunset

Tuesday, August 13th, 2024

Google announced a grace period for its planned phase-out of commission-based bidding in Hotel Ads campaigns. The new deadline is Feb. 20, 2025.

Why we care. This extension gives hotel advertisers more time to adapt and carefully plan and implement new bidding strategies to ensure uninterrupted campaign performance.

Key details:

How to prepare:



Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




How Google Search ranking works

Tuesday, August 13th, 2024

By Dr. Mario Fischer

It should be clear to everyone that the Google documentation leak and the public documents from antitrust hearings do not really tell us exactly how the rankings work. 

The structure of organic search results is now so complex – not least due to the use of machine learning – that even the Google employees who work on the ranking algorithms say they can no longer explain why a given hit ranks at Position 1 or 2. We do not know the weighting of the many signals or their exact interplay.

Nevertheless, it is important to familiarize yourself with the structure of the search engine to understand why well-optimized pages do not rank or, conversely, why seemingly short and non-optimized results sometimes appear at the top of the rankings. The most important aspect is that you need to broaden your view of what is really important.

All the available information clearly supports this view. Anyone who is even marginally involved with ranking should incorporate these findings into their own mindset. You will see your websites from a completely different point of view and incorporate additional metrics into your analyses, planning and decisions.

To be honest, it is extremely difficult to draw a truly valid picture of the systems’ structure. Information on the web varies in its interpretation and sometimes uses different terms for the same thing.

An example: The system responsible for building a SERP (search results page) that optimizes space use is called Tangram. In some Google documents, however, it is also referred to as Tetris, which is probably a reference to the well-known game.

Over weeks of detailed work, I have viewed, analyzed, structured, discarded and restructured almost 100 documents many times. 

This article is not intended to be exhaustive or strictly accurate. It represents my best effort (i.e., “to the best of my knowledge and belief”) and a bit of Inspector Columbo’s investigative spirit. The result is what you see here.

A graphical overview of how Google ranking works, created by the author.

A new document waiting for Googlebot’s visit

When you publish a new website, it is not indexed immediately. Google must first become aware of the URL. This usually happens either via an updated sitemap or via a link placed there from an already-known URL. 

Frequently visited pages, such as the homepage, naturally bring this link information to Google’s attention more quickly. 

The trawler system retrieves new content and keeps track of when to revisit the URL to check for updates. This is managed by a component called the scheduler. The store server decides whether the URL is forwarded or whether it is placed in the sandbox. 

Google denies the existence of this box, but the recent leaks suggest that (suspected) spam sites and low-value sites are placed there. It should be mentioned that Google apparently forwards some of the spam, probably for further analysis to train its algorithms. 

Our fictitious document passes this barrier. Outgoing links from our document are extracted and sorted into internal and external outgoing links. Other systems primarily use this information for link analysis and PageRank calculation. (More on this later.)

Links to images are transferred to the ImageBot, which calls them up, sometimes with a significant delay, and they are placed (together with identical or similar images) in an image container. Trawler apparently uses its own PageRank to adjust the crawl frequency. If a website has more traffic, this crawl frequency increases (ClientTrafficFraction).
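None of these components are publicly documented in detail, but the behavior described maps onto a familiar engineering pattern: a priority queue of URLs whose revisit interval shrinks as importance and traffic grow. A toy sketch of that idea (the formula is invented for illustration; only the ClientTrafficFraction name comes from the leaked material):

```python
# Toy model of a crawl scheduler: revisit intervals shrink as a URL's
# importance (PageRank-like score) and traffic share grow. The scoring
# formula is invented; only the attribute name ClientTrafficFraction
# comes from the leaked documents.
import heapq
import time

class Scheduler:
    def __init__(self):
        self._queue = []  # (next_visit_timestamp, url)

    def schedule(self, url, pagerank, client_traffic_fraction):
        # Higher PageRank and traffic fraction -> shorter revisit interval.
        base_interval = 7 * 24 * 3600  # one week, in seconds
        boost = 1.0 + 10.0 * pagerank + 5.0 * client_traffic_fraction
        next_visit = time.time() + base_interval / boost
        heapq.heappush(self._queue, (next_visit, url))

    def next_due(self):
        # Pop the URL whose revisit time comes up soonest.
        return heapq.heappop(self._queue) if self._queue else None

scheduler = Scheduler()
scheduler.schedule("https://example.com/", pagerank=0.8, client_traffic_fraction=0.6)
scheduler.schedule("https://example.com/pencil", pagerank=0.1, client_traffic_fraction=0.05)
```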

Alexandria: The great library

Google’s indexing system, called Alexandria, assigns a unique DocID to each piece of content. If the content is already known, such as in the case of duplicates, a new ID is not created; instead, the URL is linked to the existing DocID.

Important: Google differentiates between a URL and a document. A document can be made up of multiple URLs that contain similar content, including different language versions, if they are properly marked. URLs from other domains are also sorted here. All the signals from these URLs are applied via the common DocID. 

For duplicate content, Google selects the canonical version, which appears in search rankings. This also explains why other URLs may sometimes rank similarly; the determination of the “original” (canonical) URL can change over time.

Figure 1: Alexandria collects URLs for a document.
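As a rough sketch of this URL-versus-document distinction, indexing can be modeled as content fingerprinting: URLs with matching fingerprints share one DocID, and signals accrue on the document rather than the URL. The data model below is a simplification (Google reportedly uses similarity hashing such as SimHash rather than exact hashes):

```python
# Toy model of Alexandria's URL -> DocID assignment: duplicate content
# resolves to one DocID and signals accumulate per document, not per URL.
# Exact SHA-256 fingerprinting is a simplification of similarity hashing.
import hashlib

doc_ids = {}      # content fingerprint -> DocID
url_to_doc = {}   # URL -> DocID
signals = {}      # DocID -> accumulated ranking signals

def index(url, content):
    fingerprint = hashlib.sha256(content.encode()).hexdigest()
    doc_id = doc_ids.setdefault(fingerprint, f"doc_{len(doc_ids)}")
    url_to_doc[url] = doc_id
    signals.setdefault(doc_id, [])
    return doc_id

# Two URLs serving identical content end up on the same DocID,
# so signals collected for either URL benefit the one document.
assert index("https://a.example/pencil", "All about pencils") == \
       index("https://a.example/pencil-mirror", "All about pencils")
```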

As there is only this one version of our document on the web, it is given its own DocID. 

Individual segments of our site are searched for relevant keyword phrases and pushed into the search index. There, the “hit list” (all the important words on the page) is first sent to the direct index, which summarizes the keywords that occur multiple times per page. 

Now an important step takes place. The individual keyword phrases are integrated into the word catalog of the inverted index (word index). The word pencil and all important documents containing this word are already listed there. 

In simple terms, as our document prominently contains the word pencil multiple times, it is now listed in the word index with its DocID under the entry “pencil.” 

The DocID is assigned an algorithmically calculated IR (information retrieval) score for pencil, later used for inclusion in the Posting List. In our document, for example, the word pencil has been marked in bold in the text and is contained in H1 (stored in AvrTermWeight). Such and other signals increase the IR score. 
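In classic information retrieval terms, this is an inverted index: a map from term to a posting list of (DocID, score) pairs kept sorted by score. A simplified sketch (the scoring weights are invented; the real IR score blends many more signals):

```python
# Minimal inverted index: term -> posting list of (doc_id, ir_score),
# kept sorted by score. The scoring below (frequency plus bonuses for
# bold text and H1 occurrence) is a crude stand-in for the IR score.
from collections import defaultdict

inverted_index = defaultdict(list)

def add_document(doc_id, term_stats):
    # term_stats: term -> {"count": int, "in_h1": bool, "bold": bool}
    for term, stats in term_stats.items():
        score = stats["count"]
        if stats.get("in_h1"):
            score += 5   # illustrative weights only
        if stats.get("bold"):
            score += 2
        inverted_index[term].append((doc_id, score))
        inverted_index[term].sort(key=lambda entry: entry[1], reverse=True)

add_document("doc_pencil", {"pencil": {"count": 12, "in_h1": True, "bold": True}})

# The posting list for a term is then just a slice of the sorted entries:
top_1000 = inverted_index["pencil"][:1000]
```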

Google moves documents considered important to the so-called HiveMind, i.e., the main memory. Google uses both fast SSDs and conventional HDDs (referred to as TeraGoogle) for long-term storage of information that doesn’t require quick access. Documents and signals are stored in the main memory. 

Notably, experts estimate that before the recent AI boom, about half of the world’s web servers were housed at Google. A vast network of interconnected clusters allows millions of main memory units to work together. A Google engineer once noted at a conference that, in theory, Google’s main memory could store the entire web. 

It’s interesting to note that links, including backlinks, stored in HiveMind seem to carry significantly more weight. For example, links from important documents are given much greater importance, while links from URLs in TeraGoogle (HDD) may be weighted less or possibly not considered at all.

Additional information and signals for each DocID are stored dynamically in the repository (PerDocData). Many systems access this later when it comes to fine-tuning relevance. It is useful to know that the last 20 versions of a document are stored there (via CrawlerChangerateURLHistory). 

Google has the ability to evaluate and assess changes over time. If you want to completely change the content or topic of a document, you would theoretically need to create 20 intermediate versions to override the old content signals. This is why reviving an expired domain (a domain that was previously active but has since been abandoned or sold, perhaps due to insolvency) does not offer any ranking advantage.
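Mechanically, such a history is just a bounded per-document log: every recrawl appends a snapshot, the oldest of the 20 falls away, and old topic signals fade only once enough new versions have displaced them. A minimal sketch, with the storage model assumed:

```python
# Sketch: per-DocID version history capped at the last 20 snapshots,
# as suggested by the CrawlerChangerateURLHistory attribute.
# The storage model here is an assumption for illustration.
from collections import defaultdict, deque

history = defaultdict(lambda: deque(maxlen=20))  # DocID -> snapshots

def record_crawl(doc_id, content_snapshot):
    # Once 20 snapshots exist, each new crawl evicts the oldest one,
    # which is why old content signals take many versions to disappear.
    history[doc_id].append(content_snapshot)
```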

If a domain’s Admin-C changes and its thematic content changes at the same time, a machine can easily recognize this at this point. Google then sets all signals to zero, and the supposedly valuable old domain no longer offers any advantages over a completely newly registered domain.

Figure 2: In addition to the leaks, the evidence documents from hearings and trials of the U.S. judiciary against Google are a useful source for research. You can even find internal emails there.

QBST: Someone is looking for ‘pencil’

When someone enters “pencil” as a search term in Google, QBST begins its work. The search phrase is analyzed, and if it contains multiple words, the relevant ones are sent to the word index for retrieval. 

The process of term weighting is quite complex, involving systems like RankBrain, DeepRank (formerly BERT) and RankEmbeddedBERT. The relevant terms, such as “pencil,” are then passed on to the Ascorer for further processing. 

Ascorer: The ‘green ring’ is created

The Ascorer retrieves the top 1,000 DocIDs for “pencil” from the inverted index, ranked by IR score. According to internal documents, this list is referred to as a “green ring.” Within the industry, it is known as a posting list. 

The Ascorer is part of a ranking system known as Mustang, where further filtering occurs through methods such as deduplication using SimHash (a type of document fingerprint), passage analysis, systems for recognizing original and helpful content, etc. The goal is to refine the 1,000 candidates down to the “10 blue links” or the “blue ring.” 

Our document about pencils is on the posting list, currently ranked at 132. Without additional systems, this would be its final position.

Superroot: Turn 1,000 into 10!

The Superroot system is responsible for re-ranking, carrying out the precision work of reducing the “green ring” (1,000 DocIDs) to the “blue ring” with only 10 results.

Twiddlers and NavBoost perform this task. Other systems are probably in use here, but their exact details are unclear due to vague information.

Figure 3: Mustang generates 1,000 potential results and Superroot filters them down to 10 results.

Filter after filter: The Twiddlers

Various documents indicate that several hundred Twiddler systems are in use. Think of a Twiddler as a plug-in similar to those in WordPress. 

Each Twiddler has its own specific filter target. They are designed this way because they are relatively easy to create and don’t require changes to the complex ranking algorithms in Ascorer.

Modifying these algorithms is challenging and would involve extensive planning and programming due to potential side effects. In contrast, Twiddlers operate in parallel or sequentially and are unaware of the activities of other Twiddlers.

There are basically two types of Twiddlers: fast PreDoc Twiddlers, which can work across the entire posting list of several hundred results because they need little additional information, and slower “Lazy” Twiddlers, which must load more data for each document and are therefore more computationally expensive.

For this reason, the PreDoc Twiddlers first reduce the posting list to significantly fewer entries before the slower filters take over. This saves an enormous amount of computing capacity and time.

Some Twiddlers adjust the IR score, either positively or negatively, while others modify the ranking position directly. Since our document is new to the index, a Twiddler designed to give recent documents a better chance of ranking might, for instance, multiply the IR score by a factor of 1.7. This adjustment could move our document from the 132nd place to the 81st place.

Another Twiddler enhances diversity (strideCategory) in the SERPs by devaluing documents with similar content. As a result, several documents ahead of us lose their positions, allowing our pencil document to move up 12 spots to 69. Additionally, a Twiddler that limits the number of blog pages to three for specific queries boosts our ranking to 61.

Figure 4: Two types of Twiddlers – over 100 of them reduce the potential search results and re-sort them.

Our page received a zero (for “Yes”) for the CommercialScore attribute. The Mustang system identified a sales intention during analysis. Google likely knows that searches for “pencil” are frequently followed by refined searches like “buy pencil,” indicating a commercial or transactional intent. A Twiddler designed to account for this search intent adds relevant results and boosts our page by 20 positions, moving us up to 41.

Another Twiddler comes into play, enforcing a “page three penalty” that limits pages suspected of being spam to a maximum rank of 31 (Page 3). The best position for a document is defined by the BadURL-demoteindex attribute, which prevents ranking above this threshold. Attributes like DemoteForContent, DemoteForForwardlinks and DemoteForBacklinks are used for this purpose. As a result, three documents above us are demoted, allowing our page to move up to Position 38.

Our document could have been devalued, but to keep things simple, we’ll assume it remains unaffected. Let’s consider one last Twiddler that assesses how relevant our pencil page is to our domain based on embeddings. Since our site focuses exclusively on writing instruments, this works to our advantage and negatively impacts 24 other documents.

For instance, imagine a price comparison site with a diverse range of topics but with one “good” page about pencils. Because this page’s topic differs significantly from the site’s overall focus, it would be devalued by this Twiddler. 

Attributes like siteFocusScore and siteRadius reflect this thematic distance. As a result, our IR score is boosted once more, and other results are downgraded, moving us up to Position 14.
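The walkthrough above can be compressed into a toy re-ranking pipeline: independent Twiddlers that each adjust an IR score or filter results, run in sequence, and know nothing about one another. The rules and multipliers below mirror the example numbers and are purely illustrative:

```python
# Toy Twiddler pipeline: each Twiddler independently adjusts IR scores
# or filters results; Superroot then keeps the top 10 ("blue ring").
# Multipliers and caps mirror the walkthrough and are illustrative.

def freshness_twiddler(results):
    for r in results:
        if r.get("is_new"):
            r["ir_score"] *= 1.7  # give recent documents a better chance
    return sorted(results, key=lambda r: r["ir_score"], reverse=True)

def blog_limit_twiddler(results, max_blogs=3):
    kept, blogs = [], 0
    for r in results:
        if r.get("is_blog"):
            blogs += 1
            if blogs > max_blogs:
                continue  # drop blog pages beyond the cap
        kept.append(r)
    return kept

def page_three_penalty_twiddler(results):
    # Push suspected spam behind the first 30 positions, letting others move up.
    clean = [r for r in results if not r.get("suspected_spam")]
    spam = [r for r in results if r.get("suspected_spam")]
    return clean[:30] + spam + clean[30:]

def rerank(results, twiddlers):
    for twiddler in twiddlers:  # Twiddlers are unaware of each other
        results = twiddler(results)
    return results[:10]  # the "blue ring" handed on to NavBoost
```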

As mentioned, Twiddlers serve a wide range of purposes. Developers can experiment with new filters, multipliers or specific position restrictions. It’s even possible to rank a result specifically either in front of or behind another result. 

One of Google’s leaked internal documents warns that certain Twiddler features should only be used by experts and after consulting with the core search team.

“If you think you understand how they work, trust us: you don’t. We’re not sure that we do either.”

Leaked “Twiddler Quick Start Guide – Superroot” document

There are also Twiddlers that only create annotations and add these to the DocID on the way to the SERP. An image then appears in the snippet, for example, or the title and/or description are dynamically rewritten later.

If you wondered during the pandemic why your country’s national health authority (such as the Department of Health and Human Services in the U.S.) consistently ranked first in COVID-19 searches, it was due to a Twiddler that boosts official resources based on language and country using queriesForWhichOfficial.

You have little control over how Twiddler reorders your results, but understanding its mechanisms can help you better interpret ranking fluctuations or “inexplicable rankings.” It’s valuable to regularly review SERPs and note the types of results. 

For example, do you consistently see only a certain number of forum or blog posts, even with different search phrases? How many results are transactional, informational, or navigational? Are the same domains repeatedly appearing, or do they vary with slight changes in the search phrase?

If you notice that only a few online stores are included in the results, it might be less effective to try ranking with a similar site. Instead, consider focusing on more information-oriented content. However, don’t jump to conclusions just yet, as the NavBoost system will be discussed later.

Google’s quality raters and RankLab

Several thousand quality raters work for Google worldwide to evaluate certain search results and test new algorithms and/or filters before they go “live.”

Google explains, “Their ratings don’t directly influence ranking.” 

This is essentially correct, but these votes do have a significant indirect impact on rankings.

Here’s how it works: Raters receive URLs or search phrases (search results) from the system and answer predetermined questions, typically assessed on mobile devices. 

For example, they might be asked, “Is it clear who wrote this content and when? Does the author have professional expertise on this topic?” The answers to these questions are stored and used to train machine learning algorithms. These algorithms analyze the characteristics of good and trustworthy pages versus less reliable ones.

This approach means that instead of relying on Google search team members to create criteria for ranking, algorithms use deep learning to identify patterns based on the training provided by human evaluators.

Let’s consider a thought experiment to illustrate this. Imagine people intuitively rate a piece of content as trustworthy if it includes an author’s picture, full name, and a LinkedIn biography link. Pages lacking these features are perceived as less trustworthy.

If a neural network is trained on various page features alongside these “Yes” or “No” ratings, it will identify this characteristic as a key factor. After several positive test runs, which typically last at least 30 days, the network might start using this feature as a ranking signal. As a result, pages with an author image, full name, and LinkedIn link might receive a ranking boost, potentially through a Twiddler, while pages without these features could be devalued.
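This thought experiment maps directly onto a standard supervised-learning setup: page features in, rater verdicts as labels, learned weights out. A hedged sketch with scikit-learn, where the features and training rows are invented to mirror the example:

```python
# Sketch of the thought experiment: train a classifier on rater verdicts
# so it learns which page features correlate with "trustworthy".
# Features and training rows are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Columns: [has_author_image, has_full_name, has_linkedin_link]
X = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 0, 1],
]
y = [1, 1, 0, 0, 1, 0]  # rater verdicts: 1 = trustworthy

model = LogisticRegression().fit(X, y)

# The learned coefficients reveal which features the model treats as
# trust signals -- the pattern a Twiddler could later act on.
print(dict(zip(["author_image", "full_name", "linkedin"], model.coef_[0])))
```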

Google’s official stance of not focusing on authors could align with this scenario. However, leaked information reveals attributes like isAuthor and concepts such as “author fingerprinting” through the AuthorVectors attribute, which makes the idiolect (the individual use of terms and formulations) of an author distinguishable or identifiable – again via embeddings. 

Raters’ evaluations are compiled into an “information satisfaction” (IS) score. Although many raters contribute, an IS score is only available for a small fraction of URLs. For other pages with similar patterns, this score is extrapolated for ranking purposes.

Google notes, “A lot of documents have no clicks but can be important.” When extrapolation isn’t possible, the system automatically sends the document to raters to generate a score.

The term “golden” is mentioned in relation to quality raters, suggesting there might be a gold standard for certain documents or document types. It can be inferred that aligning with the expectations of human testers could help your document meet this gold standard. Additionally, it’s likely that one or more Twiddlers might provide a significant boost to DocIDs deemed “golden,” potentially pushing them into the top 10.

Quality raters are typically not full-time Google employees and may work through external companies. In contrast, Google’s own experts operate within the RankLab, where they conduct experiments, develop new Twiddlers and evaluate whether these or refined Twiddlers improve result quality or merely filter out spam. 

Proven and effective Twiddlers are then integrated into the Mustang system, where complex, computationally intensive and interconnected algorithms are used.



But what do users want? NavBoost can fix that!

Our pencil document hasn’t fully succeeded yet. Within Superroot, another core system, NavBoost, plays a significant role in determining the order of search results. NavBoost uses “slices” to manage different data sets for mobile, desktop, and local searches.

Although Google has officially denied using user clicks for ranking purposes, FTC documents reveal an internal email instructing that the handling of click data must remain confidential.

This shouldn’t be held against Google, as the denial of using click data involves two key aspects. Firstly, acknowledging the use of click data could provoke media outrage over privacy concerns, portraying Google as a “data octopus” tracking our online activity. However, the intent behind using click data is to obtain statistically relevant metrics, not to monitor individual users. While data protection advocates might view this differently, this perspective helps explain the denial.

FTC documents confirm that click data is used for ranking purposes and frequently mention the NavBoost system in this context (54 times in the April 18, 2023 hearing). An official hearing in 2012 also revealed that click data influences rankings.

Figure 5: Since August 2012 (!), it was officially clear that click data changes the ranking.

It has been established that both click behavior on search results and traffic on websites or webpages impact rankings. Google can easily evaluate search behavior, including searches, clicks, repeat searches and repeat clicks, directly within the SERPs.

There has been speculation that Google could infer domain movement data from Google Analytics, leading some to avoid using this system. However, this theory has limitations. 

First, Google Analytics does not provide access to all transaction data for a domain. More importantly, with over 60% of people using the Google Chrome browser (over three billion users), Google collects data on a substantial portion of web activity. 

This makes Chrome a crucial component in analyzing web movements, as highlighted in hearings. Additionally, Core Web Vitals signals are officially collected through Chrome and aggregated into the “chromeInTotal” value.

The negative publicity associated with “monitoring” is one reason for the denial, while another is the concern that evaluating click and movement data could encourage spammers and tricksters to fabricate traffic using bot systems to manipulate rankings. While the denial might be frustrating, the underlying reasons are at least understandable.

Let’s first examine clicks in search results. Each ranking position in the SERPs has an average expected click-through rate (CTR), serving as a performance benchmark. For example, according to an analysis by Johannes Beus presented at this year’s CAMPIXX in Berlin, the organic Position 1 receives an average of 26.2% of clicks, while Position 2 gets 15.5%.

If a snippet’s actual CTR significantly falls short of the expected rate, the NavBoost system registers this discrepancy and adjusts the ranking of the DocIDs accordingly. If a result historically generates significantly more or fewer clicks than expected, NavBoost will move the document up or down in the rankings as needed (see Figure 6).

This approach makes sense because clicks essentially represent a vote from users on the relevance of a result based on the title, description and domain. This concept is even detailed in official documents, as illustrated in Figure 7.

Figure 6: If the expected_CTR deviates significantly from the actual value, the rankings are adjusted accordingly. (Data source: J. Beus, SISTRIX, with editorial overlays)
Figure 7: Slide from a Google presentation. (Source: Trial Exhibit – UPX0228, U.S. and Plaintiff States v. Google LLC)
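Reduced to its core, the mechanism compares expected and observed CTR per position and nudges documents whose deviation is significant. A toy sketch using the benchmark figures quoted above (the deviation threshold and step size are invented):

```python
# Toy NavBoost: compare a result's observed CTR with the expected CTR
# for its position and nudge the ranking on significant deviations.
# Expected values follow the SISTRIX figures quoted in the text; the
# 30% deviation threshold and one-position step are invented.
EXPECTED_CTR = {1: 0.262, 2: 0.155, 5: 0.04}

def navboost_adjust(position, impressions, clicks):
    expected = EXPECTED_CTR.get(position)
    if expected is None or impressions == 0:
        return position  # no benchmark or no data: leave unchanged
    actual = clicks / impressions
    if actual > expected * 1.3:    # far more clicks than expected
        return max(1, position - 1)
    if actual < expected * 0.7:    # far fewer clicks than expected
        return position + 1
    return position

# A Position-2 result earning a Position-1-like CTR gets moved up:
print(navboost_adjust(position=2, impressions=10_000, clicks=2_500))  # -> 1
```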

Since our pencil document is still new, there are no available CTR values yet. It’s unclear whether CTR deviations are ignored for documents with no data, but this seems likely, as the goal is to incorporate user feedback. Alternatively, the CTR might initially be estimated based on other values, similar to how the quality factor is handled in Google Ads.

Based on the leaked information, it appears that Google uses extensive data from a page’s “environment” to estimate signals for new, unknown pages.

For instance, NearestSeedversion suggests that the PageRank of the home page HomePageRank_NS is transferred to new pages until they develop their own PageRank. Additionally, pnavClicks seems to be used to estimate and assign the probability of clicks through navigation.

Calculating and updating PageRank is time-consuming and computationally intensive, which is why the PageRank_NS metric is likely used instead. “NS” stands for “nearest seed,” meaning that a set of related pages shares a PageRank value, which is temporarily or permanently applied to new pages.

It’s probable that values from neighboring pages are also used for other critical signals, helping new pages climb the rankings despite lacking significant traffic or backlinks. Many signals are not attributed in real-time but may involve a notable delay.

The click metrics for documents are apparently stored and evaluated over a period of 13 months (one month overlap in the year for comparisons with the previous year), according to the latest findings. 

Since our hypothetical domain has strong visitor metrics and substantial direct traffic from advertising, as a well-known brand (which is a positive signal), our new pencil document benefits from the favorable signals of older, successful pages. 

As a result, NavBoost elevates our ranking from 14th to 5th place, placing us in the “blue ring” or top 10. This top 10 list, including our document, is then forwarded to the Google Web Server along with the other nine organic results.

The GWS: Where everything comes to an end and a new beginning

The Google Web Server (GWS) is responsible for assembling and delivering the search results page (SERP). This includes 10 blue links, along with ads, images, Google Maps views, “People also ask” sections and other elements.

The Tangram system handles geometric space optimization, calculating how much space each element requires and how many results fit into the available “boxes.” The Glue system then arranges these elements in their proper places.
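As a rough analogy, Tangram's task resembles bin packing: each SERP element declares the space it needs, and elements are placed until the page budget is spent, with Glue handling the final arrangement. A toy sketch in one dimension (element names and pixel heights are invented):

```python
# Toy model of Tangram/Glue: fit SERP elements into a fixed vertical
# budget; Glue would then assign each surviving element its position.
# Element names and pixel heights are invented for illustration.
PAGE_BUDGET = 2000  # available vertical pixels

FIXED_ELEMENTS = [("ads", 250), ("ai_overview", 400), ("people_also_ask", 300)]
ORGANIC_HEIGHT = 120  # per blue link

def compose_serp(organic_count=10):
    used, layout = 0, []
    queue = FIXED_ELEMENTS + [("organic_result", ORGANIC_HEIGHT)] * organic_count
    for name, height in queue:
        if used + height <= PAGE_BUDGET:
            layout.append(name)
            used += height
    return layout

# With this budget, only 8 of the 10 organic results fit on the page.
print(compose_serp())
```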

Our pencil document, currently in 5th place, is part of the organic results. However, the CookBook system can intervene at the last moment. This system includes FreshnessNode, InstantGlue (reacts in periods of 24 hours with a delay of around 10 minutes) and InstantNavBoost. These components generate additional signals related to topicality and can adjust rankings in the final moments before the page is displayed.

Let’s say a German TV program about 250 years of Faber-Castell and the myths surrounding the word “pencil” begins to air. Within minutes, thousands of viewers grab their smartphones or tablets to search online. This is a typical scenario. FreshnessNode detects the surge in searches for “pencil” and, noting that users are seeking information rather than making purchases, adjusts the rankings accordingly. 

In this exceptional situation, InstantNavBoost removes all transactional results and replaces them with informational ones in real time. InstantGlue then updates the “blue ring,” causing our previously sales-oriented document to drop out of the top rankings and be replaced by more relevant results.

Figure 8: A television program on the origins of the word “pencil” to celebrate 250 years of Faber-Castell, a well-known German pencil manufacturer.

Unfortunate as it may be, this hypothetical end to our ranking journey illustrates an important point: achieving a high ranking isn’t solely about having a great document or implementing the right SEO measures with high-quality content. 

Rankings can be influenced by a variety of factors, including changes in search behavior, new signals for other documents and evolving circumstances. Therefore, it’s crucial to recognize that having an excellent document and doing a good job with SEO is just one part of a broader and more dynamic ranking landscape.

The process of compiling search results is extremely complex, influenced by thousands of signals. With numerous tests conducted live in the RankLab using Twiddlers, even backlinks to your documents can be affected.

These documents might be moved from HiveMind to less critical levels, such as SSDs or even TeraGoogle, which can weaken or eliminate their impact on rankings. This can shift ranking scales even if nothing has changed with your own document.

Google’s John Mueller has emphasized that a drop in ranking often doesn’t mean you’ve done anything wrong. Changes in user behavior or other factors can alter how results perform.

For instance, if searchers start preferring more detailed information and shorter texts over time, NavBoost will automatically adjust rankings accordingly. However, the IR score in the Alexandria system or Ascorer remains unchanged.

One key takeaway is that SEO must be understood in a broader context. Optimizing titles or content won’t be effective if a document and its search intent don’t align.

The impact of Twiddlers and NavBoost on rankings can often outweigh traditional on-page, on-site or off-site optimizations. If these systems limit a document’s visibility, additional on-page improvements will have minimal effect.

However, our journey doesn’t end on a low note. The impact of the TV program about pencils is temporary. Once the search surge subsides, FreshnessNode will no longer affect our ranking, and we’ll settle back at 5th place. 

As we restart the cycle of collecting click data, a CTR of around 4% is expected for Position 5 (based on Johannes Beus from SISTRIX). If we can maintain this CTR, we can anticipate staying in the top ten. All will be well.

Key SEO takeaways

A version of this article was originally published in German in August 2024 in Website Boosting, Issue 87.

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing




The complete guide to optimizing content for SEO (with checklist)

Monday, August 12th, 2024


Creating high-quality, helpful content that drives search traffic is more important than ever.

There’s nothing worse than pouring your heart and soul into a wonderful guide or blog post that nobody ever sees or churning out pieces with little value that disappoint or alienate your potential audience.

Standing out from the crowd requires a delicate balance between optimizing content for search and creating pieces that truly resonate with users. 

This comprehensive guide will walk you through the essential steps to achieve this balance, ensuring your content ranks well and provides genuine value to your audience.

Follow along and check off each step as you go to create content that both search engines and readers will love.

Step 1: Optimize your content strategy for search

Your SEO and content strategy should complement each other. Leaning too heavily on one or the other could impact your share of organic traffic, so a balanced approach works best.

Content marketing is a competitive game, so you must also ensure your piece is memorable or “sticky.” A great way to review this is by using the “Made to Stick SUCCESS model” to ensure your content has as many of these traits as possible:

High-quality, helpful content: 

Checklist: Optimize your content strategy for search

Step 2: Research and design helpful content

SEO is only successful with thorough research, which should help shape your ideas before you start creating content. 

It’s important to go beyond search volume, using data to help you better understand your target audience. 

Before you start to write content for search engines or users, you should carry out:

Checklist: Research and design helpful content

Step 3: Human vs. AI content generation

AI isn’t a shortcut to writing content, and high-quality content always needs a human touch. Most of today’s content likely uses both humans and AI as part of the process, and a combined approach can be effective. 

If you have used AI during your content creation process, you should: 

Checklist: Human vs. AI content generation

Step 4: Create correct, accessible and readable content

When you’re working on content, there are some basics that shouldn’t be ignored. Good spelling, grammar and readability are a must. 
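If you want a repeatable check rather than gut feel, readability formulas are easy to automate. A small sketch using the third-party textstat package (the 60-point Flesch Reading Ease target is a common plain-English rule of thumb, not an SEO requirement):

```python
# Quick automated readability check using the textstat package.
# The 60+ Flesch Reading Ease target is a common "plain English"
# rule of thumb, not an official SEO threshold.
import textstat

draft = (
    "Helpful content of a high standard should be easy to read. "
    "Short sentences and familiar words keep readers engaged."
)

score = textstat.flesch_reading_ease(draft)
print(f"Flesch Reading Ease: {score:.1f}")
if score < 60:
    print("Consider shortening sentences and simplifying vocabulary.")
```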

Helpful content of a high standard should always be:

Checklist: Create correct, accessible and readable content



Step 5: Keyword and entity usage

If you’ve followed the steps above, you’ll be armed with a content plan based on research and many relevant keywords. But you can still fall into common traps, like focusing too much on a primary keyword or overusing search terms.   
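One way to catch overuse before publishing is to measure how much of the copy a single term accounts for. A small sketch (the 3% ceiling is an illustrative rule of thumb, not a number from Google):

```python
# Flag primary-keyword overuse by measuring its share of all words.
# The 3% ceiling is an illustrative rule of thumb, not a Google number.
import re

def keyword_share(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    return words.count(keyword.lower()) / max(len(words), 1)

copy = "Pencils for artists. Our pencil guide compares pencil brands."
share = keyword_share(copy, "pencil")
if share > 0.03:
    print(f"'pencil' is {share:.1%} of the copy; consider synonyms and related entities.")
```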

Helpful, great quality website content should:

Checklist: Keyword and entity usage

Step 6: Consider E-E-A-T

Having strong E-E-A-T (experience, expertise, authoritativeness and trustworthiness) can greatly impact SEO success. Trustworthy, authoritative content should:  

Checklist: Consider E-E-A-T

Step 7: Add multimedia elements

Depending on your topic, other media may help to enrich your content for users and improve your SEO efforts. 

Check the search engines to see what type of content they are already ranking for a certain search query. Perhaps video or visual content is prevalent. 

If so, make sure your content includes some original video or imagery too, where possible. Consider whether your pages would benefit from:

Checklist: Add multimedia elements

Step 8: A piece of a bigger picture

Every piece of content is related to others. Great content will add to your area of expertise and will relate to other pieces of content on your site. 

It will also draw references from other trusted resources on a topic. And in time, it might become a piece that others refer to.

While creating your content, it’s important to consider:

Checklist: A piece of a bigger picture (relationships)

Step 9: Technical SEO content issues

While technical SEO issues can affect your whole site, there are some specific ones to look out for to help optimize for SEO performance. As a quick overview, you should make sure:

Checklist: Technical SEO content issues

Step 10: Review and rework

The best content is never finished. The world changes fast, and that leaves content online that is dated, inaccurate or, at worst, misleading. It’s important to be aware of this and revisit your content often. 

Here are some of the key times to evaluate your content: 

Checklist: Review and rework

Why content quality matters

Today’s search results are in danger of being cluttered with AI-generated content.

The landscape is also more competitive than ever because content can be created faster than ever. That’s why the quality of your content matters. If it’s high quality and helpful, it’s likely to:

Ultimately, quality content is about your reputation. But it’s also crucial for gaining a competitive edge in the search engine results pages.

Next time you’re optimizing content, remember to think past keyword ideas and take the big picture into account. 

Courtesy of Search Engine Land: News & Info About SEO, PPC, SEM, Search Engines & Search Marketing



