
The Slow Erosion of Google’s Deserved Trust

For the first time in decades, there’s a crisis in confidence that Google can satisfy its users with search results that don’t suck. What does this mean for brands, publishers, marketers, and searchers?
by Blair MacGregor

The late Charlie Munger, famed investor, was not a techie. Or an SEO, for that matter.

Along with his partner for decades at Berkshire Hathaway, Warren Buffett, Munger exercised his preference for simplicity in his investing strategy, avoiding tech stocks as a general rule.

As he put it in a talk delivered at USC Business School in 1994:

Warren and I don’t feel like we have any great advantage in the high-tech sector. In fact, we feel like we’re at a big disadvantage in trying to understand the nature of technical developments in software, computer chips or what have you. So we tend to avoid that stuff, based on our personal inadequacies.

Despite his and Buffett’s investing preferences, Munger did remark on Google a few times: namely in 2019, when he expressed contrition for not jumping on it earlier (as we all do!):

“I don’t mind not having caught Amazon early,” Munger said. “The guy (Jeff Bezos) is kind of a miracle worker, it’s very peculiar. ... But I feel like a horse’s ass for not identifying Google earlier ... We screwed up.”

It’s another Munger concept, though, that concerns us here: deserved trust. Unlike general trust, which can be granted to institutions like governments, religious bodies, or established brands based on assumptions or reputation, deserved trust is built through a proven track record of integrity and dependability.

In other words, the kind of consistent, reliable, ethical behavior where extensive procedure and controls aren’t necessary.

I would argue that we’re light-years away from that in today’s Search landscape: not just with respect to extensive procedure (have you read all 170 pages of Google’s Quality Rater Guidelines lately?) but in expecting the kind of consistent, reliable, and ethical behavior that was a hallmark of Google Search’s rise to prominence as the pre-eminent search engine in the early 2000s.

If you were to visualize the ideal relationship between all of the major parties in the Search ecosystem, it would probably look something like this:

Now, perhaps this has always been more aspirational than anything that's existed in reality. But directionally, I think this maps to what Google's north star of a healthy ecosystem is for everyone involved.

That's not what we have today. Instead, the feedback loops inherent to today’s Search landscape look a little more like this:

Google’s failing all of us right now. And while we all have to work with how the rules of the game have been set, it doesn’t mean we have to stand still and not comment on where the situation stands and what we might collectively be able to do about it.

In this piece, I’ll sum up the inconsistent, unreliable, and (in some cases) unethical nature of Google Search right now and how the ramifications have been felt throughout the Search ecosystem.

And more importantly, I'll explain why this breach of deserved trust means they can no longer be given the benefit of the doubt as to whether their search results are actually satisfying most users.

(Part Two will get into potential solutions, including what an independent accounting of search results might look like. Part Three will dive into actionable steps people can take to plan for a world where Google continues to erode trust and, as a result, potentially loses users to other platforms and LLMs.)


We Don’t Know What We Don’t Know

In its 2024 Q3 earnings call in October, Alphabet CEO Sundar Pichai relayed positive news across several of its business units: Cloud, YouTube, Gemini and even Waymo, the autonomous driving service started under the company’s banner in 2009.

He also made two claims specific to Search. 

First, he reported a year-over-year 12% increase in search revenue.

Second, he claimed that people were finding AI Overviews (AIOs) satisfying based merely on the fact that they were conducting more searches:

We're seeing strong engagement, which is increasing overall search usage and user satisfaction. People are asking longer and more complex questions, and exploring a wider range of websites. What’s particularly exciting is that this growth actually increases over time, as people learn that Google can answer more of their questions.

Now maybe people are, on balance, finding AIOs satisfying for what they searched for.

But we can’t know that for sure from this quote alone.

Why?

Well, for one thing, Google isn’t releasing any actual user sentiment data about AIOs to the public, just as it doesn’t for SERPs more generally.

What exists in the absence of that data, however, is a massive and growing tranche of screenshots, memes, and anecdotes suggesting that at least some healthy percentage of typical Google users not only don’t find AIOs helpful but find they actively detract from the Search experience.

To put more concrete numbers to it, according to an NPR report, there are now over 20,000 people actively using an extension called “Bye Bye, Google AI,” which hides AIOs from the Search experience.

That may only be a fraction of the 64 million people currently using AdBlock, the most popular ad-blocking Chrome extension. But the fact that enough users to fill most major arenas in North America are already blocking AIOs, less than a year after they left public beta, strikes me as a concerning trend.
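To make concrete what an extension like this is doing: conceptually, it’s just a content script that finds the AI Overview container on the results page and hides it. Here’s a minimal sketch, with the caveat that the selector below is a hypothetical placeholder (Google’s SERP markup is undocumented and changes often) and the real extension’s implementation may differ:

```typescript
// Minimal sketch of an AIO-hiding content script. The selector is a
// hypothetical placeholder; a real extension would maintain its own
// list and update it as Google's markup changes.
const AIO_SELECTOR = "[data-ai-overview]"; // hypothetical selector

function hideAIOverviews(): void {
  document.querySelectorAll<HTMLElement>(AIO_SELECTOR).forEach((el) => {
    el.style.display = "none"; // hide rather than remove, to avoid breaking page scripts
  });
}

// Hide anything present at load, then keep watching, since much of the
// results page is rendered dynamically after the initial page load.
hideAIOverviews();
new MutationObserver(hideAIOverviews).observe(document.body, {
  childList: true,
  subtree: true,
});
```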

Furthermore, independent data about SERP quality more generally is surprisingly hard to come by.

The American Customer Satisfaction Index (ACSI) reported that in July 2024, Google's customer satisfaction score was 81 out of 100, which is pretty good. And that was up from 80 the year before. Over the 20-year period these scores have been recorded, 2022 was the lowest on record, with a score of 75, followed by 2015 (76).

On the other hand, an independent study conducted by WalletHub this year found that only 41% of searches for popular banking and credit card terms met the user’s intent, and 63% of users surveyed believed that search results had been better the year before.

So we know there’s at least a sizable number of people who are troubled by the inconsistency and unreliability of what Google chooses to showcase in the SERPs.

Secondly, there’s the (potentially) unethical.

In the wake of the antitrust litigation filed last year by the U.S. Department of Justice, it was revealed that in 2019, some in Google’s leadership circles, namely (the now-former) Head of Ads Prabhakar Raghavan, explicitly strove to artificially increase the number of search queries, even if that meant creating a worse experience for users.

This context suddenly becomes very relevant when we look at Pichai's current claims about AI overviews. When he says that 'increased search usage' proves user satisfaction with AIOs, it echoes the same potentially problematic logic: equating more searches with better user experience.

But just as Raghavan’s team was willing to sacrifice user experience for query volume in 2019, we have to question whether more searches today truly indicate satisfaction, or whether they might instead signal users struggling to find what they need, rephrasing queries, or working around unhelpful AI overviews.

So what’s actually happening here?

To what degree are users more broadly dissatisfied with the state of AIOs and Google Search? 

And what does it mean for us (brands and marketers) trying to reach them?


We’re All In The Same Boat

All of the key players who use or otherwise rely on Google Search, whether publishers, ad networks, marketers, or everyday searchers, have seen Google’s inconsistency and unreliability at work.

First, it’s been discussed by SEOs ad nauseam over the last several years. That’s not a surprise, since we’re the ones closest to a broad-based set of results on a day-to-day basis.

Marketers and SEOs are always unhappy with one result or another because we almost always think our company or client’s site should be doing better.

But we also know there’s always a baseline level of inconsistency in the system with Search as a channel, especially in how standards are enforced across brands.

As long as Google’s existed, big brands have almost always been given the benefit of the doubt over small brands algorithmically: mainly because most users still want to see them.

Because searchers, not publishers, are still the most important stakeholders in Google’s eyes, any penalties to big brands for underhanded behavior typically don’t last very long.

I made the analogy to a bank in an X post earlier this year with respect to how small brands have to do more to “prove themselves” in the eyes of Google.

In the wake of winners and losers being declared, there are always those who feel hard done by the algorithm. “Bad” results are shared on social media (and in company Slack channels), showing pages that other publishers are convinced shouldn’t outrank them for any number of reasons; especially if they’re big brands that don’t appear to be playing by the rules Google seems to have set for everybody else.

As marketers, especially, we’ve always had to recognize that we’re on an inherently uneven playing field. Search isn’t egalitarian, and like life in general, it isn’t always fair.

Yet it’s reached a point where it’s also inconsistent and unreliable in ways that simply aren’t logically explainable to anyone. Even by those of us whose job it is to translate Google’s often vague directives into actionable recommendations for clients.

What ought to be much more alarming for Google, though, is how this has managed to break containment beyond marketing circles, spreading into the broader tech press and even the mainstream press.

There’s been a barrage of coverage in mainstream news outlets the last two years about the degrading quality of Search and how things aren’t quite what they used to be.

Rumblings of discontent started gaining traction in the broader tech ecosystem throughout 2021 and 2022 even before the rise of LLMs and AI writing tools took shape. 

But they really started to reach a fever pitch in 2023. A German research report seemed to confirm people’s suspicions that search quality had, in fact, degraded.

And then came an almost weekly occurrence of AI-related bad press throughout the year, as we learned big publishers blitzscaled AI content (without disclosing it) and hid phony authors behind legacy brands while Google largely shrugged its shoulders.

Meanwhile, the big gainer of millions of new visits (and fresh off a $60 million deal to give Google exclusive access to help train its AI tools like Gemini), Reddit, has become overrun with spammy AI-generated responses that its team of volunteer moderators has largely been unable to curtail.

Google Images has also been saturated with AI slop, with Google seemingly having little ability to discern authentic images from stuff pulled from Midjourney, Grok, or a million other image generation tools.

Finally, Google’s failures have even come up in YouTube circles, with the video “Why Google Search is Falling Apart” from popular YouTuber Mrwhosetheboss receiving over 3.5 million views as of this piece’s publication.

While all this was going on, another war was taking shape: this one waged on independent publishers who had been creating solid content for years but suddenly found themselves fighting for their businesses’ lives through little to no fault of their own.


The (Unhelpful and Unsatisfying) Content Update

When Google’s Helpful Content Update (HCU) was first announced in 2022, it came with a new, declarative mission statement: Google no longer wanted to reward what it called “search engine-first” content.

Just interpreting that statement is more difficult than it might appear on the surface. What is “search engine-first” content, after all? You could argue that any publicly indexed website content is technically “search engine-first,” given that search engines have been the gateway to the open web since its inception.

The distinction Google was aiming for, if you read the rest of that post, was to differentiate between content created exclusively to rank highly for organic search queries (with little to no regard for a site’s existing audience or content footprint) and what they call “people-first” content: content one might create in a world where search engines theoretically didn’t exist.

By those definitions, “content created for search engines” is a real problem. Consider the epitome of that content in this day and age: the largely mediocre “Comtent,” or commerce-driven content, produced by Forbes and other big publishers who, for years now, have been trying to rank for anything and everything under the sun; namely, consumer products like the best walking shoes for women, the best baby monitors, the best mattresses, and the best CBD gummies, all terms for which Forbes ranked #1 as of a few weeks ago.

Moreover, all of it, to any reasonable person, falls well outside the editorial norm of what Forbes, a business publication, has covered in print for decades.

This mass-produced, commerce-driven content published by big brands, upon inspection by actual experts in the field, consistently falls short of “helpful.” Instead, these publishers rely more on accentuating and signaling E-E-A-T through largely symbolic means (trust bars! “how we review products”!) than demonstrating it through actual expertise.

I wrote about these dynamics in a piece last year, where I referenced my own experience wrestling with these questions in previous work contexts and how emerging brands and sites could think about expanding their content footprint in a way that would avoid getting smashed by future Core Updates that incorporated HCU signals:

If you already have a big bucket of existing content and things you're known for, the truth is you have more leeway to go into previously uncharted territory than a niche site that may be known for a more specific thing.
Google's made public statements indicating that they're aware of the backlash (at least from publishers), so there's a chance this may change at some point. But it's a hard circle to square, particularly with brands that have cachet with their readers and produce otherwise good content.

Luckily, Google seems to (finally) be inching towards making that change I mentioned.

The first signs came over the summer, when these “Comtent-first” sub-sections hosted on big media company domains started dropping in huge numbers in July: some up to 97%, in the case of Time’s “Stamped” product, followed by similar drops in visibility for Forbes Advisor, CNN Underscored, Wall Street Journal Buy Side, and Fortune Recommends.

A company called Marketplace, the third-party outfit revealed to be behind many of these now-offending operations (including Forbes, CNN’s Underscored product, and USA Today’s Blueprint), saw the writing on the wall and started laying people off, per public posts at the time, as have other companies with similar third-party relationships with big brands.

And now Google’s stepped up enforcement with a slew of manual actions aimed at some of these publishers, along with an expansion of the existing language around what constitutes abuse. The TL;DR: it no longer matters who wrote the content, whether an in-house staffer or someone else. Abuse is abuse.

From the Google Search Quality Team’s Chris Nelson:

We're making it clear that using third party content on a site in an attempt to exploit the site's ranking signals is a violation of this policy – regardless of whether there is first-party involvement or oversight of the content.
We've heard very clearly from users that site reputation abuse - commonly referred to as "parasite SEO" - leads to a bad search experience for people, and today’s policy update helps to crack down on this behavior. Site owners that are found to be violating this policy will be notified in their Search Console account.

This is still evolving in real-time as I write this, so it remains to be seen whether:

  • This policy will have the desired effect on SERP quality overall
  • It will have second-order consequences, in particular false positives: sites that aren’t explicitly violating the policy getting swept up anyway because they share characteristics, like affiliate links in their content, with those that do
  • Google will gather enough data from all of these manual actions to eventually “solve” this algorithmically

But even if these sub-folders of big media sites are dropping, there’s no guarantee that smaller publishers, with actual expertise in these domains, will pick up the slack to replace them. Certainly not if you were a victim of last September’s HCU.


The Frankenstein Monster

For years, Google has repeatedly said that it doesn’t reverse or otherwise “roll back” updates. But we’ve gotten hints recently that it might wish it had rolled back last September’s HCU. Or, at minimum, renamed it.

A few weeks ago, more than 20 creators were invited to the Googleplex in Mountain View for a Web Creator Conversation Event that one attendee described as a funeral. This included creators who, in some cases, were featured in Google’s own marketing materials as examples of helpful content just a short time before the HCU hit.

Google’s Search Liaison Danny Sullivan spearheaded the event. Jake Cain, a longtime blogger and frequent attendee of Google publisher events, noticed a distinct change in Sullivan’s tone, having attended a previous Google publisher event in Texas around the time the HCU first dropped:

A year ago, right after HCU had first rolled out, I was at a Google publisher meetup in Texas with Danny, and one thing I’ll say is that his tone from then till now has completely changed. At the time, when HCU was a month old, Danny was, these are my words, a lot more just dismissive, like, “hey, we’re just surfacing helpful content, you know, you can probably recover in a short amount of time, this isn’t that big of a deal.” Whereas this year, Danny, I mean, he started out by apologizing. First thing he said was like, “first of all, I’m sorry. Like, I’m sorry you’re here under these circumstances….”

When it comes to public statements from Google, that’s a noteworthy departure from the norm. And a striking admission of guilt.

Yes, they’ve apologized in the past for obvious edge cases in Search that were publicly embarrassing to them, like the AI overviews earlier in the year telling people to eat rocks or to put glue on pizza. Or in 2017 when a featured snippet erroneously claimed that former President Obama was planning a communist takeover of the United States.

But in the wake of major algorithm updates, they never seem to admit fault, continuing instead to press forward the same mantra: “Create high-quality, helpful content!” despite the obvious eye-rolling that phrase now gets from just about everyone.

As I said on X:

What’s even more alarming here, though, is that Google doesn’t seem to know what happened or how to fix it.

From the founder of TechRaptor who attended the event:

I think they’ve lost control and are struggling to get the ranking systems back under control. Danny spoke at length about how they’ve been using our feedback and query examples to debug the system and REALLY understand what’s not working and why.
There’s no easy way to roll back an algorithm, or program of any kind, with this level of complexity and systems. As a former programmer and IT guy, I may be more sympathetic in this way - but it’s up to Google to fix this and fix it right.

The idea that this is potentially a Frankenstein monster that the Search team has lost control over is a fundamentally terrifying prospect: particularly when so many people are already on edge about the prospects of out-of-control AI more generally.

How likely is something like this to happen again? Who will it affect this time? Will Google continue to shrug its shoulders while more people lose their livelihoods?

And yet, even as they spent the first part of the session apologizing for one thing, they remained in denial about another key point animating this debate since last summer: whether HCU signals hit publishers at the page level or the site level.

Both the engineers who sat down and brainstormed with afflicted site creators and Pandu Nayak, Google Fellow and VP of Search, continued to deny the existence of a sitewide classifier, even though these sites all experienced very obvious sitewide declines.

From Jared Bauman:

One of the most eye-opening revelations came when both Jake and Morgan asked the engineers about Google’s use of a sitewide “classifier.” This mysterious classifier seemed to be an overarching filter that suppressed their entire sites in search rankings, regardless of the quality of individual pages. And, they had strong evidence to support this.
The engineers, however, were evasive. For starters, they all denied that a sitewide classifier existed.

Referring to Nayak:

He said, there is no classifier. There are not sitewide penalties, only page-level queries. And we were like, especially Jake, I know Jake spoke up at this time: yes, there is a sitewide classifier. I can tell you, you can type in my site name. It doesn’t show up. Um, other people are showing up and I’m not. Like, I fell off on one day. I fell off for all queries.

As I pointed out on X the other day, Google’s own documentation stated that the HCU was to be a sitewide signal when it first launched back in 2022.

Moreover, Glenn Gabe laid out the argument with quotes from various Googlers over more than a decade, documenting not only the sitewide effects of punitive algorithm updates of the past like Panda and Penguin but also citing nearly 100 separate examples of Google stating explicitly that these adjustments were intended to be site-wide rather than isolated to specific pages.
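To see why creators found the “page-level only” answer so hard to accept, it helps to compare the failure patterns the two architectures would produce. The sketch below is purely hypothetical and says nothing about how Google’s systems actually work; it just illustrates that page-level signals let strong and weak pages on the same domain diverge, while a sitewide classifier drags everything down at once, for all queries, which is exactly the pattern these sites reported:

```typescript
// Hypothetical illustration of page-level scoring vs. a sitewide
// classifier. None of this reflects Google's actual implementation.

interface Page {
  url: string;
  pageQuality: number; // 0..1, an imaginary per-page quality score
}

// Page-level only: each URL rises or falls on its own merits.
const pageLevelScore = (page: Page): number => page.pageQuality;

// Sitewide classifier: a single site-level multiplier suppresses every
// page on the domain, regardless of individual page quality.
const sitewideScore = (page: Page, siteMultiplier: number): number =>
  page.pageQuality * siteMultiplier;

const site: Page[] = [
  { url: "/in-depth-guide", pageQuality: 0.9 },
  { url: "/thin-roundup", pageQuality: 0.3 },
];

// A domain flagged by the classifier (multiplier 0.1, say) sees even
// its best page fall from 0.9 to 0.09 overnight: "I fell off on one
// day. I fell off for all queries."
for (const page of site) {
  console.log(page.url, pageLevelScore(page), sitewideScore(page, 0.1));
}
```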

As for what else transpired, plenty of illuminating first-hand accounts were written by some of the participants, ranging from entirely cynical (though not incorrect) to cautiously optimistic about what it all meant and what Google will ultimately do with the feedback.

The common thread, however, seemed to be that site owners heavily affected by the HCU shouldn’t expect relief from Google anytime soon. The update can’t (or won’t, depending on your interpretation) be rolled back. And any feedback gleaned in real time from the meeting won’t be reflected in the *next* Core Update, which, as of today, is still in progress.


How Do We React to All of This?

If it wasn’t already obvious, here’s my point: this kind of inconsistency, unreliability, and (arguably) unethical behavior in the system is toxic for everyone. And it makes it harder than ever before to take Google’s public pronouncements with any degree of trust.

After all, why would any company willingly throw significant resources at a channel with this level of inconsistency in outcomes? Or one where a single algorithm update can functionally destroy the business model of anyone, large or small, overnight with the push of a button?

It throws a wrench in the ability of companies to plan or forecast growth in SEO as a channel with even the slightest degree of accuracy. Creators and big publishers alike don’t know if their traffic will be here today and gone tomorrow on a whim because of an overreaching algorithm update where they get erroneously lumped in with bad actors in a functionally irreversible way. 

If doing the things that have historically aligned with what Google says its algorithms are meant to reward no longer works, then what’s the point of investing more money in the channel?

Whether or not you agree with what Forbes and some of these big publishers did to scale their content operations, and whether or not niche sites “deserve” to survive, there’s a common thread between small and large publishers in this equation: we all deserve better than what we’re currently getting from Google.

Unfortunately, Google’s current level of power in the market suggests we're probably not going to get better than what we're getting anytime soon.

People are losing their jobs and their businesses over this. And the best Google can do is shrug its shoulders and offer sympathy to the affected.

The current landscape also makes people lose faith in good-faith SEO practitioners, who so often serve as interpreters for brands, businesses, and clients who aren’t following all of this daily and just want Google’s often overly broad and (dare I say) unhelpful missives translated into some kind of concrete action.

I’ve seen this play out over the last few months, especially with niche publishers going after SEOs they see as siding with Google and questioning the ethics of continuing to offer HCU-focused audits and advice to clients who seemingly have no hope of recovery.

The ethics of taking on clients whose traffic was functionally wiped out by the HCU are complicated, mainly because last September’s HCU was sandwiched between two Core Updates, making it an inexact science to determine which update was the undisputed trigger for a given decline. (Which is precisely why SEOs have complained about these concurrent updates for years.)
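A quick sketch makes the attribution problem concrete. The update names and rollout windows below are illustrative placeholders, not official dates; the point is that rollouts take weeks, and when a traffic drop lands inside or between overlapping windows, more than one update remains a plausible culprit:

```typescript
// Hypothetical sketch of why attributing a traffic drop to a specific
// update is an inexact science. All names and dates are illustrative.

interface UpdateWindow {
  name: string;
  start: Date;
  end: Date; // rollouts take days or weeks, so each window is a range
}

const updates: UpdateWindow[] = [
  { name: "Core Update A", start: new Date("2023-08-22"), end: new Date("2023-09-07") },
  { name: "Helpful Content Update", start: new Date("2023-09-14"), end: new Date("2023-09-28") },
  { name: "Core Update B", start: new Date("2023-10-05"), end: new Date("2023-10-31") },
];

// Returns every update whose rollout window (padded by a few days,
// since ranking shifts rarely land instantly) contains the drop date.
function plausibleCulprits(dropDate: Date, padDays = 5): string[] {
  const pad = padDays * 24 * 60 * 60 * 1000;
  return updates
    .filter(
      (u) =>
        dropDate.getTime() >= u.start.getTime() - pad &&
        dropDate.getTime() <= u.end.getTime() + pad
    )
    .map((u) => u.name);
}

// A drop observed on September 10 sits within reach of two rollouts,
// so neither can be ruled out from traffic data alone.
console.log(plausibleCulprits(new Date("2023-09-10")));
// -> ["Core Update A", "Helpful Content Update"]
```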

For my own business, my thinking has landed me on the following: if I’ve got a clear sense that a potential client was an HCU victim, I don’t push audit work on them, especially if they don’t have the resources to invest in recovery like a larger enterprise would. Why? Because at this rate, I’m not confident that I can help them. And I don’t want to put myself in a position where a client spends a lot of time, money, and resources fixing any number of things when the success rate (such as it is) is minimal.

Bottom line: we shouldn’t accept this. It’s a tell-tale sign that a business has far, far too much power in the marketplace.

A reckoning looks to be on the horizon in the form of antitrust action. Google’s lost several court cases in the EU but largely managed to evade any kind of (real or perceived) legal accountability in the U.S. until this past August, when they were officially declared by U.S. District Judge Amit Mehta to be “a monopolist in online search and advertising markets.”

Remedies won’t be decided on until a subsequent trial takes place. And given the incoming change in Presidential administrations, it’s still unclear what measures, if any, the U.S. government will recommend.

But regardless of where the USG ultimately comes down, I think the road to transparency runs through some kind of independent accounting of SERP quality, separate and apart from Google or any other provider.

I’ll explore what that might look like in the next piece in this series.