If you're here, you're probably already familiar with Moz's list of ranking factors and how they correlate with search results. You've probably also heard thousands of times that links are the most important ranking factor. And if you're like most SEOs, you believe that links are always the most important ranking factor, and that if you want to rank, you should always start with the highest-correlated factors on Moz's list.
I'm here to tell you that this is not how Google works.
The Truth About Rankings
Let's put aside the fact that rankings differ based on your location, your user data, your preferences, your search history, your settings, and which Google server you happen to land on. Let's pretend we still live in a world where everybody sees the same set of results for the query they typed into the search box. Even then, ranking doesn't work the way you probably think it does.
DotCult's Ryan Jones recently wrote an excellent post on how Google's algorithm (algorithms, really) actually works. While it's impossible to know exactly what's going on, anybody with experience in computer science can tell you that it works something like this:
- After receiving your query, Google searches its index to identify which pages belong in the search results.
- It then orders those results according to a large set of ranking factors, including relevance, links, and many others.
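To make that two-stage picture concrete, here's a minimal sketch in Python. The toy index, the field names, and the single "relevance" score are my own assumptions for illustration, not a claim about how Google actually stores or scores anything.

```python
# A minimal sketch of the two-stage process described above, assuming a toy
# in-memory index. The field names and the single "relevance" score are
# invented for illustration; this is not how Google stores or scores pages.

def retrieve_candidates(index, query_terms):
    """Return every indexed page that matches at least one query term."""
    return [page for page in index if page["terms"] & set(query_terms)]

def rank(candidates):
    """Order candidates by some combination of ranking factors (placeholder)."""
    return sorted(candidates, key=lambda p: p["relevance"], reverse=True)

index = [
    {"url": "a.com", "terms": {"seo", "links"}, "relevance": 0.72},
    {"url": "b.com", "terms": {"seo"}, "relevance": 0.91},
    {"url": "c.com", "terms": {"recipes"}, "relevance": 0.40},
]

results = rank(retrieve_candidates(index, ["seo"]))
print([page["url"] for page in results])  # ['b.com', 'a.com']
```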
That second step, the ordering, is where the confusion comes in. We tend to think of the algorithm as though each individual ranking factor contributes some predefined fraction of a score to your page, and pages are then ordered from highest score to lowest. It's extremely unlikely that it works this way for every ranking factor, and very likely that it doesn't work this way for most, if not all, of them.
Instead, it's more likely we're dealing with sorting first by one factor, then by another, then another, and so on.
As an example of how this could work, suppose we knew that the ranking factors were prioritized like this (we don't):
- Number of backlinking domains
- Number of Facebook shares
- Number of matching keywords in title tag
In this case, the number of Facebook shares wouldn't matter unless two search results had a similar number of backlinking domains. The number of matching keywords in the title tag wouldn't matter unless two pages had a comparable number of backlinking domains and Facebook shares.
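Here's a rough sketch of what such a tiered sort could look like, using the hypothetical priority list above. The sites and numbers are invented, and for simplicity it requires an exact tie on the earlier factors rather than merely "similar" values; the point is only that later factors come into play when earlier ones don't separate two pages.

```python
# A hypothetical tiered (lexicographic) sort matching the priority list above.
# Python compares tuples element by element, so Facebook shares only matter
# when backlinking domains are tied, and title matches only matter when both
# earlier factors are tied. All of the data here is made up.

pages = [
    {"url": "a.com", "domains": 120, "shares": 300,  "title_matches": 1},
    {"url": "b.com", "domains": 120, "shares": 950,  "title_matches": 2},
    {"url": "c.com", "domains": 80,  "shares": 5000, "title_matches": 3},
]

ranked = sorted(
    pages,
    key=lambda p: (p["domains"], p["shares"], p["title_matches"]),
    reverse=True,
)

print([p["url"] for p in ranked])
# ['b.com', 'a.com', 'c.com'] -- c.com has by far the most shares,
# but shares never get a say because it trails on backlinking domains.
```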
What This Means For Us
What's the point of this hypothetical scenario? To show that the most important ranking factors change depending on the query and the available results.
Ever had far more links, and higher-quality links, than your competitors, and still ranked below them? We have. Just about every SEO has found themselves in that situation. It's not a fluke or a mistake; it's a deliberate result of the way the algorithm is designed.
If you have more links than your competitors and they're still outranking you, you're not going to solve the problem with links. For this query, Google considers a different ranking factor to be more important than links.
For example, Google may be prioritizing user clicks over links in this circumstance. To rank for this query, you would need a higher click-through rate than your competitor.
That wouldn't mean links didn't matter at all for this query. Links would still put you ahead of everybody with a similar click-through rate but fewer links. But no quantity or quality of links would put you ahead of a competitor with a higher click-through rate.
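To sketch how "similar, but fewer" might play out, the toy example below treats link counts that fall in the same broad band as equivalent and breaks ties with click-through rate. The band width, the CTR figures, and the idea that Google uses bands at all are assumptions for illustration only.

```python
# Collapse raw link counts into coarse bands, then rank by CTR within a band.
# Banding links this way is a guess, used purely to illustrate the point.

def link_band(linking_domains, band_width=100):
    """Treat counts in the same band (0-99, 100-199, ...) as equivalent."""
    return linking_domains // band_width

competitors = [
    {"url": "you.com",   "linking_domains": 480, "ctr": 0.04},
    {"url": "rival.com", "linking_domains": 430, "ctr": 0.09},
    {"url": "small.com", "linking_domains": 60,  "ctr": 0.20},
]

ranked = sorted(
    competitors,
    key=lambda p: (link_band(p["linking_domains"]), p["ctr"]),
    reverse=True,
)

print([p["url"] for p in ranked])
# ['rival.com', 'you.com', 'small.com'] -- you.com's extra 50 links don't help
# because both sites sit in the same band, and rival.com wins on CTR.
```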
All of this is hypothetical, but it's meant to make an important point: when your metrics say you should be outranking a competitor and you're not, you need to tweak something else and set that ranking factor aside for a while.
While this is hypothetical, my experience tells me it describes a very real situation. I believe that user behavior data is often prioritized over linking factors. I have watched results climb from page 3 to page 1 with no additional link building or changes of any kind, and I find it difficult to imagine how that is anything other than automated user testing on Google's part.
Okay, I May Have Simplified Things A Bit
So, is this how the algorithm always works? Is there a fixed list of ranking factors with pre-packaged priorities, and as long as you've met the top priority, you're set?
I doubt it. More likely, we're also looking at vector comparison and nearest neighbor search.
I know, things are getting a bit techy here. But hang in there.
To avoid going into too much detail, let's put it this way: for each query, Google constructs a list of attributes it expects the perfect search result to have, then orders the pages according to how closely they resemble that list of attributes.
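As a loose illustration of that idea, the sketch below describes the "ideal" result for a query as a vector of attributes and orders pages by cosine similarity to it. The attributes, the values, and the choice of cosine similarity are all assumptions; Google's real feature set and distance measure are unknown.

```python
# Rank pages by how closely their attribute vectors resemble a hypothetical
# "ideal result" vector for the query. Attributes and values are invented.

import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical attributes: [content depth, freshness, image use, link strength]
ideal = [0.8, 0.6, 0.3, 0.7]

pages = {
    "a.com": [0.9, 0.2, 0.1, 0.9],
    "b.com": [0.7, 0.7, 0.4, 0.6],
    "c.com": [0.1, 0.9, 0.9, 0.1],
}

ranked = sorted(pages, key=lambda url: cosine_similarity(pages[url], ideal), reverse=True)
print(ranked)  # ['b.com', 'a.com', 'c.com'] -- closest to the "ideal" first
```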
Since SEOs are so obsessed with links, they tend to think in terms of needing more of something, instead of thinking that they need to find a sweet spot. But ranking factors are more complicated than that, especially when we start talking about things like Panda, which is most likely a machine-learning algorithm trained on human-reviewed quality data.
When we start thinking about Google this way, we need to start embracing the concept of trade-offs. Longer content can be more helpful, but it demands more investment from the user, who may grow tired of it. Images are appealing, but too many can slow a page down, and sometimes they are less helpful or relevant than text. And so on.
We can't use this knowledge to reverse-engineer the Google algorithm, but we can use it to change the way we think about rankings.
There's no way to know exactly what Google's looking for in our pages, but we know the end goal: satisfied users. A satisfied user feels like they got what they wanted from a search result, that it served their purpose and exceeded their expectations. You need to think long and hard about what a user is looking for when they perform a search, and be sure to give it to them.
Thresholds
There's one more thing I'd like to touch on about the algorithm before we wrap this up, and that's the presence of thresholds, or tolerances.
Sometimes aspects of the Google algorithm are binary. We know this for a fact. An indexed page is either in the list of search results, or it's not. The query either has video/local/image inserts, or it doesn't. And we have reason to believe that your site is either penalized or it's not. (Most penalties are really the indirect result of links that have lost value after taking a direct hit, which is why things don't seem quite so binary. More on misconceptions and recovery here.)
One of the clearest examples of this comes from Panda. Gradually decreasing the quality of your content does not produce a gradual downward slope in traffic. Instead, there is a quality threshold. After a site falls below it, Panda targets entire sections of the site based on factors like block-element build-up, and even higher-quality pages in those sections take a hit.
It's safe to assume that the known thresholds aren't the only ones we deal with. For example, there may be situations where all search results with a certain minimum number of high-quality links are treated equally and compared on other factors instead. Or there may be situations where a minimum number of well-known social influencers need to be involved before Google starts considering social ranking factors at all.
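A toy version of that first hypothetical might look like the sketch below: once a page clears a minimum number of quality links, additional links stop mattering and a secondary factor decides the order. The cutoff, the "user satisfaction" factor, and the data are purely hypothetical.

```python
# Pages that clear a hypothetical link threshold are treated as equals and
# compared on a secondary factor instead; extra links beyond the threshold
# buy nothing. Both the threshold and the data are invented.

MIN_QUALITY_LINKS = 50

def sort_key(page):
    clears_threshold = page["quality_links"] >= MIN_QUALITY_LINKS
    # True sorts above False, and everything above the threshold is equal,
    # so the comparison falls through to the secondary factor.
    return (clears_threshold, page["user_satisfaction"])

pages = [
    {"url": "a.com", "quality_links": 900, "user_satisfaction": 0.55},
    {"url": "b.com", "quality_links": 60,  "user_satisfaction": 0.80},
    {"url": "c.com", "quality_links": 10,  "user_satisfaction": 0.95},
]

ranked = sorted(pages, key=sort_key, reverse=True)
print([p["url"] for p in ranked])
# ['b.com', 'a.com', 'c.com'] -- a.com's 900 links buy it nothing extra once
# b.com has also cleared the threshold.
```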
Conclusion
Discussions about the algorithm like this one aren't very common in the SEO community, in large part because it's difficult to make broad, actionable statements in response to them. However, I feel it's extremely important to keep this information in mind while performing tests and analyzing competitors. It isn't especially useful when you're trying to put together broad SEO advice, but it's pure gold when you have your own datasets and tests, and when you're working on individual keywords you're trying to rank for.
Comments

Pratik, a great post. I am a big believer that Google values user data very highly. They know how long we stay on a page before bouncing back. They know if we bounce back. They know if we then click on another result or perform a brand-new search. They know if we click more on the #5 result than the #2 result. And they might even know how long Chrome users stay on each page and where they go when they leave the site. They have so much user satisfaction data, they would have to be bonkers not to use it, given that user satisfaction is what keeps them afloat.
Excellent! Many SEOs seem to fall into the trap of binary thinking, or assume that any one thing (like quantity of links) is THE one true thing that should be their main focus. That also leads to assumptions about what not to do, like the way everyone is all worked up about press releases and guest posting at the moment. It becomes an “all or nothing” view when, in reality, successful SEO is a complex balance of many things.
One would think that by now more people would understand that the algorithm is more complex than a checklist of simple items. It's the relationships between all of the binary factors AND the less tangible things, like user experience, that hold what Google is looking for and what everyone wants to find.