Google has announced that it will no longer show the Supplemental Result label in the SERPs. For details see Google Supplemental Label Out, PageRank Next?.
Most of the content of this post is still relevant.
Earlier in the year, many owners of very large websites were concerned that most of their web pages were ending up in the Google Supplemental Results Index rather than in the main Google Index. Clearly if a keyword query can be answered from the main Index, then your entries in the Supplemental Results Index will never see the light of day.
The Google robots cover an awful lot of web pages and need to put them somewhere. Shimon Sandler explained why web pages may end up in Google Supplemental Results and what you may need to do to avoid this. Matt Cutts of Google discussed why their Big Daddy infrastructure modification had caused the uproar and suggested that things would improve as they fine-tuned Big Daddy.
In fact the answer to all this is very simple. You’ll find it in a comment from Matt Cutts of Google on his blog:
Higher quality posts are the trick.
In general, the best way I know of to move sites from more supplemental to normal is to get high-quality links (don’t bother to get low-quality links just for links’ sake).
So if you’re a culprit, stop creating all those spammy web pages in the hope that quantity of links will in some way compensate for lack of quality in those links. That’s not the way to get out of the Google Supplemental Results index.
Related: Supplemental Result in Google – Hell or Help
Tip of the hat to SEO Scoop, a blog worth watching.
Tags: Google, supplemental result
Can you explain this further?
Clearly if a keyword query can be answered from the main Index, then your entries in the Supplemental Results will never see the light of day.
Thanks for posing that question, RC Guy. That’s a guess on my part. The Google Help Centre states the following, “Please also be assured that the index in which a site is included doesn’t affect its PageRank.” That might imply that supplemental results and normal results are all considered together in the keyword search algorithm.
I find that difficult to believe. The Google team also seems to have put considerable effort in ensuring web pages are not left in supplemental results when they should be in the normal index. I currently find no supplemental results in the first 100 results in several competitive keyword searches I have done. All in all, I’ll stick with what I said.
Hi,
I was reading an article about supplemental results. As far as I know, Google uses a different index for supplemental results,
and bad pages get shifted to the supplemental index.
So here is where my confusion starts:
when you search for any keyword, it should show results from the main index, not from the supplemental index, since those are the bad pages as far as Google is concerned.
But you can see mixed results in searches.
So what criteria is Google actually applying here?
And how do you get out of the supplemental state…?
Jal, I think it’s wrong to view supplemental results as bad and the normal index as good. As Google explains it, it’s a question of priorities. Supplemental results have a secondary priority. So they’re spidered less frequently and may well have less information held about them in the database.
Google says that the PageRank is unaffected. In other words, information on the backlinks (or inlinks as I prefer to say) is maintained on supplemental results just as it is on regular results. That’s small comfort since PageRank is only one of many factors that go into the keyword search algorithm(s). Currently there seem to be few supplemental results showing in typical keyword searches. That suggests to me it’s better to do what it takes to get your web pages into the regular index and avoid the supplemental index.
In my experience, the major problem with having pages in the supplemental index is that Google doesn’t update its cache of those pages very often (if at all). I have a site where 90% of its pages are in the supplemental index and Google still crawls those pages regularly; however, even if “supplemental” pages have been updated, Google never updates its cache for those pages.
I’ve not really noticed a problem with these “supplemental” pages ranking for the specific type of queries I would expect these pages to rank well for – many of these “supplemental” pages have page 1 positions (normally out of several tens of thousands of results).
I think the reason most people complain about a web page being “supplemental” is because it possibly gives the impression that the web page is in some way “second rate”.
Personally, I don’t know why Google doesn’t simply do away with the Supplemental Result label; I can’t see what benefit anyone (including Google) gets from the label being there.
That’s certainly a point of view, Jeff. It may be difficult to be sure exactly what extra you learn from that additional information. However, I see it as a signal that the page is not being treated as a priority. You should try to pin down why that signal has occurred and whether you should be doing something to correct the situation.
Here’s another “point of view”, Barry:
“The index in which a site is included is completely automated; there is no way for you to select or change the index in which your site appears.”
http://www.google.com/intl/en/webmasters/faq.html#label
Do you know something Google doesn’t? Hey, if there’s a way to change the index your site appears in, even though Google says there isn’t, let’s hear it 😉
It is obvious to me that sites with many high-quality one-way backlinks, and the corresponding higher PR that goes with this, will have more pages in the normal index than sites with fewer high-quality one-way backlinks.
I wish people would stop perpetuating the myth that there is anything more mysterious to it than this!
I think you’re just pointing out how you get more web pages in the main index, Jeff, and I would agree with that. I think the content on the web page plays a part as well. If you work on inlinks and content then you can be pretty sure you will have web pages that Google will assign to the regular index. There’s nothing mysterious about it. Indeed that’s what Matt Cutts said in the quote I put in the blog entry that started this.
Just about all my pages have been supplementalized all of a sudden. I’d never even seen it before. I wish I knew more about the mechanics of this. Would it be fair to say that even internal pages need links to them as well, to avoid this, assuming they are not spammy but legitimate pages?
I do find this surprising at this time, minute. I’ve not seen comments elsewhere about any mass movement of web pages so it may be something relating to your website. As you say, getting links to internal web pages is bound to be helpful.
Hi Barry,
You said: “That suggests to me it’s better to do what it takes to get your web pages into the regular index and avoid the supplemental index. ”
What should be done in order to get into the regular index from the supplemental index?
Thanks.
David
Hi David, that’s what Matt Cutts tried to tell you. He said “In general, the best way I know of to move sites from more supplemental to normal is to get high-quality links (don’t bother to get low-quality links just for links’ sake).” That requires that you have strong content that other people want to link to.
I have a website in the automotive category. Though this website has a PageRank of 6, with many internal pages of PageRank 4 to 6, about 800 pages (almost 80%) are in the supplemental index.
With a casual inspection I could see that all these pages in the supplemental index were the PHP-based dynamic URLs. Google does not seem to index them and, though they are linked to high-PageRank pages, they cannot get out of the supplemental index. So the only way to reduce such instances is to rewrite the applications which generate the dynamic URLs and make them search-engine friendly.
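For instance, a typical fix (just a sketch assuming Apache with mod_rewrite enabled; the directory, script and parameter names below are made up for illustration, not taken from the site above) is to map a clean, crawlable URL onto the existing dynamic script and then use the clean form in all internal links:

RewriteEngine On
# Map /inventory/2007/used-sedan to the real dynamic script behind it,
# i.e. /inventory.php?yr=2007&model=used-sedan (names are hypothetical)
RewriteRule ^inventory/([0-9]{4})/([a-z-]+)/?$ inventory.php?yr=$1&model=$2 [L,QSA]

That way the spider only ever sees the static-looking URL and never the query string.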
You said: “That suggests to me it’s better to do what it takes to get your web pages into the regular index and avoid the supplemental index. ”
If your page is not being shown in the main index but only in the supplemental index, then how do others find it in the first place to link to it? Chicken and egg.
It seems to me, NetNeo, that you’ve got to generate more links to such a web page, and those other links, if sufficiently authoritative, will eventually ‘drag’ the web page out of the supplemental index. I’m pretty sure that doing anything on the web page itself will be of zero value.
At this point Google is not very consistent on what it decides to put in the supplemental results.
I have three car dealer sites. All three use the same software (ASP pages) to display their inventory. One has no supplemental pages. One (the smallest, with the fewest pages and links) has only one supplemental page; that page has had errors on it in the past, so that makes sense. The last one is almost all supplemental.
It doesn’t matter if the page is unique, has quality content, or has any dynamic variables. It seems a given that Google doesn’t like one of the sites for some reason. Perhaps the server was down the last time Google tried indexing it. But Google is not consistent in its choices.
OK, on supplemental results: can bad grammar on a site cause it to be placed into supplemental results?
Second question: how about content that makes no sense?
hi Barry
My name is krish from Bali,
Thanks for making this blog. After reading all these posts on supplemental results, would eliminating those pages from the Google cache be a fast solution? Removing them from Google and then redirecting the URL?
Do you know of anything similar to this being done in Yahoo and MSN?
thanks
I don’t know how you would remove any web pages from Google cache directly. I’m not aware that Yahoo or MSN (Live) do anything similar.
Hello,
I have a site that contains sections displaying unique information.
However, as a novice to SEO, I did not make my scripts dynamically generate a unique title, meta keywords and meta description.
I think this is what resulted in ALL the pages linked on my site being supplemental, and it leaves my index.pl page as the only normally indexed page in Google.
The other pages that are supplemental are each very unique in their own way.
For example, I offer my members individual profiles, which are all unique to the member concerned. I also have a page that lists consensus handicapping information, displaying data for all the different sports games. However, Google considers all these pages supplemental. Could it be because of the URL? For example: /Consensus.pl?day=3&yr=2007&mo=1&sport=NBA…
However, if you go to just Consensus.pl, then by default it will display the most recent previous game date, so Google doesn’t even need to see the parameters attached after the file name.
I really want Google to appreciate these unique content pages because I feel they are valuable not only for my site’s SERPs but useful for my audience as well.
I spent a whole day generating a unique title, description and keywords for each and every page. Now my question is: what’s the fastest way to recover from supplemental hell?
Any suggestion will greatly be appreciated.
Thanks
Bobby, if you feel that the Google spiders would find unique static pages if they were spidering your website, then perhaps it’s time to contact Google directly and let them know that you have made changes. Of course if all the web pages look substantially the same apart from the title and description, then they may still be regarded as duplicates.
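To make that point concrete, here is a minimal sketch of the kind of per-page generation being discussed (it’s Python rather than your Perl, and the template and field names are made up, so treat it purely as an illustration). The thing to notice is that the body varies from page to page, not just the title and description.

from string import Template

# Hypothetical page template; $body has to differ between pages just as
# much as $title and $description do.
PAGE = Template("""<html><head>
<title>$title</title>
<meta name="description" content="$description">
</head><body>
<h1>$title</h1>
$body
</body></html>""")

def render_profile(member):
    # Build the title, description and body from data that differs per
    # member, so no two profile pages end up as near-duplicates.
    return PAGE.substitute(
        title=member["name"] + " - member profile",
        description="Handicapping record and recent picks for " + member["name"] + ".",
        body=member["bio"],
    )

print(render_profile({"name": "Example User", "bio": "<p>Sample bio text.</p>"}))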
Bobby, generating a unique title, description and keywords for each and every page will not help. All my pages but one are supplemental and each has a unique title, description and keywords. I also have a good sitemap which I added, but that hasn’t helped either – fortunately MSN (Live Search) and Yahoo like the site.
Ian
You’re right, Ian. I assume you’re referring to the meta tags for description and keywords, which are disregarded by Google in its search algorithm and presumably in the decision about the supplemental index. So it comes down to the title. Even if the title is unique, if the total page content is remarkably similar to other web pages, that would likely be enough to leave these pages in the supplemental index.
I have looked into supplemental results and have a few interesting findings. If a page is orphaned from the main directory structure it goes into supplemental results; this is an easy problem to fix. But when a page is not orphaned, has good content and abides by all of Google’s rules, yet the page has no PR, that is worrying, and as yet I have not found out the reason for it…
All the best from Alan
Matt Cutts has recently said it’s all a question of PageRank as to whether a web page goes into the supplemental results index or not. Presumably if you can get some higher-PageRank inlinks to the web page, then that will boost its own PageRank and it will in due course get out of the supplemental index. Of course that will only happen after a spider finds one of those other inlinks and then spiders your web page again. Thanks for the comment, Alan.
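For anyone curious about the mechanics of that, here is a toy illustration based on the originally published PageRank formula, PR(A) = (1 - d) + d * sum of PR(T)/C(T) over the pages T linking to A, where C(T) is the number of outlinks on T. Google’s live calculation is not public, so treat this as intuition only; the page names in the link graph are hypothetical.

damping = 0.85  # the d in the published formula

def pagerank_step(pr, links):
    # One iteration: each page gets (1 - d) plus d times the sum of
    # PR(T) / C(T) over the pages T that link to it.
    new_pr = {}
    for page in pr:
        inbound = sum(pr[src] / len(outs)
                      for src, outs in links.items() if page in outs)
        new_pr[page] = (1 - damping) + damping * inbound
    return new_pr

# Hypothetical link graph: two sites feed an external hub, and the hub
# links to "my_page", steadily lifting its score over the iterations.
links = {
    "big_site_a": ["external_hub"],
    "big_site_b": ["external_hub"],
    "external_hub": ["my_page"],
    "my_page": ["other_page"],
    "other_page": ["my_page"],
}
pr = {page: 1.0 for page in links}
for _ in range(20):
    pr = pagerank_step(pr, links)
print(pr)

The numbers themselves mean nothing; the point is simply that a link from a page that itself has inlinks passes on more weight than a link from a page that has none.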