Sitemap - % of URLs in Google Index?
-
What is the average % of links from a sitemap that are included in the Google index? Obviously we want to aim for 100% of the sitemap URLs being indexed; is this realistic?
-
If all the pages in your sitemap are worthy of the Google index, then you should expect around a 100% indexation rate. On the flip side, if you reference low-quality pages in your sitemap file, you will not get them indexed and may even be hurting the trust of your sitemap file. Case in point: Bing recently announced that if they see an error rate greater than 1% in a sitemap, they will simply ignore the sitemap file.
-
The clients build them, so I have no idea how they do it. It's a complex automated process for sure.
-
Wow. Do you have a third-party program to build your sitemap files or are you using something built in-house?
-
Ryan's point is important to note. 100% is achievable under the correct circumstances. I've got a client with 34 million pages on their main site (spread across a combined 909 sitemap XML files), and they have 34 million pages indexed.
-
The percent of pages indexed varies greatly with each site. If you desire 100% of your site indexed, then 100% of your site's pages should be reviewed to ensure their content is worthy of being indexed. The content should be unique, well written and properly presented. Your sitemap process also needs to be carefully reviewed. Many site owners simply set up an automated process without taking the time to ensure it is properly configured. Often, pages which are blocked by robots.txt are included in the sitemap, and those pages will not be indexed.
Many people say "I want 100% of my site indexed" just as many people say "I want to rank #1 in Google". Both results are achievable, but both require time and effort, and perhaps money.
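One cheap sanity check for the robots.txt problem above is to run every sitemap URL through the same robots.txt rules a crawler applies; anything disallowed will never be indexed no matter how often it is submitted. A minimal sketch using Python's standard library (the rules and URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

def blocked_sitemap_urls(robots_txt, sitemap_urls, user_agent="Googlebot"):
    """Return the sitemap URLs that robots.txt disallows for the given agent."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [url for url in sitemap_urls if not parser.can_fetch(user_agent, url)]

robots = """User-agent: *
Disallow: /private/
"""
urls = ["https://example.com/products/widget",
        "https://example.com/private/draft"]
print(blocked_sitemap_urls(robots, urls))  # ['https://example.com/private/draft']
```

Any URL this flags should be either removed from the sitemap or unblocked in robots.txt, depending on which file reflects your real intent.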
-
Hi. We have a sitemap with over 250,000 URLs and we are at 87%. This is a high for us; we have never been able to get 100%. We have been trying to clean up the sitemap a bit, but with so many URLs it is hard to go through it line by line. We are making more of an effort to fix the errors Google tells us about in Webmaster Tools, but these only account for a fraction of the URLs apparently not indexed.
We also do site searches on Google to see how many URLs total we have in Google as our sitemap only includes "the most important" pages. Doing a search for "site:www.sierratradingpost.com" comes up with over 400,000 URLs.
For us, I don't think 100% is realistic. We have never been able to achieve it. It will be interesting to see what other SEOmozers have to report!
Related Questions
-
I'm noticing that URLs that were once indexed by Google are suddenly getting dropped without any error messages in Webmasters Tools, has anyone seen issues like this before? Here's an example:
Intermediate & Advanced SEO | nystromandy
http://www.thefader.com/2017/01/11/the-carter-documentary-lil-wayne-black-lives-matter
Old URLs that have 301s to 404s not being de-indexed.
We have a scenario on a domain that recently moved to enforcing SSL. If a page is requested over non-SSL (http), the server automatically redirects to the SSL (https) URL using a good old-fashioned 301. This is great except for any page that no longer exists, in which case you get a 301 going to a 404. Here's what I mean:

Case 1 - Good page: http://domain.com/goodpage -> 301 -> https://domain.com/goodpage -> 200

Case 2 - Bad page that no longer exists: http://domain.com/badpage -> 301 -> https://domain.com/badpage -> 404

Google is correctly re-indexing all the "good" pages and displaying search results going directly to the https version. Google is stubbornly hanging on to all the "bad" pages and serving up the original URL (http://domain.com/badpage) unless we submit a removal request. But there are hundreds of these pages and this is starting to suck. Note: the load balancer does the SSL enforcement, not the CMS, so we can't detect a 404 and serve it up first; the CMS does the 404'ing. Any ideas on the best way to approach this problem? Or any idea why Google is holding on to all the old "bad" pages that no longer exist, given that we've clearly indicated with 301s that no one is home at the old address?
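One way to find these at scale is to walk each old URL's redirect chain and flag any chain whose final hop is a 404; those are the candidates for removal requests (or for returning a 410 instead). A minimal sketch of the chain-walking logic over a precomputed map of responses, so it is clear without any live HTTP calls (the URLs and statuses mirror the cases above and are illustrative):

```python
def chain_outcome(responses, url, max_hops=10):
    """Follow 301/302 hops in a {url: (status, location)} map and return
    (final_status, hop_count). Stops after max_hops to avoid redirect loops."""
    hops = 0
    while hops < max_hops:
        status, location = responses[url]
        if status in (301, 302) and location:
            url, hops = location, hops + 1
        else:
            return status, hops
    return None, hops  # redirect loop or chain too long

responses = {
    "http://domain.com/goodpage": (301, "https://domain.com/goodpage"),
    "https://domain.com/goodpage": (200, None),
    "http://domain.com/badpage": (301, "https://domain.com/badpage"),
    "https://domain.com/badpage": (404, None),
}
print(chain_outcome(responses, "http://domain.com/badpage"))  # (404, 1)
```

In practice you would populate the map with a HEAD-request crawl of the old http URLs, then batch the flagged ones for whatever cleanup approach you settle on.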
Intermediate & Advanced SEO | boxclever
When Mobile and Desktop sites have the same page URLs, how should I handle the 'View Desktop Site' link on a mobile site to ensure a smooth crawl?
We're about to roll out a mobile site. The mobile and desktop URLs are the same. User Agent determines whether you see the desktop or mobile version of the site. At the bottom of the page is a 'View Desktop Site' link that will present the desktop version of the site to mobile user agents when clicked. I'm concerned that when the mobile crawler crawls our site it will crawl both our entire mobile site, then click 'View Desktop Site' and crawl our entire desktop site as well. Since mobile and desktop URLs are the same, the mobile crawler will end up crawling both mobile and desktop versions of each URL. Any tips on what we can do to make sure the mobile crawler either doesn't access the desktop site, or that we can let it know what is the mobile version of the page? We could simply not show the 'View Desktop Site' to the mobile crawler, but I'm interested to hear if others have encountered this issue and have any other recommended ways for handling it. Thanks!
Intermediate & Advanced SEO | merch_zzounds
Should you allow an auto dealer's inventory to be indexed?
Due to the way most auto dealership websites populate inventory pages, should you allow inventory to be indexed at all? The main benefit is more content. The problem is it creates duplicate, or near-duplicate, content. It also creates a ton of crawl errors since the turnover is so short and fast. I would love some help on this. Thanks!
Intermediate & Advanced SEO | Gauge123
Images Sitemap GWT - not indexed?
So we went ahead and created an image sitemap of 2,387 images, one for each product. I was hoping it would give us better exposure in image results. No joy; after 7 days they are only showing as "sent" but not "indexed". Any ideas?
Intermediate & Advanced SEO | bjs2010
What are soft 404s and are they a problem?
Hi, I have some old pages that were coming up in Google WMT as 404s. These had links into them, so I thought I'd 301 them back to either the home page or to a relevant category or page. However, these are now listed in WMT as soft 404s. I'm not sure what this means and whether Google is saying it doesn't like this? Any advice welcomed.
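A soft 404 is Google's label for a page that returns a 200 (or redirects) but whose content looks like "nothing here" to the crawler; blanket-redirecting dead pages to the homepage or to a loosely related page is a common trigger. One rough way to hunt for candidates yourself is to compare each suspect page's text against your site's real 404 page. A hedged sketch using stdlib string similarity (the threshold and sample texts are purely illustrative, not a Google-documented method):

```python
from difflib import SequenceMatcher

def looks_like_soft_404(page_text, error_page_text, threshold=0.8):
    """Flag a 200 page whose body closely matches the site's 404 template."""
    ratio = SequenceMatcher(None, page_text.lower(), error_page_text.lower()).ratio()
    return ratio >= threshold

error_page = "Sorry, the page you requested could not be found."
print(looks_like_soft_404("Sorry, the page you requested could not be found!", error_page))  # True
print(looks_like_soft_404("Aikido classes every Tuesday evening.", error_page))  # False
```

The general fix for the situation described above is to 301 only to genuinely equivalent pages, and let pages with no real replacement return an honest 404 or 410.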
Intermediate & Advanced SEO | Aikijeff
Do you bother cleaning duplicate content from Google's index?
Hi, I'm in the process of instructing developers to stop producing duplicate content, however a lot of duplicate content is already in Google's Index and I'm wondering if I should bother getting it removed... I'd appreciate it if you could let me know what you'd do... For example one 'type' of page is being crawled thousands of times, but it only has 7 instances in the index which don't rank for anything. For this example I'm thinking of just stopping Google from accessing that page 'type'. Do you think this is right? Do you normally meta NoIndex,follow the page, wait for the pages to be removed from Google's Index, and then stop the duplicate content from being crawled? Or do you just stop the pages from being crawled and let Google sort out its own Index in its own time? Thanks FashionLux
Intermediate & Advanced SEO | FashionLux
Will Google visit a non-canonicalized page again and return the page's original ranking?
I have 2 questions about canonicalization. 1. Will Google ever visit Page A again after it has been canonicalized to Page B? 2. If Google visits Page A again and finds that it no longer canonicalizes to Page B, will the original rankings and traffic of Page A return to the way they were before it was canonicalized? Thanks.
Intermediate & Advanced SEO | globalsources.com