Why are "noindex" pages access denied errors in GWT and should I worry about it?
-
GWT calls pages that have "noindex, follow" tags "access denied errors."
How is it an "error" to say, "hey, don't include these in your index, but go ahead and crawl them"?
These pages are thin content/duplicate content/overly templated pages I inherited and the noindex, follow tags are an effort to not crap up Google's view of this site.
The reason I ask is that GWT's detection of a rash of these access denied errors coincides with a drop in organic traffic. Of course, correlation is not necessarily causation.
Should I worry about it and do something or not?
Thanks... Darcy
-
I am a little surprised, because having those pages marked "noindex, follow" should not cause GWT to flag them as errors.
Monica is correct that Google flags anything other than a 200 as an error, but a page with "noindex, follow" should still return an HTTP status code of 200. If it is returning anything else, something is probably misconfigured, and you should investigate why.
As a matter of principle, GWT should report zero errors, period. I have also witnessed, a few times, a correlation between bringing the GWT error count down to zero and an improvement in SERP rankings, but I have no proof that one causes the other.
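One way to sanity-check the point above: confirm the page actually carries the "noindex, follow" directive it is supposed to. A minimal sketch with Python's standard html.parser (the sample HTML is made up; the same function could be fed live markup fetched with urllib.request to verify what the server really sends):

```python
# Sketch: extract robots meta directives from a page so you can confirm
# it is serving "noindex, follow" as intended. Sample HTML is a placeholder.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            # Normalize "noindex, follow" -> ["noindex", "follow"]
            self.directives += [
                d.strip().lower() for d in attrs.get("content", "").split(",")
            ]

def robots_directives(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives

sample = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(robots_directives(sample))  # ['noindex', 'follow']
```

Pair this with a plain status-code check (e.g. `curl -I` or `urllib.request.urlopen`) to verify the page also returns 200.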
-
I had a similar issue where my sitemap and my robots.txt didn't match properly, and the mismatch was causing a slew of errors to show up. Everything falls under a crawler error but "should" clean itself up as the site is re-crawled. I resubmitted an updated sitemap that matched my robots.txt, and the errors went away.
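That kind of sitemap/robots.txt mismatch can be caught automatically. A rough sketch using Python's standard urllib.robotparser (the robots.txt body and URL list here are invented examples, not from the thread):

```python
# Sketch: flag sitemap URLs that robots.txt blocks, since listing blocked
# URLs in a sitemap is a common source of GWT crawl errors.
from urllib.robotparser import RobotFileParser

def blocked_sitemap_urls(robots_txt, sitemap_urls, agent="Googlebot"):
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    # Any URL the crawler is not allowed to fetch is a mismatch.
    return [url for url in sitemap_urls if not rp.can_fetch(agent, url)]

robots_txt = """User-agent: *
Disallow: /dev/
"""
sitemap = [
    "https://example.com/products/widget",
    "https://example.com/dev/old-page",   # blocked, but still in the sitemap
]
print(blocked_sitemap_urls(robots_txt, sitemap))  # ['https://example.com/dev/old-page']
```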
Google also states that these errors don't directly hurt your ranking, but they can hurt indirectly because of user experience. You can always double-check whether the pages are being indexed by doing a "site:" search in Google and seeing if those pages appear.
Now, the errors can also be something of a blessing. We had a design firm redo our website, and they contracted an SEO "expert" to optimize the site before launch. They launched the site, and the next day I opened up GWMT to find our entire website still under "noindex": they forgot to remove the noindex tag from the dev site before pushing it to our main site.
Also, I would consider just redirecting the thin content altogether.
EDIT: And again Ryan sneaks in before me!!!!!!!!
-
Thumbs up to Monica's answer. I'd just add that you could redirect some of those pages to reduce the use of noindex where possible, but it sounds like you've kept them around because they're marginally useful. You can also click the 'ignore' button for a given error message and it will go away.
-
No, I wouldn't worry about it. Google calls them errors, the same as a 404 error; to Google, an "error" is anything that returns a status code other than 200. I have hundreds of noindex pages on my site and it doesn't hurt. I believe it helps, because it removes duplicate content and eliminates bad user experiences.
I have always thought that it is Google's way of double-checking that the webmaster is aware those pages are blocked. There have been times I found URLs in there that weren't supposed to be, and conversely found URLs missing as well. It's checks and balances, in my opinion.
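The "anything other than 200 is an error" idea above can be sketched as a rough classifier. The bucket names here are approximations of how GWT groups crawl errors, not Google's exact labels:

```python
# Sketch: a rough mapping of HTTP status codes to the way GWT buckets
# crawl errors. Bucket names are approximate, not Google's official labels.
def gwt_bucket(status):
    if status == 200:
        return "ok"                # not reported as an error
    if status in (401, 403):
        return "access denied"
    if status == 404:
        return "not found"
    if 500 <= status < 600:
        return "server error"
    return "other"

for code in (200, 403, 404, 503):
    print(code, gwt_bucket(code))
```

Which is why a true "noindex, follow" page, served with a 200, should not land in the access denied bucket at all.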