What to do about "blocked by meta-robots"?
-
The crawl report tells me "Notices are interesting facts about your pages we found while crawling". One of these interesting facts is that my blog archives are "blocked by meta robots".
Articles are not blocked, just the archives.
What is a "meta" robot?
I think it's just normal (since the articles need only be crawled once), but I want a second opinion. Should I care about this?
-
Meta robots refers to the <meta name="robots"> tag in the head section of a page. This is usually the case when a blog is set up with an SEO plugin like All In One SEO, for example, where you can manually set which content is blocked. It's common to block archives, tags, and other sections, on the theory that allowing these to be crawled could either cause duplicate content issues or drain link value from the primary category navigation.
-
In general, there are two ways you can keep your content out of a search engine's results.
-
You can add a Disallow entry to your robots.txt file
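For illustration, a robots.txt entry blocking a blog's archive section might look like this (the /archives/ path is hypothetical; substitute whatever path your blog software actually generates):

```text
User-agent: *
Disallow: /archives/
```

Any compliant crawler reading this will skip every URL whose path begins with /archives/.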
-
You can add a meta tag to your pages:
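As a sketch of that tag: it goes inside the page's head section, and a noindex value asks engines not to list the page (adding follow still lets them follow the links on it; both are standard values):

```html
<head>
  <!-- Ask search engines not to index this page, but to keep following its links -->
  <meta name="robots" content="noindex, follow">
</head>
```

Plugins such as All In One SEO simply generate this tag on archive pages when they are configured to block them.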
What you are saying in either case is "please do not list this content in your search engine" — robots.txt asks crawlers not to fetch the pages at all, while the meta tag lets them be crawled but asks that they not be shown in results.
In general, you would not want to block your archives. There certainly can be specific cases where you only want the public to see your most current content, in which case you can block them.
-