SEOmoz bar: Nofollow and Robots.txt
-
Should the MozBar pick up "nofollow" links that are handled in robots.txt?
The robots.txt blocks categories, but they still show as followed (green) links when using the MozBar.
Thanks!
Holly
ETA: I'm assuming that - disallow: myblog.com/category/ - is comparable to the nofollow tag on category?
-
Thank you, Cyrus, for that great article link. As the article states near the end, it touches on a common problem for those of us who assume all the info at SEOmoz is accurate even though it may not be current (not only SEOmoz, to be fair). I've found several instances where even authorities change their minds, or Google changes things for them.
Anyway, it appears using canonical or meta tags would be the better solution. Unfortunately, neither is possible in Squarespace. I had just about decided to change the robots.txt, get rid of the disallow: /category/, and call it a day. But then I found an example where noindex was used in the robots.txt file of a Squarespace website (specializing in SEM among other things). Probably the "longest" robots.txt I've ever seen!
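For reference, here's roughly what a canonical tag would look like if Squarespace ever allowed editing the page head (the post URL here is just made up for illustration). It would go in the head of the duplicate page, such as a category view, pointing back at the original post:

```html
<link rel="canonical" href="http://myblog.com/blog/chocolate-milk-is-great.html">
```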
http://www.hunchfree.com/robots.txt
Would it be a good idea to use noindex, follow in the robots.txt for /category/ (if that's even possible), or just stick with my "call it a day" solution, at least where robots.txt is concerned?
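For what it's worth, the relevant lines in a robots.txt file like hunchfree.com's would look something like this (the /category/ path is just my guess at what a Squarespace blog would need). Note that Noindex in robots.txt was only ever an unofficial directive that Google never formally committed to supporting, so it may not work at all:

```text
User-agent: *
Disallow: /category/
Noindex: /category/
```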
BTW, I posted a similar question about the reasoning behind the robots.txt for Squarespace websites at the developers forum. Nothing but crickets. Unless it's about design, things pretty much drop like a rock. Oh well.
-
As Phil pointed out, blocking a URL with robots.txt may keep search engines from crawling your pages, but that doesn't mean they won't index those pages. The meta robots noindex, follow tag is a much better choice.
Highly recommend the following article that explains this in more detail:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Unfortunately, Squarespace isn't all that flexible when it comes to meta tags. For the most part, Google is getting better at figuring this kind of duplicate content out, but it's best to address it when you can.
-
Thank you so much for the detailed reply. It's REALLY appreciated. The blog you are referring to is the Squarespace company's blog. This disallow on categories IS, however, on any site that uses their service. But I've done a similar search with my personal blog on Squarespace and a couple of categories still show up in the SERPs anyway. You can edit the robots.txt file if you want, but you have to do a redirect since you don't have root access.
Unfortunately, we can't (at least I don't think we can) include meta tags for noindex on a page-by-page basis. You can use it in robots.txt.
It seems there would be a lot more duplicate content issues with tags than with categories, since tags are more granular.
The point of all this is I'm creating new websites for some of our homeschool students and want to get it right from the start with the site architecture and how we use tags and categories, with a balanced focus on usability as well as optimizing for search. These kids are super interested in the reasoning behind everything, and their questions are tougher than any client's! Ha!
Again, Thanks so much and take care,
Holly
-
Thanks for providing some more detail, Holly. I definitely think it's fine to leave it here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk," at a category URL like this:
site.com/blog/category/milk/
then search engines see the exact same content (your blog post) on two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. Although, in my experience, it's really up to you. Do you feel like your category pages will provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying the noindex, follow tag to these pages. This tag tells the engines NOT to index this page, but to follow all of the links on it. This is better than robots.txt because robots.txt won't always prevent your site from showing up in search results (that's another long story), but the noindex tag will.
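The tag itself goes in the head of each category page and looks like this (whether Squarespace lets you add it is another question):

```html
<head>
  <meta name="robots" content="noindex, follow">
</head>
```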
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see on your site and blog, it doesn't look like the category pages for your blog are actually in your robots.txt file. Have someone double-check.
To check this myself, I just did a google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away. Looks like something isn't going according to plan. Don't worry though, that happens all of the time and it should be an easy fix.
-
I know I may wake up one morning and this will all click, but for now perhaps an example will help me get past this initial hurdle.
Squarespace disallows categories in the robots.txt, but using the MozBar I see the category links are green.
So if I understand (partly, anyway), the disallow in robots.txt keeps the bots from crawling those pages when they come knocking at my site. However, the category links in a blog post are still being followed? Or what's the point?
I'm just trying to understand the reasoning behind disallowing categories and how that should impact the tagging and categorizing of blog posts.
Perhaps I should have started a new question? Or is it fine to leave it here?
-
The nofollow attribute and robots.txt file serve different purposes.
Nofollow Attribute
This attribute is used to tell search engines, "Don't follow this link," or even "Don't follow any links on this page." It doesn't prevent pages from being indexed; it just prevents the search engines from following that link from that particular page.
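For example (the URL here is just a placeholder):

```html
<!-- Nofollow a single link -->
<a href="http://example.com/" rel="nofollow">Example</a>

<!-- Nofollow every link on the page -->
<meta name="robots" content="nofollow">
```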
Robots.txt
This file contains a list of pages and directories that search engines should not crawl. Note, though, that blocking a URL in robots.txt doesn't guarantee it stays out of the index.
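A minimal robots.txt looks like this (the paths are just placeholders):

```text
User-agent: *
Disallow: /category/
Disallow: /search/
```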
To read more about robots.txt check out this page: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html
For more on Nofollow, check out this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Hope this helps!