How to get rid of the message "Search Engine blocked by robots.txt"
-
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most Common Errors & Warnings. Please let me know the procedure by which the SEOmoz PRO crawler can completely crawl my website. Awaiting your reply at the earliest.
Regards,
Prashakth Kamath
-
Thanks Simon for the info. Will check and revert if there are any issues.
Regards,
Prashakth Kamath
-
Thanks Ryan for the info. Will check and revert if there are any issues.
Regards,
Prashakth Kamath
-
Hi Sagar
That was a good reply from Ryan.
Check out http://www.seomoz.org/dp/rogerbot
rogerbot is the name of the SEOmoz crawler bot; the page above has all the info you require.
Regards
Simon
-
The SEOmoz user agent is named rogerbot. You can read more about the SEOmoz crawl process here: http://seomoz.zendesk.com/entries/20034082-lesson-5-crawl-diagnostics
<code>User-agent: rogerbot
Allow: /</code>
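If you want only rogerbot to crawl the site, a minimal robots.txt sketch looks like this (robots.txt is advisory, so it only deters well-behaved crawlers):

```
User-agent: rogerbot
Allow: /

User-agent: *
Disallow: /
```

Bots that match a specific User-agent group obey only that group and ignore the `*` group, so rogerbot gets full access while other compliant crawlers are disallowed.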
-
Thanks Ryan for your immediate reply.
Can you please provide the name and the code of the SEOmoz crawler that I need to enter in my file so that SEOmoz crawls all the webpages of my website? Apart from the SEOmoz crawler, I don't want any other crawler to crawl my website. Please help. Awaiting your reply.
Regards,
Prashakth Kamath
-
That error is pretty straightforward and indicates you have a robots.txt file which is blocking the crawler from accessing your site. The robots.txt file can be read by going to your site URL and adding /robots.txt to it, such as www.mysite.com/robots.txt.
The file can be found in the root directory on your site's web server. Remove or alter the file to allow search engines to crawl your site. More info can be found at http://www.robotstxt.org/
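As a quick sanity check, Python's standard-library robot parser can tell you whether a given robots.txt would block rogerbot. This is a sketch using inline robots.txt text and a hypothetical site URL; against a live site you would use `set_url()` and `read()` instead:

```python
from urllib.robotparser import RobotFileParser

# A blanket-blocking robots.txt -- the kind of file that typically causes
# the "Search Engine blocked by robots.txt" warning.
blocking = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(blocking.splitlines())
print(rp.can_fetch("rogerbot", "http://www.mysite.com/"))  # False: rogerbot is blocked

# After removing the Disallow rule (or allowing the crawler explicitly),
# rogerbot gets through.
allowing = """\
User-agent: rogerbot
Allow: /
"""

rp2 = RobotFileParser()
rp2.parse(allowing.splitlines())
print(rp2.can_fetch("rogerbot", "http://www.mysite.com/"))  # True
```

In practice, point the parser at your real file with `rp.set_url("http://www.mysite.com/robots.txt")` followed by `rp.read()`.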
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of, so I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
<code>User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0</code>
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
-
Will moz crawl pages blocked by robots.txt and nofollow links?
I have over 2,000 temporary redirects in my campaign report. The redirects are mostly for events like being redirected to a login page before showing the actual data. I'm thinking of adding nofollow on the links so Moz won't crawl the redirection, to reduce the notifications. Will this solve my problem?
-
How do Moz tools handle signs for "PHRASE" and [EXACT MATCH] KW queries?
Hello, As some of my projects are in competitive niche markets, I often chase exact match KWs. However, when using the Moz Keyword Difficulty Report I'm getting the SAME search volume result regardless of using broad match, phrase, or exact match KWs. Have I missed something? The ranking results, however, are different and seem to correspond to the different KW match types. Please shed some light on this.
-
Issue getting total links, page & domain authority
Hi guys, I am trying to get total links, page & domain authority using the API. I am requesting the following columns: Cols=6871947673632768328204816384343597383681653687091214 { "fjid": 207343179, "ued": 43324279, "pib": 255645, "ptrr": 0.0056131743357352125, "fmrp": 8.246626591590841, "unid": 954915, "fjf": 4003651, "fjr": 0.00040067116628622016, "ftrp": 8.308303969566644, "ftrr": 0.0012189619975325583, "fejp": 9.265328830369816, "pnid": 45883246, "fjd": 2480265, "ujfq": 1277385, "pjip": 1230240, "fjp": 9.586342983782004, "fuid": 294877628, "uu": "www.google.com/", "pejr": 0.0004768398971363439, "ufq": "www.google.com/", "pejp": 9.647424778525615, "ujp": 300689, "utrp": 7.916901429865898, "ptrp": 9.487254666203722, "utrr": 0.001639219985667878, "fmrr": 0.000731592927123369, "pda": 100, "pjd": 5600882, "ulc": 1342758719, "fnid": 12165784, "fejr": 0.00016052965883996156, "ujb": 107264 } I cannot see the UPA column returned in the JSON object. I'm using 34359738368 for the UPA column. I need to retrieve the three fields (page authority, domain authority and total links) in the same query. Is it possible?
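For reference, the Mozscape `Cols` parameter is a bit mask: you OR (or sum) one flag per metric you want returned. The sketch below illustrates the arithmetic; the UPA flag is the value quoted in the question, while the DA and links flags are assumptions that should be verified against the Mozscape API docs:

```python
# Mozscape "Cols" is a bit mask; OR together one flag per metric wanted.
COL_PAGE_AUTHORITY = 34359738368    # upa -- value quoted in the question above
COL_DOMAIN_AUTHORITY = 68719476736  # pda -- assumed flag, verify against the docs
COL_LINKS = 2048                    # uid, total links -- assumed flag

cols = COL_PAGE_AUTHORITY | COL_DOMAIN_AUTHORITY | COL_LINKS
print(cols)  # 103079217152 -- pass this single number as ?Cols= on the request
```

A field is missing from the JSON response whenever its flag is not part of the mask, so a concatenated or mistyped `Cols` value silently drops columns such as `upa`.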
-
"Too many on page links" phantom penalty? What about big sites?
So I am consistently over the recommended "100 links" rule on our site's pages because of our extensive navigation and plentiful footer links (somewhere around 300 links per page). I know that there is no official penalty for this, but rather that it affects the "link juice" of each link on there. I guess my question is more about how places like Zappos and Amazon get away with this? They have WAY over 100 links per page... in fact I think the Zappos footer is 100+ links alone. This overage doesn't seem to affect their domain rankings and authority, so why does SEOmoz place so much emphasis on this error?
-
Optimization report card says "F" but I'm ranking #1
Using the SEOmoz PRO tools, several of the terms I have listed get a report card of "F", but they also show a Google rank of #1 or #2. Is this unusual? And should I spend a lot of time "fixing" this if we're already ranking high?