Error Code 612: Error response for robots.txt
-
Hi,
We are getting "Error Code 612: Error response for robots.txt" in our crawl, but everything looks to be OK with the robots.txt file.
Can you confirm what is wrong?
Thanks
-
Hi Wendy! Kristina from Moz's Help Team here. I wanted to chime in as I had a chance to look over your site, and it appears that your site is blocking requests from AWS.
We get a "403 Forbidden" error when attempting to access your site from AWS: http://screencast.com/t/P858BVEQk
Additionally, hurl.it, a third-party tool that also uses AWS, gets an Internal Server Error when trying to access your site: http://screencast.com/t/N5T822Zpdo
Please reconnect with your developers and make sure they address the issue of your site blocking AWS; that should resolve the error you're currently seeing in Moz.
I hope this helps, but please let us know if there's more we can assist with!
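For anyone debugging a similar report: error 612 fires when the crawler gets an error response for robots.txt itself. A rough sketch of that decision in Python, just to illustrate the logic described above (Moz's actual crawler code and thresholds aren't published in this thread, so treat this as an assumption, not Moz's implementation):

```python
def robots_txt_outcome(status: int) -> str:
    """Roughly classify how a crawler might treat the robots.txt response.

    A 403 like the one shown in the screencast above lands in the
    "error response" bucket, which is what error code 612 reports.
    """
    if 200 <= status < 300:
        return "ok"                # robots.txt fetched and parsed
    if 300 <= status < 400:
        return "follow redirect"   # subject to the crawler's redirect limit
    return "error response"        # 4xx/5xx on robots.txt itself

print(robots_txt_outcome(403))  # the case described in this thread
```

So even a robots.txt file whose contents are perfectly valid will trigger the error if the server refuses the request before the file is ever served.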
-Kristina
-
Chiaryn, can you take a look at something? I am getting a 612 error on this website: www.seminolepowersports.com. The developer is telling me there is nothing wrong from what they can see, and they say Moz uses AWS servers, which they have blocked from the site. Questions for you:
- Does Moz use AWS servers, and could it be that the site is blocking them?
- Or is the site confusing Moz's bot, Roger?
Thank you for your input.
Wendy
-
Hey David, thanks for your question.
I took a look at your campaign, and it seems this is a bit different from the case in the previous post that Thomas linked to in his reply.
It actually looks like you have a redirect loop in place, which could be confusing our bot, Roger. The robots.txt page redirects to the www version of the homepage, which redirects to an /en/home subfolder, which redirects to /en/home?r=US. You can verify this using the third-party tool https://httpstatus.io/ (http://www.screencast.com/t/pk4fvGXJ1).
I can't say with complete certainty that this is causing the error message you are seeing, as I have never seen a redirect loop on a site's robots.txt file before, but I do know that the crawler will only follow two redirects; any more than that will prevent us from accessing the page, which would likely be reported as an error with the robots.txt.
I would recommend fixing this so that you have only one 301 in place pointing to a page that returns a 200, or having the site's robots.txt file respond directly with a 200 status. This will need to be done by your site administrator or developer.
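The two-redirect limit described above can be sketched as a small resolver. The chain below mirrors the one described in this reply (robots.txt → www homepage → /en/home → /en/home?r=US), but the domain is a placeholder and the exact hop limit is taken from this reply, not from Moz's published crawler specs:

```python
def resolve(url, redirects, max_hops=2):
    """Follow redirects up to max_hops; return the final URL,
    or None if the chain is longer than the crawler will follow."""
    hops = 0
    while url in redirects:
        if hops >= max_hops:
            return None  # too many redirects: the fetch fails
        url = redirects[url]
        hops += 1
    return url

# Three redirects, as described above -- one more than the crawler follows.
chain = {
    "http://example.com/robots.txt": "http://www.example.com/",
    "http://www.example.com/": "http://www.example.com/en/home",
    "http://www.example.com/en/home": "http://www.example.com/en/home?r=US",
}

print(resolve("http://example.com/robots.txt", chain))  # None: chain too long
```

Cutting the chain down to a single 301 (or serving robots.txt with a 200 directly) keeps the hop count inside the limit, which is exactly the fix recommended above.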
-
I'm not sure if this is of any help to you: https://mza.seotoolninja.com/community/q/without-robots-txt-no-crawling
Related Questions
-
When I crawl my site On Moz it says it can't access the robots.txt file, but crawl is fine on SEM Rush - Anyone know any reason for this?
Hi guys, when I try to run a site crawl on Moz, it returns an error saying that it has failed due to an error with the robots.txt file. However, my site can be crawled by SEMrush with no mention of robots.txt issues. My developer has looked into it and insists there is no problem with my robots.txt, and I've tried the Moz crawl at least 6 times over an 8-week period. Has anyone ever seen such a large discrepancy between Moz and SEMrush, or have any ideas why Moz has this issue with my site? TIA everyone
Getting Started | Webreviewadmin
-
Track rank by multiple zip codes, radius, or state?
I work for a hospital that serves patients in multiple states. I'd love to track keywords locally, but entering one town or zip code at a time would take days. Is it possible to track locally by multiple zip codes, by radius, or by states?
Getting Started | jaclyn_stevens
-
901 error code showing url back to back in crawl
Hi Everyone, I'm absolutely dumbfounded by this 901 issue (pages showing our URL back to back). Our site is hosted on BigCommerce: https://www.santabarbarachocolate.com. When I look for these pages being crawled, I don't find them. I've called BC for help and I can't seem to find a solution, or where to turn to fix the issue at hand, or even whether it matters. Please see below what the Moz crawl shows. Could this be related to Yotpo or some app we have running? Or does this even matter, and does it have any influence on rank? Do you have recommendations or ideas? Thanks so much.
Pages with Crawl Attempt Error as of Mar 3, all with status code 901 (Error Code 901: DNS Errors Prevented Crawler from Resolving Hostname):
- http://www.santabarbarachocolate.comhttp/www.santabarbarachocolate.com/100-percent-pure-cacao-unsweetened-baking-chocolate
- http://www.santabarbarachocolate.comhttp/www.santabarbarachocolate.com/buy-wholesale-bulk-chocolate
- http://www.santabarbarachocolate.comhttp/www.santabarbarachocolate.com/organic-chocolate-wholesale
Getting Started | santabarbarachocolate
-
Error Selecting Search Engine During Moz Campaign Set-Up
Hi, I'm setting up a Moz campaign for a new client, and when I get to the third step, "Choose the search engines for which you would like to track rankings," there's nothing to select: no drop-down or anything, just a blank field under the heading. If I try to advance, I get an error pop-up that says I must select a search engine. Has anyone else run into this problem? What should I do?
Getting Started | tommyKPseo
-
How Do I Scan My New Site & Grade My Work With The Robots Turned Off? For Pre-Inspection before I launch my Site?
I have a new site that has all the bots turned off so Google can't index it until I'm finished. I've been working on this site for a couple of months now, optimizing it, and I was wondering if there is any way I can run a preliminary scan of the site for my titles, URLs, headers, alt tags, and pretty much anything else that will grade my work and tell me if I did anything wrong. Can Moz do this with the bots turned off? Thanks
Getting Started | Inframan
-
'not a valid url' error in campaign set up
I get the error "not a valid URL" when I'm trying to set up a campaign. I know it's a valid URL. I have tried with www, non-www, http://, and https://. When I use https it lets me start, but then I get an error that https is forwarding to http and that I need to use that instead. When I then put in http, I get the original error. Thanks in advance for your help.
Getting Started | HighVoltage
-
High Number of Crawl Errors for Blog
Hello All, we have been having an issue with very high crawl errors on websites that contain blogs. Here is a screenshot of one of the sites we are dealing with: http://cl.ly/image/0i2Q2O100p2v. Looking through the links that turn up in the crawl errors, the majority of them (roughly 90%) are auto-generated by the blog's system. This includes category/tag links, archived links, etc. A few examples:
- http://www.mysite.com/2004/10/
- http://www.mysite.com/2004/10/17/
- http://www.mysite.com/tagname
As far as I know (please correct me if I'm wrong!), search engines will not penalize you for things like this appearing on auto-generated pages. Also, even if search engines did penalize you, I do not believe we can set a unique meta tag for auto-generated pages. Regardless, our client is very concerned to see these high numbers of errors in the reports, even though we have explained the situation to him. Would anyone have any suggestions on how to either 1) tell Moz to ignore these types of errors or 2) adjust the website so that these errors no longer appear in the reports? Thanks so much! Rebecca
Getting Started | Level2Designs
-
How to get Moz to crawl a staging domain that is blocked by robots.txt
Is it possible to get Moz to run a crawl report on a domain blocked by robots.txt and actually display all errors, instead of only one saying the domain was blocked in robots.txt? Is there anything I can add to robots.txt to let Moz do the crawl report but still hinder Google from crawling a staging domain?
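One pattern that may do what this question asks: allow Moz's crawler by user-agent (its token is rogerbot) while disallowing everyone else. Below is a sketch of such a robots.txt, checked with Python's stdlib robots.txt parser; the staging URL is a placeholder, and this only hinders well-behaved crawlers — it does not password-protect the staging site:

```python
from urllib.robotparser import RobotFileParser

# Candidate staging robots.txt: rogerbot allowed, all other bots blocked.
rules = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("rogerbot", "https://staging.example.com/page"))   # True
print(rp.can_fetch("Googlebot", "https://staging.example.com/page"))  # False
```

Note that user-agent matching details vary between crawlers (the stdlib parser matches by substring, for instance), so verify the exact token against Moz's own documentation before relying on this.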
Getting Started | classifiedtech