Crawl error robots.txt
-
Hello, when we try to run a site crawl to analyze our page, the following error appears:
**Moz was unable to crawl your site on Nov 15, 2017.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster.
Can you help us?
Thanks!
-
@Linda-Vassily yes
-
The page is: https://frizzant.com/ And it doesn't have a noindex tag.
-
Thanks Linda and Tawny! I'll check it.
-
Hey there!
This is a tricky one — the answer to these questions is almost always specific to the site and the Campaign. For this Campaign, it looks like your robots.txt file returned a 403 forbidden response to our crawler: https://www.screencast.com/t/f42TiSKp
Do you use any kind of DDOS protection software? That can give our tools trouble and cause us to be unable to access the robots.txt file for your site.
I'd recommend checking with your web developer to make sure that your robots.txt file is accessible to our user-agent, rogerbot, and returning a 200 OK status for that user-agent. If you're still having trouble, it'll be easier to assist you if you contact us through [email protected], where we can take a closer look at your account and Campaign directly.
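To reproduce that check yourself, here's a minimal Python sketch (not an official Moz tool) that requests your robots.txt while presenting `rogerbot` as the user-agent; the exact header string a real crawler sends may differ, so treat this as an approximation:

```python
import urllib.error
import urllib.request

def check_robots_status(site, user_agent="rogerbot"):
    """Fetch <site>/robots.txt with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(
        site.rstrip("/") + "/robots.txt",
        headers={"User-Agent": user_agent},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status  # 200 means the crawler can read the file
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 if a firewall or DDoS filter blocks this agent
```

If this returns 403 for `rogerbot` but 200 for a browser-style user-agent, that points at server-side filtering rather than the robots.txt rules themselves.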
-
I just popped that into ScreamingFrog and I don't see a noindex on that page, but I do see it on some other pages. (Though that shouldn't stop other pages from being crawled.)
Maybe it was just a glitch that happened to occur at the time of the crawl. You could try doing another crawl and see if you get the same error.
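If you'd rather not re-crawl with a full tool, a quick stdlib sketch can spot-check a single page's HTML for a robots meta noindex (this only covers the meta tag, not the X-Robots-Tag HTTP header):

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Flags a <meta name="robots"> tag whose content includes 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if d.get("name", "").lower() == "robots" and "noindex" in (d.get("content") or "").lower():
            self.noindex = True

def has_noindex(html):
    """Return True if the HTML contains a robots meta tag with a noindex directive."""
    finder = NoindexFinder()
    finder.feed(html)
    return finder.noindex
```

Feed it the raw HTML of the page in question; `True` means the page asks crawlers not to index it.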
-
The page is: http://www.yogaenmandiram.com/ And it doesn't have a noindex tag.
-
Hmm. How about on the page itself? Is there a noindex?
-
Yes, our robots.txt is very simple:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
-
That just says that you are blocking the Moz crawler. Take a look at your robots.txt file and see if you have any exclusions in there that might cause that page not to be crawled. (Try going to yoursite.com/robots.txt or you can learn more about this topic here.)
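You can sanity-check the rules quoted earlier in this thread with Python's built-in robots.txt parser; this is just a local sketch of how a rule-following crawler would read that file (it says nothing about server-level blocks like a 403):

```python
from urllib.robotparser import RobotFileParser

# The rules below mirror the robots.txt quoted in this thread.
rules = """User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# The homepage is allowed for rogerbot; only /wp-admin/ is disallowed.
print(rp.can_fetch("rogerbot", "/"))          # True
print(rp.can_fetch("rogerbot", "/wp-admin/"))  # False
```

Since `/` comes back allowed, these rules alone wouldn't ban the Moz crawler, which is consistent with the 403 (server-level block) diagnosis above rather than a robots.txt exclusion.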
-
Sorry, the image doesn't appear.
Try now.
-
It looks like the error you are referring to did not come through in your question. Could you try editing it?