Incorrect crawl errors
-
A crawl of my website has indicated that there are some 5XX server errors:
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 803: Incomplete HTTP Response Received
Error Code 803: Incomplete HTTP Response Received
Error Code 608: Page not Decodable as Specified Content Encoding
Error Code 902: Network Errors Prevented Crawler from Contacting Server
The five pages in question are all in fact perfectly working pages and are returning HTTP 200 codes. Is this a problem with the Moz crawler?
-
Thanks for this! I didn't think to check the server logs. I'll have them checked and make sure the server isn't blocking Moz from crawling. We have thousands of URLs on our website and quite a strict security policy on the server, so I imagine Moz has probably been blocked.
Thanks,
Liam
-
Hi,
These error codes are Moz custom codes listing the errors it encounters when crawling your site. It's quite possible that these pages load fine when you check them in a browser (and that Googlebot is able to crawl them as well).
You can find the full list of crawl errors here: https://mza.seotoolninja.com/help/guides/search-overview/crawl-diagnostics/errors-in-crawl-reports. You could check these URLs with a tool like web-sniffer.net to inspect the responses, and review the configuration of your server.
-
608 errors: Home page not decodable as specified Content-Encoding
The server response headers indicated the response used gzip or deflate encoding, but our crawler could not decode it. To resolve 608 errors, fix your server so that it properly encodes the responses it sends.
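You can reproduce this check locally with Python's standard library: compress a body, then try to decode it with the encoding the headers claim. The function name is ours, not Moz's; it's a sketch of the kind of check the crawler performs.

```python
import gzip
import zlib

def check_content_encoding(body: bytes, declared: str) -> bool:
    """Return True if `body` decodes with the declared Content-Encoding,
    i.e. the check a crawler performs before raising a 608-style error."""
    try:
        if declared == "gzip":
            gzip.decompress(body)
        elif declared == "deflate":
            zlib.decompress(body)
        return True
    except (OSError, zlib.error):
        return False

# A correctly gzip-encoded body passes the check...
assert check_content_encoding(gzip.compress(b"<html>ok</html>"), "gzip")
# ...but a plain-text body mislabelled as gzip is exactly a 608 situation.
assert not check_content_encoding(b"<html>ok</html>", "gzip")
```

If the second case is what your server produces (headers say `Content-Encoding: gzip`, body is plain text), a browser may silently tolerate it while a crawler reports an error.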
803 errors: Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
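A rough way to see what "incomplete" means here: the headers arrive, but the body is shorter than the declared Content-Length. This is a simplified sketch (it ignores chunked encoding and other framing rules), not Moz's actual implementation:

```python
def is_complete_response(raw: bytes) -> bool:
    """Rough truncation check: headers arrived, but the body is shorter
    than the declared Content-Length (an 803-style incomplete response)."""
    head, sep, body = raw.partition(b"\r\n\r\n")
    if not sep:
        return False  # connection closed before the headers finished
    for line in head.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            return len(body) >= int(value.strip())
    return True  # no Content-Length; length alone can't prove truncation

full = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
cut = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\n"
assert is_complete_response(full)
assert not is_complete_response(cut)
```

The `cut` case above is the 803 scenario: status line and headers, then the server hangs up before any body bytes arrive.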
902 errors: Unable to contact server
The crawler resolved an IP address from the host name but failed to connect at port 80 for that address. This error may occur when a site blocks Moz's IP address ranges. Please make sure you're not blocking AWS.
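If you suspect a firewall rule, one way to sanity-check a block list against a crawler's addresses is Python's `ipaddress` module. The ranges below are RFC 5737 documentation addresses, stand-ins for whatever CIDR ranges your firewall actually lists:

```python
import ipaddress

# Hypothetical firewall rules; look up the real crawler and AWS ranges.
# These networks are placeholders for illustration only.
BLOCKED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # example "scanner" block
    ipaddress.ip_network("198.51.100.0/24"),  # example cloud range
]

def is_blocked(crawler_ip: str) -> bool:
    """True if a firewall rule would refuse connections from this IP,
    which the crawler reports as a 902 'unable to contact server'."""
    addr = ipaddress.ip_address(crawler_ip)
    return any(addr in net for net in BLOCKED_RANGES)

assert is_blocked("203.0.113.7")
assert not is_blocked("192.0.2.1")
```

A strict "block unknown scanners" policy, like the one Liam describes, often sweeps crawler IPs into ranges like these without anyone noticing.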
Without the actual URLs it's impossible to guess what is happening in your specific case.
Hope this helps,
Dirk