804 : HTTPS (SSL) Error in Crawl Test
-
So I am getting this 804 error, but I have checked our security certificate and it looks to be just fine. In fact, we have another 156 days before renewal on it. We did have some issues with this a couple of months ago, but it has been fixed. Now, there is a 301 from http to https, and I did not start the crawl on https, so I am curious if that is the issue. Just wanted to know if anybody else has seen this and if you were able to remedy it.
Thanks,
Chris Birkholm -
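For anyone hitting the same wall: even when the certificate looks fine in a browser, the crawler may be served a different or incomplete chain. A quick way to check what the server actually hands out is to pull the certificate's expiry yourself. A minimal Python sketch (the hostname is a placeholder; the live check needs network access):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_remaining(not_after, now=None):
    """Days until a certificate's 'notAfter' timestamp, as returned by
    getpeercert() in OpenSSL text form, e.g. 'Jun 01 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).total_seconds() / 86400

def fetch_not_after(hostname, port=443):
    """Fetch the live certificate's expiry string over a real TLS handshake,
    with SNI set the way a crawler would set it."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["notAfter"]

# Example (requires network): days_remaining(fetch_not_after("example.com"))
```

If the handshake itself fails here, that is a stronger signal than a browser check, since browsers tolerate some chain problems that automated crawlers do not.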
Hi Chris! Did that post help?
-
Hi Chris!
Take a look at the following post that has some good suggestions for resolving the 804 error you're seeing:
https://mza.seotoolninja.com/community/q/error-code-804-https-ssl-error-encountered
If you continue to have further questions, send us your information at [email protected] and we'll take a look.
Thanks!
Kevin
Help Team
Related Questions
-
Is the update site crawl feature following robot.txt rules?
I noticed that most of the errors would not be occurring if Moz's tool followed the rules implemented in the site's robots.txt. Has anyone else seen this problem, and do you know if Moz will fix it?
Moz Bar | | jamestown0 -
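For reference, you can check for yourself whether a flagged URL should even be crawlable under your robots.txt, using Python's standard-library parser. The rules and URLs below are made up for illustration (Moz's crawler identifies itself as rogerbot):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, for illustration only.
RULES = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

def allowed(robots_txt, user_agent, url):
    """True if a crawler that honors robots.txt may fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

print(allowed(RULES, "rogerbot", "https://example.com/private/page"))  # False
print(allowed(RULES, "rogerbot", "https://example.com/blog/post"))     # True
```

If a URL that this check disallows still shows up in the crawl report, that is worth flagging to support.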
Find SEO errors
Hi, I have a Moz Pro account. Is there any way to automatically find images without an ALT tag, and also noindex/nofollow pages? Cheers,
Moz Bar | | viatrading10 -
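While waiting on a tool-side answer, a rough stand-in is easy to script: scan a page's HTML for img tags with no alt text and for robots meta directives. A minimal sketch using only the standard library (the sample HTML is invented; note it also flags intentionally empty alt="" values):

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collect <img> tags missing alt text and any robots meta directives."""
    def __init__(self):
        super().__init__()
        self.images_missing_alt = []
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            # Missing OR empty alt is flagged here; decorative images
            # legitimately use alt="", so review before "fixing".
            self.images_missing_alt.append(attrs.get("src", "(no src)"))
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots_directives.append(attrs.get("content", ""))

doc = """
<html><head><meta name="robots" content="noindex,nofollow"></head>
<body><img src="a.png" alt="logo"><img src="b.png"></body></html>
"""
parser = AuditParser()
parser.feed(doc)
print(parser.images_missing_alt)  # ['b.png']
print(parser.robots_directives)   # ['noindex,nofollow']
```

Run it over each fetched page and you have a basic alt/noindex audit without waiting on a crawl cycle.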
Crawl test csv has lost its formatting??
All the columns/headings merged into column A. Has anyone else noticed this over the past few days?
Moz Bar | | Moving-Web-SEO-Auckland0 -
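Everything landing in column A usually means the spreadsheet app expected a different list separator than the file uses. If you parse the CSV yourself, you can let the delimiter be auto-detected; a small sketch (the sample row is invented):

```python
import csv
import io

def rows(text):
    """Auto-detect the delimiter (comma, semicolon, or tab) and parse rows."""
    dialect = csv.Sniffer().sniff(text, delimiters=",;\t")
    return list(csv.reader(io.StringIO(text), dialect))

sample = "URL;Status;Title\nhttps://example.com/;200;Home\n"
print(rows(sample))
# [['URL', 'Status', 'Title'], ['https://example.com/', '200', 'Home']]
```

In Excel itself, the Text Import wizard (choosing the delimiter explicitly) gets the same result without code.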
Crawl Report Internal Links Count
We recently ran a crawl report on www.phase1tech.com. Some of the pages are coming back with a large amount of 'internal links'. These 2 pages for example are showing 800 internal links: http://www.phase1tech.com/Upcoming-Events
Moz Bar | | AISEO
http://www.phase1tech.com/Contact
At best there are approximately 70 links on the page. Where is the 800 number coming from?
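One likely explanation: crawl tools count every anchor occurrence, including template links repeated in headers, footers, and (on event pages) calendar or pagination widgets, so the count can far exceed what you see at a glance. A toy sketch of how such counting works (the page snippet and URLs are placeholders):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCounter(HTMLParser):
    """Count internal vs. external <a href> occurrences, the way a crawler would:
    every occurrence counts, even repeats of the same target URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.host = urlparse(base_url).netloc
        self.internal = 0
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href or href.startswith("#"):
            return  # skip empty hrefs and same-page anchors
        if urlparse(urljoin(self.base_url, href)).netloc == self.host:
            self.internal += 1
        else:
            self.external += 1

page = ('<a href="/about">About</a>'
        '<a href="https://twitter.com/x">Tweet</a>'
        '<a href="/about">About (footer)</a>'
        '<a href="#top">Top</a>')
counter = LinkCounter("https://www.phase1tech.com/")
counter.feed(page)
print(counter.internal, counter.external)  # 2 1
```

Note the repeated /about link counts twice; multiply that by a calendar widget emitting a link per day and 800 becomes plausible.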
I'm getting a Crawl error 605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
The website is www.bigbluem.com and is a WordPress site. I'm getting the following error: 605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag. But what is weird is that the domain it lists below that is http://None/BigBlueM.com. Any advice?
Moz Bar | | TumbleweedPDX1 -
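A 605 can come from any of the three mechanisms in the error name, and the X-Robots-Tag response header is the easiest to overlook since it never appears in the page source. A hedged sketch of how a crawler might interpret that header (the bot name and scoping logic are illustrative, not Moz's actual implementation):

```python
def x_robots_blocks(header_value, bot="rogerbot"):
    """Return True if an X-Robots-Tag value forbids indexing for this bot.
    Handles the global form ('noindex, nofollow') and the bot-scoped form
    ('googlebot: noindex'). This is a simplification: directives such as
    'unavailable_after: <date>' are not handled."""
    value = header_value.lower()
    scope, _, rest = value.partition(":")
    if rest and " " not in scope.strip():  # bot-scoped form
        if scope.strip() not in (bot.lower(), "*"):
            return False  # scoped to some other crawler
        value = rest
    directives = {d.strip() for d in value.split(",")}
    return bool({"noindex", "none"} & directives)

print(x_robots_blocks("noindex, nofollow"))   # True
print(x_robots_blocks("googlebot: noindex"))  # False (scoped to another bot)
print(x_robots_blocks("rogerbot: noindex"))   # True
```

Checking the live header is as simple as `curl -I https://www.bigbluem.com/` and looking for an X-Robots-Tag line; WordPress SEO plugins are a common source of one.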
Moz is reporting a broken link error but GWT is not
My latest Moz report is showing a 404 for: http://www.fateyes.com/how-will-googles-hummingbird-affect-your-search-ranking/”ht
Moz Bar | | gfiedel
(and showing the link this way, with the characters after the last / which are not part of the page URL). Google Webmaster Tools says we have no errors. I'm wondering why there is this discrepancy, and how I can track down where this link is originating from on our site. I've tried downloading screamingfrog and deeptrawl to no avail (Java issues). I've also tried a couple of services online and installed the Broken Link Checker plugin with no luck finding it. Any suggestions? Thanks in advance!
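The trailing ”ht strongly suggests a smart quote got pasted into an href somewhere on the site. If you can dump your pages' HTML, you can search every href for the junk fragment; a small illustrative sketch (the page contents and URLs are invented):

```python
import re

def pages_linking_to(pages, fragment):
    """Given a mapping of {url: html}, return the pages whose hrefs
    contain the given fragment (e.g. a percent-encoded smart quote)."""
    href_re = re.compile(r'href\s*=\s*["\']([^"\']*)["\']', re.I)
    return [url for url, html in pages.items()
            if any(fragment in h for h in href_re.findall(html))]

pages = {
    "/a": '<a href="/how-will-googles-hummingbird/%E2%80%9Dht">bad</a>',
    "/b": '<a href="/contact">fine</a>',
}
print(pages_linking_to(pages, "%E2%80%9Dht"))  # ['/a']
```

A curly right double quote is %E2%80%9D once percent-encoded, which matches the garbage in the reported URL.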
Why do the crawl diagnostics indicate duplicate page content among blog postings hosted by WordPress?
Does anyone know why the crawl diagnostics indicate duplicate page content on the blog we are hosting on WordPress? And does anyone know how to fix this issue? The content is not duplicated, or at least does not appear to be.
Moz Bar | | AndreaKayal0 -
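A common cause on WordPress is that the same post is reachable under several URLs (category, tag, date, and paginated archives), which crawlers flag as duplicates even though the content was written once; a rel=canonical tag on the post usually resolves it. To confirm two URLs really serve identical text, you can fingerprint the normalized content; a toy sketch with invented pages:

```python
import hashlib

def content_fingerprint(text):
    """Hash of whitespace-normalized, lowercased text; pages that serve
    the same words get the same fingerprint."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

pages = {
    "/post/my-article/": "Hello   world. This is the post.",
    "/category/news/my-article/": "Hello world. This is the post.",
    "/about/": "A different page entirely.",
}
seen = {}
for url, body in pages.items():
    seen.setdefault(content_fingerprint(body), []).append(url)
dupes = [urls for urls in seen.values() if len(urls) > 1]
print(dupes)  # [['/post/my-article/', '/category/news/my-article/']]
```

If the fingerprints match across archive and post URLs, the duplication is structural, not editorial.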
Moz "Crawl Diagnostics" doesn't respect robots.txt
Hello, I've just had a new website crawled by the Moz bot. It's come back with thousands of errors saying things like: duplicate content, overly dynamic URLs, and duplicate page titles. The duplicate content & URLs it's found are all blocked in the robots.txt, so why am I seeing these errors?
Moz Bar | | Vitalized
Here's an example of some of the robots.txt that blocks things like dynamic URLs and directories (which Moz bot ignored):
Disallow: /?mode=
Disallow: /?limit=
Disallow: /?dir=
Disallow: /?p=*&
Disallow: /?SID=
Disallow: /reviews/
Disallow: /home/
Many thanks for any info on this issue.
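One possible explanation: the original robots.txt standard defines only prefix matching, and '*' wildcards (as in /?p=*&) are a later extension that not every crawler implements the same way. A sketch of how wildcard-style matching works under Google's documented rules ('*' matches any run of characters, '$' anchors the end; this is illustrative, not Moz's implementation):

```python
import re

def wildcard_rule_matches(rule, path):
    """Google-style robots.txt path matching: '*' matches any characters,
    '$' anchors the end, and everything else matches as a prefix."""
    pattern = re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(pattern, path) is not None

print(wildcard_rule_matches("/?p=*&", "/?p=3&dir=asc"))    # True
print(wildcard_rule_matches("/?p=*&", "/?p=3"))            # False
print(wildcard_rule_matches("/reviews/", "/reviews/item")) # True
```

If a crawler does prefix-only matching, the plain rules (Disallow: /reviews/, Disallow: /home/) should still hold, so errors under those paths would point to the crawler not re-fetching robots.txt rather than a wildcard issue.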