4XX client error with email address in URL
-
I have an unusual situation I have never seen before, and I did not set up the server for this client. The 4XX report lists a string of about 74 URLs similar to this:
http://www.websitename.com/about-us/[email protected]
I will be contacting the server host as well to troubleshoot this issue. Any ideas?
Thanks
-
Hi EliteVenu! I'm so glad Ryan pointed you in the right direction. If that turns out to fix the problem, mind marking one or both of his responses as a "Good Answer?"
-
Great! Glad I could help.
-
That gave me the right direction to look in! A social icon plugin did not require the mailto: prefix in its dashboard settings (it only said "enter your email address here"), and the theme wrote the address directly as an href in the theme's code. I looked at the source code but overlooked this small detail. I removed the social icon email, so I will see if that helps.
Thanks for the response!
-
Hi there! Tawny from the Help Team here - I think I can help provide a little bit of insight!
If you take a look at the Site Crawl report for this site's campaign and look at just the 4XX client errors, you'll see a Linking Page column in the table below the graph. That's the page from which our crawler arrived at the 404 page, and is where you can start looking for what went wrong.
I'd recommend taking a peek at that Linking Page's source code and searching for the email address - that's likely where you'll find the issue.
I hope this helps! Feel free to write in to us at [email protected] if you still have questions and we'll do our best to help you out!
-
If that's what you're seeing it looks like someone used a relative href link instead of a mailto link for emails on the about us page.
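To see why that produces the 404-style URLs in the report, here's a minimal sketch (Python, just for illustration) of how a relative href resolves against the current page, using the example URL from the question (the email address is the redacted form shown there):

```python
from urllib.parse import urljoin

page_url = "http://www.websitename.com/about-us/"   # the linking page
bad_href = "[email protected]"                       # bare address, no mailto: scheme
good_href = "mailto:[email protected]"               # correct form

# A schemeless href is resolved relative to the current page, producing
# exactly the kind of 4XX URL reported by the crawler:
print(urljoin(page_url, bad_href))
# http://www.websitename.com/about-us/[email protected]

# A mailto: href carries its own scheme, so it is left untouched:
print(urljoin(page_url, good_href))
# mailto:[email protected]
```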
Related Questions
-
Site Crawl 1-page 301 status error but httpstatus.io says its 403
I am trying to run a site crawl for my website and Moz is only crawling 1 page, with a home page URL status code of 301. However, when I run it through httpstatus.io it gives a 403 status error. I'm curious why Moz says it's a 301 while httpstatus.io says 403. Is there anything I can do in Moz first to get the site crawled before asking my developers to look into the 403 error?
Moz Bar | JohnConover
-
Moz keyword mention on-page counting errors
Hi. Moz is showing 18 mentions of the keyword 'street furniture' on this landing page: https://www.broxap.com/street-furniture.html But I can only count 6 in total in the body copy, and 13 if you include navigation links. This is the same on other pages for that keyword. Does anyone know where it's counting these extra keywords from? I don't want to fall foul of keyword stuffing, but as far as I can see we're not! Could Moz be miscalculating? Any help appreciated! Thanks, Joe
Moz Bar | iweb_agency
-
That URL is inaccessible Moz grader?
Hi all, I'm having some issues getting my site graded (www.balihaiphoto.com + "Kauai wedding photographer") with On-Page Grader, whereas when I enter another photographer's site, www.jmoellerphoto.com, I get results. Is there any reason this is happening that I can correct? Many thanks for any help! -Jon
Moz Bar | Jon_Gibb
-
Perplexed by last MOZ crawling duplicate content errors
In the last crawler issues report from Moz I can see many pages listed as duplicate content with 0 duplicate URLs, like this: http://imgur.com/fbikRVq I am puzzled; what does it mean?
Moz Bar | max.favilli
-
Crawl errors: duplicate content even with canonical links
Hi, I am getting some duplicate content errors in my crawl report for some of our products: www.....com/brand/productname1.html www.....com/section/productname1.html www.....com/productname1.html We have a canonical in the header for all three pages: <link rel="canonical" href="www.....com/productname1.html">
Moz Bar | phes
-
408 errors in crawl diagnostics
Best community, The Crawl Diagnostics report of Moz gave our website a lot of 408 errors like the one below:
Title: 408 : Error
Meta Description: 408 Request Time-out
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
The report has diagnosed around 320 of these, even though we cannot reproduce the error (we cannot seem to find it ourselves). Two questions relating to this:
* Can you (the people of Moz) reproduce the errors manually?
* Is it possible that it is a bug in the Moz spider itself (too many spiders crawling at the same time)?
Moz Bar | arjen.koedam
-
Ajax #! URL support?
Hi Moz, My site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en Basically, since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?escaped_fragment to cached versions of the Ajax-generated content. For example, if the bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 In that case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine. However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see whether it is a #! and then spider the URL with ?escaped_fragment substituted in. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know. If there is other information, that would be great. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin Update: I am editing my question because it won't let me respond to my own question. It says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago. Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 And when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 The server does the rest; it is simply telling Roger to recognize the #! format and replace it with ?escaped_fragment. Though I obviously do not know how Roger is coded, it is a simple string replacement. Thanks.
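For illustration, the replacement described above can be sketched in a few lines (hypothetical code, not how Roger is actually implemented; note that Google's published convention names the parameter _escaped_fragment_, while the URLs quoted here use escaped_fragment):

```python
def to_escaped_fragment(url):
    # Map a #! URL to the query-parameter form the server expects,
    # mirroring the example URL pair quoted in the question.
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return base + sep + "escaped_fragment=" + fragment

print(to_escaped_fragment("http://www.discoverymap.com/#!/California/Map-of-Carmel/73"))
# http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73
```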
Moz Bar | oneactlife
-
Dupe content report showing in 'Errors' section when surely it should be in 'Warnings' section?
Why is the dupe content info showing in errors and not warnings? Since duplicate content can get your site penalised (as per Panda), or worse, banned, surely it should be in that section of the reports? Cheers
Moz Bar | Dan-Lawrence