404 Errors
-
Hello Team,
I noticed that my site has thousands of 404 errors. I'm not sure how this happened; maybe it was when I updated our CMS.
My question is: should I worry about them? Should I delete them or just leave them alone?
Thank you for your feedback!
-
No worries,
Sorry - my mistake - I didn't mean Open Site Explorer. You can find the 404 report in your campaign settings in your SEOmoz dashboard. Click Errors, then Download as CSV (top right). You can sort the CSV in Excel to group all of your 404s and easily find the referring page.
If for some reason you can't access the Errors page in your dashboard (still waiting for your report to finish, etc.), you can do pretty much the same thing with a piece of software called Xenu Link Sleuth.
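If the exported CSV is large, grouping it by hand in Excel gets tedious. Here is a small sketch of the same grouping in Python - the column names ("URL", "Referrer", "Status Code") are assumptions, so match them to the headers your export actually uses:

```python
import csv
from collections import defaultdict

# Group 404 URLs by referring page so the pages with the most broken
# links surface first. The column names ("URL", "Referrer", "Status Code")
# are assumptions -- adjust them to the headers in your exported CSV.
def group_404s(rows):
    by_referrer = defaultdict(list)
    for row in rows:
        if row.get("Status Code") == "404":
            by_referrer[row.get("Referrer", "")].append(row["URL"])
    # Referrers with the most broken links come first
    return sorted(by_referrer.items(), key=lambda kv: len(kv[1]), reverse=True)

def report(csv_path):
    # csv_path is whatever filename you saved the error export under
    with open(csv_path, newline="") as f:
        for referrer, urls in group_404s(csv.DictReader(f)):
            print(f"{len(urls):4d} broken link(s) on {referrer}")
```

Fixing the top one or two referring pages in that output usually clears a large share of the errors in one go.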
Cheers,
-
http://www.opensiteexplorer.org/
Also, ask your hosting administrator to help; they can often fix 404s and set up 301 redirects server-side.
-
Thank you very much!!!
How do I find the 404s in Open Site Explorer? Would that be on the Top Pages tab?
I'll make sure to send out some love from one of my sites to your confetti
-
Hi there,
Yes - you should definitely worry about them. Get them fixed or add a 301 redirect to a relevant page.
In my experience, lots of 404s often stem from a small number of places, so it shouldn't be a case of having to manually fix every single link. If you fix the link in one place, you'll probably fix it in hundreds of others. Download a CSV of your broken links from Open Site Explorer; you'll easily be able to see the referring page and fix the links.
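For reference, a minimal sketch of what such a 301 redirect might look like in an Apache .htaccess file - the paths here are placeholders, not URLs from your site:

```apache
# Send a single moved page to its replacement with a permanent redirect
Redirect 301 /old-page.html /new-page/

# Or, with mod_rewrite, map a whole retired directory to its successor
RewriteEngine On
RewriteRule ^old-section/(.*)$ /new-section/$1 [R=301,L]
```

If your host runs Nginx or IIS instead of Apache the syntax differs, but the idea is the same: a permanent server-side redirect from the dead URL to a live, relevant page.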
Hope that helps.
Related Questions
-
403 Errors Issue
Hi, all! I've been working with a WordPress site that I inherited that gets little to no organic traffic, despite being content-rich, optimized, etc. I know there's something wrong on the backend but can't find a satisfactory culprit. When I emulate Googlebot, most pages give me a 403 error. Also, Google will not index many URLs, which makes sense and is a massive headache. All advice appreciated! The site is https://www.diamondit.pro/ - it is specific to WP Engine, using GES (Global Edge Security) and WPWAF.
Technical SEO | SimpleSearch
-
Hi, can anyone please help? I used this code, but now I'm getting 404 errors:

#index redirect
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index.html\ HTTP/
RewriteRule ^index.html$ http://domain.com/ [R=301,L]
RewriteCond %{THE_REQUEST} .html
RewriteRule ^(.*).html$ /$1 [R=301,L]

The homepage and the service.html page are working, but the rest of the pages, like about.html, servicearea.html, and contact.html, are not working and show a 404 error. Also, when you type these URLs:

generalapplianceserice.ca/about.html
generalapplianceserice.ca/contact.html
generalapplianceserice.ca/servicearea.html

the .html extension is automatically removed and a 404 error is shown, even though the page names in the root directory are the same. These pages work: generalapplianceservice.ca and generalapplianceservice.ca/services. Why? I also removed this code again, but the same issue remains.
Technical SEO | roynguyen
-
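A likely cause for the question above: the second rule 301-redirects /page.html to /page, but nothing maps the extensionless URL back to the real file on disk, so Apache looks for a file literally named "about" and returns 404. A hedged sketch of the usual fix - the rules below are an assumption based on the snippet quoted in the question - adds an internal rewrite that only fires when the .html file actually exists:

```apache
RewriteEngine On

# Visible 301: /page.html -> /page
RewriteCond %{THE_REQUEST} \.html [NC]
RewriteRule ^(.*)\.html$ /$1 [R=301,L]

# Internal rewrite (no redirect): serve /page from page.html on disk.
# This is the step the quoted snippet is missing, which is why the
# clean URLs 404.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]
```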
404 or rel="canonical" for empty search results?
We have search on our site, using the URL, so we might have: example.com/location-1/service-1, or example.com/location-2/service-2. Since we're a directory we want these pages to rank. Sometimes, there are no search results for a particular location/service combo, and when that happens we show an advanced search form that lets the user choose another location, or expand the search area, or otherwise help themselves. However, that search form still appears at the URL example.com/location/service - so there are several location/service combos on our website that show that particular form, leading to duplicate content issues. We may have search results to display on these pages in the future, so we want to keep them around, and would like Google to look at them and even index them if that happens, so what's the best option here? Should we rel="canonical" the page to the example.com/search (where the search form usually resides)? Should we serve the search form page with an HTTP 404 header? Something else? I look forward to the discussion.
Technical SEO | 4RS_John
-
Best strategy to handle over 100,000 404 errors.
I was recently given a site that has over one hundred thousand 404 error codes listed in Google Webmaster Tools. It is really odd because, according to Google Webmaster Tools, the pages that link to these 404 pages are also pages that no longer exist (they are 404 pages themselves). These errors were the result of a site migration. I'd appreciate any input on how one might go about auditing and repairing large numbers of 404 errors. Thank you.
Technical SEO | SEO_Promenade
-
Database driven content producing false duplicate content errors
How do I stop the Moz crawler from creating false duplicate content errors? I have yet to submit my website to the Google crawler because I am waiting to fix all my site optimization issues. Example: contactus.aspx?propid=200, contactus.aspx?propid=201... these are the same page but with some old URL parameters stuck on them. How do I get Moz and Google not to consider these duplicates? I have looked at http://moz.com/learn/seo/duplicate-content with respect to rel="canonical" and I think I am just confused. Nick
Technical SEO | nickcargill
-
Increase in Not Found Errors
Hello all, looking for input on an issue I am having. We used to have a website, www.gazaro.com. It was a price comparison engine for consumers. A shift in the focus of the business resulted in www.360pi.com - a price intelligence tool for retailers. The two websites have similar themes, so I thought it would be valuable to pass SEO juice from the old domain to the new domain.

Back in August, I noticed that Gazaro was redirected to 360pi with a meta refresh. I know a 301 redirect is preferable to a meta refresh, so we switched to a 301 redirect. Since that happened, there has been a spike in 404 errors in Webmaster Tools. If you hover over the URL, it is actually www.360pi.com/deal/amazon, etc. It is looking for Gazaro URLs on the 360pi domain - which don't exist.

I think this is hurting our homepage ranking. Our homepage no longer ranks for "price intelligence" when it used to be in position 4 or 5. As it turns out, we are ranking #1 for "price intelligence", but with our product page.

I'm wondering why the 404s are happening. Is something set up incorrectly? Or should I have them switch back to a meta refresh? Thoughts? Thanks for your help.
Technical SEO | AmandaHorne
-
4xx errors - but no broken links found by Xenu
In my SEOmoz crawl report I get multiple 4xx errors reported, and they are all on the same type of link: www.zylom.com/nl/help/contact/9/, differing only in the number at the end and the language. But if I look in the source code, it says:

<a class="bigbuttonblue" style="float:right; margin-left:10px;" href="/nl/help/contact/9/?sid=9&e=login" onfocus="blur()" title="contact">contact</a>

I already tested the helpful little tool Xenu, but it also doesn't find any broken links for the URLs in the 4xx error report. Could somebody suggest why these 4xx errors keep coming? Could it be that the SEOmoz crawlers break the '?sid=9&e=login' part off the URL? Because if you want to follow the link, you first get a pop-up asking you to log in. Thanks in advance for your answers.
Technical SEO | Letty
-
500 Server Error on RSS Feed
Hi there, I am getting multiple 500 errors on my RSS feed. Here is the error:

Title: 500 : Error
Meta Description:
Traceback (most recent call last):
  File "build/bdist.linux-x86_64/egg/downpour/__init__.py", line 391, in _error
    failure.raiseException()
  File "/usr/local/lib/python2.7/site-packages/twisted/python/failure.py", line 370, in raiseException
    raise self.type, self.value, self.tb
Error: 500 Internal Server Error
Meta Robots: Not present/empty
Meta Refresh: Not present/empty

Any ideas as to why this is happening? They are valid feeds.
Technical SEO | mistat2000