Best way to fix a whole bunch of 500 server errors that Google has indexed?
-
I got a notification from Google Webmaster tools saying that they've found a whole bunch of server errors. It looks like it is because an earlier version of the site I'm doing some work for had those URLs, but the new site does not.
In any case, there are now thousands of these pages in their index that error out.
If I wanted to simply remove them all from the index, which is my best option:
-
1. Disallow all 1,000 or so pages in the robots.txt?
2. Put a meta noindex in the headers of each of those pages?
3. Rel canonical to a relevant page?
4. Redirect to a relevant page?
5. Wait for Google to just figure it out and remove them naturally?
6. Submit each URL to the GWT removal tool?
7. Something else?
Thanks a lot for the help...
-
-
If you already fixed the error, then just wait for Google to figure things out on their end. Having those errors in GWT isn't going to hurt you.
-
Shouldn't those pages be returning 404s instead of 500s in the first place?
If the old URLs are still showing in the index, I'd reckon you'd want those 301'd to relevant pages anyway, or at worst a proper 404 page popping up rather than a 500.
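If you do go the redirect route, the 301s can live in server config rather than application code. A hedged sketch for Apache (the paths and domain here are hypothetical):

```
# Hypothetical: permanently redirect a retired URL to its replacement
Redirect 301 /old-page/ https://example.com/new-page/
```

One rule per retired URL, or a RewriteRule with a pattern if the old URLs share a predictable structure.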
-
Options 4 and 5 (redirect or wait), with a bit of 7.
What you need to do is return the correct response code (I'm guessing that's either 404 or 410), then let Google recrawl those URLs. That way Google knows those URLs are no longer valid. However, if those URLs have links pointing at them or still get traffic, then you might want to 301 them instead.
Let's look at the other options though, as it's interesting:
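As a rough sketch of what "return the correct response code" can look like at the application layer (the paths and the 301 mapping below are hypothetical, not from the question), a minimal WSGI app that serves 410 for removed pages and 301 for pages with a replacement:

```python
# Hypothetical path lists -- substitute the site's real retired URLs.
GONE = {"/old-widget", "/old-press-release"}   # removed, no replacement: 410
MOVED = {"/old-contact": "/contact"}           # has a replacement: 301

def app(environ, start_response):
    """Minimal WSGI app: correct status codes for retired URLs."""
    path = environ.get("PATH_INFO", "/")
    if path in MOVED:
        # Permanent redirect to the replacement page.
        start_response("301 Moved Permanently", [("Location", MOVED[path])])
        return [b""]
    if path in GONE:
        # 410 tells crawlers the page is intentionally gone.
        start_response("410 Gone", [("Content-Type", "text/plain")])
        return [b"This page has been removed."]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]
```

The point is simply that each retired URL answers with a definitive status (301, 404, or 410) instead of a 500, so Google can drop or update it on the next crawl.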
-
Robots.txt disallow: this will stop Google re-visiting those URLs, so it will always think they are there.
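For reference, a disallow rule would look something like this (the directory is hypothetical), but keep in mind it only blocks crawling, not indexing:

```
User-agent: *
Disallow: /old-section/
```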
-
Meta noindex: this confirms the pages exist but tells Google not to return them in results. Again, that isn't the correct signal here, and Google will keep coming back to re-check those URLs.
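For completeness, the tag in question goes in each page's `<head>`:

```html
<meta name="robots" content="noindex">
```

Note that Google has to be able to crawl the page to see this tag, which is another reason it doesn't combine with a robots.txt disallow.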
-
Rel canonical: unless the content is very close, this is unlikely to work. It's also the wrong signal, because presumably the old and new pages are not the same thing.
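For reference, the tag would look like this in each old page's `<head>` (the URL is a placeholder):

```html
<link rel="canonical" href="https://example.com/relevant-page/">
```

Since the old URLs are erroring with a 500, there's no page to carry the tag anyway, which is another reason this option doesn't fit.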
-
URL removal tool: if the URLs share a common (and exclusive) directory, it may be an option to submit that directory. Submitting lots of URLs individually probably isn't a good idea, though; Matt Cutts has suggested as much in the past.
-