Dealing with 404 pages
-
I built a blog on my root domain while I worked on another part of the site at .....co.uk/alpha. I was careful not to have any links point to /alpha, but it seems Google found and indexed it anyway. The problem is that part of /alpha was a copy of the blog, so we now have a lot of duplicate content. The /alpha part is now ready to be moved to the root domain, and the initial plan was to then delete /alpha. But now that it's indexed, I'm worried I'll end up with all these 404 pages, and I'm not sure what to do. I know I could set up a 301 redirect from each of those pages to the corresponding new ones in case a link comes in, but I need to delete those pages because the server is already very slow. Or does a 301 redirect mean that I don't need those pages anymore? Will those pages still get indexed by Google as separate pages? Please assist.
-
After a 301 redirect, can I delete the pages and the databases/folders associated with them?
Yes. Think of a 301 redirect like mail forwarding. If you live at 1000 Main Street and then move to a new address, you leave a forwarding order (the 301 redirect) with the post office. Once that is done, you can bulldoze the house (i.e., delete the webpage/database) and the mail will still be forwarded properly.
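The forwarding analogy can be sketched in a few lines of Python (a local test server with hypothetical paths, purely for illustration): the handler below no longer serves any /alpha content at all, yet a request to an old /alpha URL still lands on the new page because the client follows the 301.

```python
# Minimal sketch of "bulldozing the house but keeping mail forwarding":
# the server answers /alpha/* only with a 301 pointing at the new URL.
import http.server
import threading
import urllib.request

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/alpha/"):
            # The "forwarding order": old address -> new address.
            self.send_response(301)
            self.send_header("Location", self.path.replace("/alpha", "", 1))
            self.end_headers()
        else:
            # The content at its new home on the root domain.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"blog post at its new URL")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# urllib follows the 301 automatically, like the post office forwarding mail.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/alpha/post-1") as resp:
    final_url = resp.geturl()
    body = resp.read()

server.shutdown()
print(final_url)  # the /alpha prefix is gone from the final URL
print(body)
```

Browsers and Googlebot behave the same way here: neither ever needs the old page's content to exist, only the redirect rule.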
How does one create a 301 redirect?
The method of creating a 301 redirect varies based on your server setup. If you have a LAMP setup with cPanel, there is a Redirect tool. Otherwise, I would suggest contacting your host and asking how to create a redirect for your particular setup.
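On a typical LAMP setup you can also create the redirect yourself in an `.htaccess` file. This is a sketch assuming Apache with mod_alias enabled and `.htaccess` overrides allowed; the paths are hypothetical:

```apache
# .htaccess in the document root (assumes Apache with mod_alias enabled).
# Forward one old blog URL to its new location with a permanent (301) redirect:
Redirect 301 /alpha/blog/my-post /blog/my-post
```

The cPanel Redirect tool writes rules of essentially this form for you.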
-
Ryan,
Two things.
First - after a 301 redirect can I delete the pages and the databases/folders associated with them?
Second - How does one create a 301 redirect?
-
Hi Ryan,
I agree with you, but I wanted to offer an alternate solution to the problem. I know it is the harder option and not the obvious choice.
But as I said, only if he gets no traffic from those pages should he simply delete them from the index. Also, he said in the question that the /alpha folder was indexed by mistake, so the line from your comment, "That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake," would seem to apply here, and it sits oddly next to "The indexed content are pages you want in the index but simply have the wrong URL," since a wrong URL means a different page.
Anyway, I will definitely go with your solution, but sometimes having two options helps you choose the better one.
Thanks
-
Semil, your answer is a working solution, but I would like to share why it is not a best practice.
Once the /alpha pages were indexed you could have traffic on them. You cannot possibly know who has linked to those pages, e-mailed links, bookmarked them, etc. By providing a simple 301 the change will be completely seamless to users. All their links and bookmarks will still work. Additionally if any website did link to your /alpha pages, you will retain the link.
The site will also benefit because it is already indexed by Google. You will not have to wait for Google to index your pages. This means more traffic for the site.
The 301 is very quick and easy to implement. If you are simply moving from the /alpha directory to your main site then a single 301 redirect can cover your entire site.
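A single directory-wide rule of that kind might look like this (a sketch assuming Apache with mod_alias; adjust the paths to your setup):

```apache
# Forward every URL under /alpha to the same path at the site root,
# e.g. /alpha/blog/my-post -> /blog/my-post (assumes Apache with mod_alias).
RedirectMatch 301 ^/alpha/(.*)$ /$1
```

One line like this covers every indexed /alpha URL, so there is no need to enumerate pages individually.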
I will offer a simple best practice of SEO (my belief which not everyone agrees with) which I do my best to follow. NEVER EVER EVER use the robots.txt file unless you have exhausted every other possibility. The robots.txt file is an inferior solution that many people latch on to because it is quick and easy. In your case, there is no need to adjust your robots.txt file at all. The original poster stated an intention to delete the /alpha pages. Those pages will no longer exist. Why block URLs which don't exist? It doesn't offer any benefit.
Also, it makes no sense to use the Google removal tool. That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake. The indexed content are pages you want in the index but simply have the wrong URL. The 301 redirect will allow your pages to remain in the index and for the URL to be properly updated. In order for the 301 to work correctly, you would need to NOT block the /alpha pages with robots.txt.
The solution you shared would work, but it is not as friendly all around.
-
Whoops! Thanks for correcting my answer...
-
The reason for not using a 301 is that /alpha is not a page or folder you created for your users, so I would not put a 301 in place. It got indexed, that's all. Are you getting any traffic from it?
If not, then why redirect? Remove the pages and ask the search engine to remove them from its index. That is all.
-
Thanks Dan,
Is there a way of blocking an entire folder, or do I have to add each link?
-
How do I ask Google to remove it in Webmaster Tools? And how do I keep everything in the /alpha folder out of the index, or do I have to write out each link?
Why do you think my case isn't a good fit for 301 redirects?
-
You have to be very careful from the start, but Google has already indexed your /alpha, so don't worry about that now.
Using a 301 is something I would not do in your case. Ask Google to remove those URLs from the index in Google Webmaster Tools, and use robots.txt to prevent /alpha from being indexed.
Thanks,
-
You can perform the 301 redirect and you will not need those pages anymore. Using the redirect would be a superior SEO solution over using the robots.txt file. Since the content is already indexed, it will stay indexed and Google will update each page over the next 30 days as it crawls your site.
If you block /alpha with robots.txt, Google will still retain the pages in its index, users will experience 404s, and your new pages won't start to be properly indexed until Google drops the existing pages, which takes a while. The redirect is better for everyone.
-
Hi
If you do not want them in the index, you should block them in your robots.txt file like so:
User-agent: *
Allow: /
Disallow: /alpha
-Dan
PS - Some documentation on robots.txt
EDIT: I left my answer, but don't listen to it. Do what Ryan says
-