My beta site (beta.website.com) has been inadvertently indexed. Its cached pages are taking traffic away from our real website (website.com). Should I just "NO INDEX" the entire beta site and if so, what's the best way to do this? Are there any other precautions I should be taking? Please advise.
-
On your beta sites in future, I would recommend using Basic HTTP Authentication so that spiders can't even access them (this example is for Apache):
AuthUserFile /var/www/sites/passwdfile
AuthName "Beta Realm"
AuthType Basic
require valid-user
Then create the password file with:
htpasswd -m /var/www/sites/passwdfile username
If you do this as well, Google's Removal Tool will go "OK, it's not there, I should remove the page," because they usually check for the content on the page before processing a removal. If you don't remove the text, they MAY not process the removal request (even if the page has noindex, though I don't know whether that's the case).
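A quick way to sanity-check the lockdown once those directives are in place (a minimal sketch; the file path and username are just the examples from above, and it assumes htpasswd and curl are available on the server):
# -c creates the password file on first use; htpasswd prompts for the password
htpasswd -c -m /var/www/sites/passwdfile username
# an anonymous request should now come back with HTTP 401
curl -I http://beta.website.com/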
-
-
In Webmaster Tools, set the subdomain up as its own site and verify it
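For reference, one verification option is Google's meta tag, placed in the <head> of the subdomain's home page (the content token below is a placeholder - Webmaster Tools generates the real one for you):
<meta name="google-site-verification" content="YOUR-TOKEN-HERE" />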
-
Put this in the robots.txt for the subdomain (beta.website.com/robots.txt):
User-agent: *
Disallow: /
-
You can then submit this site for removal in Google Webmaster Tools:
- Click "optimization" and then "remove URLs"
- Click "create a new removal request"
- Type the URL "http://beta.website.com/" in there
- Click "continue"
- Click "submit request".
-
-
Agreed on all counts with Mark. In addition, if you haven't done this already, make sure you have canonical tags in place on your pages. Good luck!
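For reference, a canonical tag goes in the <head> of each beta page and points at the matching page on the live site (the URL below is just illustrative):
<link rel="canonical" href="http://www.website.com/some-page/" />
This tells search engines that the live URL is the preferred version wherever the same content exists in both places.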
-
You can add noindex to the whole subdomain, and then wait for the crawlers to remove it.
Or you can register the subdomain with webmaster tools, block the subdomain via the robots.txt with a general Disallow: / for the entire subdomain, and then use the URL removal tool in Webmaster Tools to remove the subdomain via robots.txt. Just a robots.txt block won't work - it won't remove the pages, it'll just prevent them from being crawled again.
In your case, I would probably go the route of the robots.txt / URL removal tool. This will work to remove the pages from Google. Once this has happened, I would put the noindex tag on the whole subdomain and remove the robots.txt block - crawlers can only see the noindex tag if robots.txt isn't blocking them - and this way all search engines will either not index the pages or drop them from their index.
Mark
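For reference, the noindex tag mentioned above goes in the <head> of every page on the subdomain (a minimal example):
<meta name="robots" content="noindex" />
If the subdomain also serves non-HTML files (PDFs, images and so on), the same directive can be sent as an HTTP header instead - on Apache, assuming mod_headers is enabled, something like:
Header set X-Robots-Tag "noindex"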