Robots.txt help
-
Hi Moz Community,
Google is indexing some developer pages from a previous website where I currently work:
ddcblog.dev.examplewebsite.com/categories/sub-categories
I was wondering how to include these in a robots.txt file so they no longer appear in Google. Can I do it under our homepage GWT (Google Webmaster Tools) account, or do I have to set up a separate account for these URLs?
As always, your expertise is greatly appreciated,
-Reed
-
The robots.txt would allow the OP to go back into GWT and request removal of the dev site from the index. Password protecting a dev site is usually a pretty good idea, too.
-
Could you not just add an .htaccess password to the directory, keeping the dev site up but keeping bots out?
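For reference, basic HTTP authentication on the dev directory might look something like this, assuming an Apache server (the file path and realm name are placeholders):

```apache
# .htaccess at the root of the dev subdomain
AuthType Basic
AuthName "Development site"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

The .htpasswd file itself can be generated with the `htpasswd` utility that ships with Apache.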
-
You'll want a separate GWT account for that subdomain, and the robots.txt that excludes the subdomain needs to live on the subdomain itself.
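To make that concrete: the robots.txt served at the root of the dev subdomain could be the blanket disallow below, and the snippet (a sketch using Python's standard-library robots.txt parser, with the rules parsed locally rather than fetched) confirms it would block a compliant crawler from the URL in the question.

```python
# Sketch: check that a blanket-disallow robots.txt on the dev
# subdomain blocks compliant crawlers. The rules are parsed
# locally, so no network access is needed.
from urllib.robotparser import RobotFileParser

# Contents that would be served at
# http://ddcblog.dev.examplewebsite.com/robots.txt
rules = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

blocked = not parser.can_fetch(
    "Googlebot",
    "http://ddcblog.dev.examplewebsite.com/categories/sub-categories",
)
print(blocked)
```

Keep in mind robots.txt only stops future crawling; pages that are already indexed still need a removal request in GWT (or a noindex that Google can crawl) before they drop out.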
Related Questions
-
Help Dealing with Sustained Negative SEO Attack
Hello, I am hoping that someone is able to help with a problem that is destroying both my business and my health. We are an ecommerce site who have been trading since 2004 and have always had strong rankings in Google. Unfortunately, over the past couple of months these have significantly decreased (I would estimate around a 40% drop in organic traffic). We have not had a manual penalty and still have decent rankings for a lot of competitive keywords, so we think it is more likely to be an algorithmic penalty.
The most likely culprit is a huge-scale negative SEO attack that has been going on for around 18 months. Last September, we suffered a major drop in rankings as a result of the 302 hijack scheme, but after submitting a disavow file (of around 500 domains) on 12th November, we recovered on 26th November (although we now don't know whether this was due to the disavow file or the Phantom III update on 19th November). After suffering another major drop at the end of June, we submitted a disavow file of 1,100 domains (that is the scale of the problem!). This temporarily halted the slide; however, it is getting worse again. I have attached a file from Majestic which shows the increase in backlinks (we are not building these).
We are at a loss and desperately need help. We have contacted all the sites to try and get links removed, but new ones are appearing faster than we can contact them. We have also done a full technical audit and added around 50,000 words of unique, handwritten content, as well as continuing to work through all technical fixes and improvements. At the moment, the only thing we can think of doing is submitting a weekly disavow for all the new spammy domains that come up.
The questions I have are: Is there anything we can do to stop the attack? Is this increase in backlinks likely to be the culprit for the drops (both the big drops and the subsequent weekly 10% drop)? If so, would weekly disavows solve the problem? Is this likely to take months (years?) to recover from, or can it be done quicker? Can you give me any ray of light to help me sleep at night? 😞
Really appreciate any and all help. I wouldn't wish this on anyone. Thanks, Simon
Intermediate & Advanced SEO | simonukss
-
Need help with Robots.txt
I have an eCommerce site built with the MODX CMS, and I found lots of auto-generated duplicate page issues on it. Now I need to disallow some pages from that category. Here is what the actual product page URL looks like:
product_listing.php?cat=6857
And here is the auto-generated URL structure:
product_listing.php?cat=6857&cPath=dropship&size=19
Can anyone suggest how to disallow this specific category through robots.txt? I am not familiar with MODX or this kind of link structure. Your help will be appreciated. Thanks
-
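One possible sketch of a rule, assuming the auto-generated variants always carry the cPath parameter as in the sample above:

```
User-agent: *
Disallow: /product_listing.php?*cPath=
```

This leaves product_listing.php?cat=6857 crawlable while matching any URL whose query string contains cPath. Note that the * wildcard is a Googlebot extension rather than part of the original robots.txt standard, so other crawlers may ignore it.
-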
Robots.txt and redirected backlinks
Hey there, since a client's global website has a very complex structure which led to big duplicate content problems, we decided to disallow crawler access to the root and instead allow access to only a few relevant subdirectories. While indexing has improved since then, I was wondering if we might have cut off link juice. Several backlinks point to the disallowed root directory and are 301-redirected from there to an allowed directory, so I was wondering if this could cause any problems. Example: a backlink points to example.com (disallowed in robots.txt) and is redirected from there to example.com/uk/en (allowed in robots.txt). Would this cut off the link juice? Thanks a lot for your thoughts on this. Regards, Jochen
-
Robots.txt - Do I block Bots from crawling the non-www version if I use www.site.com ?
My site is set up at http://www.site.com, and I redirect the non-www version to www in the .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling it like this?
User-agent: *
Disallow: /
Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Or do you leave it blank?
-
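For what it's worth: if every non-www request really is 301-redirected, a crawler asking for the non-www robots.txt just gets redirected to the www version, so the www robots.txt is the only one ever served and there is nothing separate to blank out or block. A common .htaccess form of that redirect looks like this (example.com stands in for the real domain):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```
-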
Robots.txt & Duplicate Content
In reviewing my crawl results I have 5,666 pages of duplicate content. I believe this is because many of the indexed pages are just different ways to get to the same content. There is one primary culprit: a series of URLs related to CatalogSearch, for example:
http://www.careerbags.com/catalogsearch/result/index/?q=Mobile
I have 10,074 of those links indexed according to my Moz crawl. Of those, 5,349 are tagged as duplicate content; another 4,725 are not. Here are some additional sample links:
http://www.careerbags.com/catalogsearch/result/index/?dir=desc&order=relevance&p=2&q=Amy
http://www.careerbags.com/catalogsearch/result/index/?color=28&q=bellemonde
http://www.careerbags.com/catalogsearch/result/index/?cat=9&color=241&dir=asc&order=relevance&q=baggallini
All of these links are just different ways of searching through our product catalog. My question is: should we disallow catalogsearch via the robots file? Are these links doing more harm than good?
-
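A sketch of the disallow rule being asked about, assuming all search URLs live under the /catalogsearch/ path as in the samples above:

```
User-agent: *
Disallow: /catalogsearch/
```

An alternative to blocking is a rel="canonical" tag on the search-result pages pointing at the canonical category pages, which consolidates the duplicates instead of just hiding them from crawlers.
-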
Help understanding 301 domain redirect
Can anyone help me understand a specific part of the process of 301 redirecting a domain? Here is what I would like to know: when you 301 redirect a site, most if not all of the links follow to your new site. But how does this process happen? 1. When Google sees the new domain, does it simply apply the backlink profile of the old site to the new one? 2. Does it have to re-crawl all the links one by one and apply them to the new domain? 3. Or something else?
-
Please help with creation of slideshare
Just wondering how I would go about creating something like this http://www.slideshare.net/coolstuff/the-brand-gap?from_search=1
-
Will an RSS feed help new product get indexed? How to create one for product?
Hi, I've read that creating an RSS feed for one of our ecommerce sites will help the products get indexed faster. Currently it takes Google 4-5 days to index our new products, and we want to speed that up. Will an RSS feed of our new products help? And how do you create an RSS feed for products? Our blog gets indexed within minutes, but our main website takes 4 days. Help!
-
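On the second question, here is a rough sketch of generating such a feed: RSS 2.0 is plain XML, so it can be emitted with Python's standard library alone. The product data, URLs, and function name below are invented for illustration; a real feed would pull the newest products from the store's database and be served at a stable URL that can be submitted in GWT.

```python
# Sketch: build a minimal RSS 2.0 feed of new products using only
# the standard library. All product data here is made up.
from email.utils import formatdate
from xml.etree import ElementTree as ET

def build_product_feed(site_title, site_url, products):
    """Return an RSS 2.0 document (as a string) listing `products`."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = "Newest products"
    for product in products:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = product["name"]
        ET.SubElement(item, "link").text = product["url"]
        ET.SubElement(item, "guid").text = product["url"]
        # pubDate must be RFC 822 formatted; formatdate handles that.
        ET.SubElement(item, "pubDate").text = formatdate(product["added"])
    return ET.tostring(rss, encoding="unicode")

feed = build_product_feed(
    "Example Store",
    "https://www.example.com/",
    [{"name": "Blue Widget",
      "url": "https://www.example.com/blue-widget",
      "added": 1700000000}],  # Unix timestamp of when it was added
)
print(feed)
```

Whether the feed actually speeds up indexing is harder to promise; submitting it as a sitemap-style feed in GWT and linking it from the site header are the usual ways to make sure Google discovers it quickly.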