Effect duration of robots.txt file.
-
My web site also has a demo site in a /demo/ folder, and it is indexed in Google, but I no longer need it. The demo folder contains some HTML files, and I want all of them removed from Google, so yesterday I created a robots.txt file and uploaded it to the server:
User-agent: *
Disallow: /demo/
However, Webmaster Tools still shows the pages. How long will this take to be removed from Google?
And is there any alternative way of doing it?
-
Google Webmaster Tools also has a remove URL function where you can remove an entire directory, which may be of help to you.
-
And, if they are already indexed, you have to wait for them to be recrawled and then fall out of the index, so it's not immediate. Sometimes it takes days, sometimes weeks.
-
Hello,
The robots.txt directive will only prevent Google from crawling the pages. In order to remove the pages from the index, you need to add a "noindex" robots meta tag to the pages you want removed:
<meta name="robots" content="noindex">
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710
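As a quick sanity check, Python's standard-library robots.txt parser can confirm what the Disallow rule from the question actually does (the URLs here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# The rule from the question, parsed as a compliant crawler would read it.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /demo/",
])

# Everything under /demo/ is now off-limits to compliant crawlers...
print(rp.can_fetch("*", "http://example.com/demo/old-page.html"))  # False
# ...while the rest of the site remains crawlable.
print(rp.can_fetch("*", "http://example.com/index.html"))          # True

# Caveat: a crawler blocked by robots.txt never fetches the page, so it
# cannot see a meta noindex tag on it. For the noindex to be honoured,
# the page must stay crawlable until it drops out of the index.
```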
Related Questions
-
Disavow files and .net .com .org etc.
Intermediate & Advanced SEO | HLTalk
When looking at my backlinks, if I see something like this:
www.domainPizza.net
www.domainPizza.com
sub.domainPizza.com
www.domainpizza.org
domainPizza.net
https://domainpizza.com
https://www.domainpizza.net
What is the actual list of disavows that I put into the file if I want to disavow this domain? I am seeing so many variations of the same domain. Thank you.
-
Would a mass data update have a negative effect on SEO?
Intermediate & Advanced SEO | frankandmaven
We have a large eCommerce site with the ability to do an export, change data, and import new data in mass. Over the 15 years that this site has been growing, it has accumulated several inconsistencies in product titles, descriptions, title tags, etc. The question is: if we were to update thousands of product titles (the <title>'s on those pages) and some of the descriptions, would it have a negative SEO impact because of the groundbreaking number of products affected? Or would it only be for the better if they were all technically improvements (both in SEO and UX)? Thanks!
-
Question about Syntax in Robots.txt
Intermediate & Advanced SEO | DRSearchEngOpt
So if I want to block any URL from being indexed that contains a particular parameter, what is the best way to put this in the robots.txt file? Currently I have:
Disallow: /attachment_id
where "attachment_id" is the parameter. The problem is I still see these URLs indexed, and this has been in the robots.txt now for over a month. I am wondering if I should just do "Disallow: attachment_id" or "Disallow: attachment_id=", but figured I would ask you guys first. Thanks!
-
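One way to see why the current rule misses those URLs: the original robots.txt standard (which Python's standard-library parser implements) does plain prefix matching on the path, so `Disallow: /attachment_id` only blocks URLs whose path literally begins with /attachment_id. A sketch with hypothetical URLs:

```python
from urllib.robotparser import RobotFileParser

# The rule from the question, read with plain prefix matching
# (original robots.txt behaviour, no wildcard support).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /attachment_id",
])

# Blocked: the path literally begins with /attachment_id.
print(rp.can_fetch("*", "http://example.com/attachment_id/1"))
# NOT blocked: attachment_id only appears as a query parameter,
# so the prefix rule never matches these indexed URLs.
print(rp.can_fetch("*", "http://example.com/post?attachment_id=1"))
```

Googlebot additionally supports `*` wildcards, so a Google-specific form such as `Disallow: /*attachment_id=` would cover the parameter wherever it appears. Also worth noting: Disallow prevents crawling, not indexing, so URLs indexed before the rule was added can linger until removed via noindex or the URL removal tool.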
Google Indexing Duplicate URLs: Ignoring Robots & Canonical Tags
Intermediate & Advanced SEO | JBGlobalSEO
Hi Moz Community, we have the following robots.txt rule that should prevent URLs with tracking parameters from being indexed:
Disallow: /*?
We have noticed Google has started indexing pages that are using tracking parameters. Example below:
http://www.oakfurnitureland.co.uk/furniture/original-rustic-solid-oak-4-drawer-storage-coffee-table/1149.html
http://www.oakfurnitureland.co.uk/furniture/original-rustic-solid-oak-4-drawer-storage-coffee-table/1149.html?ec=affee77a60fe4867
These pages are identified as duplicate content yet have the correct canonical tags:
https://www.google.co.uk/search?num=100&site=&source=hp&q=site%3Ahttp%3A%2F%2Fwww.oakfurnitureland.co.uk%2Ffurniture%2Foriginal-rustic-solid-oak-4-drawer-storage-coffee-table%2F1149.html&oq=site%3Ahttp%3A%2F%2Fwww.oakfurnitureland.co.uk%2Ffurniture%2Foriginal-rustic-solid-oak-4-drawer-storage-coffee-table%2F1149.html&gs_l=hp.3..0i10j0l9.4201.5461.0.5879.8.8.0.0.0.0.82.376.7.7.0....0...1c.1.58.hp..3.5.268.0.JTW91YEkjh4
With various affiliate feeds available for our site, we effectively have duplicate versions of every page due to the tracking query that Google seems to be willing to index, ignoring both robots rules & canonical tags. Can anyone shed any light onto the situation?
-
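Since the robots rule and the canonicals are being ignored here, one server-side option is to 301 the tracking-parameter URLs to their clean form (after capturing the tracking value). A minimal sketch of the URL normalisation, assuming every query parameter on these product URLs is tracking-only:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_tracking(url):
    """Map every tracking variant of a page onto one canonical URL
    by dropping the query string entirely."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(strip_tracking(
    "http://www.oakfurnitureland.co.uk/furniture/"
    "original-rustic-solid-oak-4-drawer-storage-coffee-table/"
    "1149.html?ec=affee77a60fe4867"
))
# → http://www.oakfurnitureland.co.uk/furniture/original-rustic-solid-oak-4-drawer-storage-coffee-table/1149.html
```

A real implementation would whitelist any query parameters the site genuinely needs (pagination, search) rather than dropping everything.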
Robots.txt Syntax
Intermediate & Advanced SEO | DRSearchEngOpt
I have been having a hard time finding any decent information regarding the robots.txt syntax that has been written in the last few years, and I just want to verify some things as a review for myself. I have many occasions where I need to block particular directories in the URL, parameters, and parameter values. I just wanted to make sure that I am doing this in the most efficient ways possible and thought you guys could help.
So let's say I want to block a particular directory called "this", and this would be an example URL:
www.domain.com/folder1/folder2/this/file.html
or
www.domain.com/folder1/this/folder2/file.html
In order for me to block any URL that contains this folder anywhere in the URL, I would use:
User-agent: *
Disallow: /this/
Now let's say I have a parameter "that" I want to block, and sometimes it is the first parameter and sometimes it isn't when it shows up in the URL. Would it look like this?
User-agent: *
Disallow: ?that=
Disallow: &that=
What about if there is only one value I want to block for "that", and the value is "NotThisGuy":
User-agent: *
Disallow: ?that=NotThisGuy
Disallow: &that=NotThisGuy
My big questions here are: what are the most efficient ways to block a particular parameter and to block a particular parameter value? Is there a more efficient way to deal with ? and & for when the parameter and value are either first or later? Secondly, is there a list somewhere that will tell me all of the syntax and meaning that can be used for a robots.txt file? Thanks!
-
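Google's crawler (unlike the original robots.txt standard) treats `*` in a rule as "any run of characters" and a trailing `$` as an end anchor; everything else is a prefix match against the path. A small sketch of that matching logic, with hypothetical paths, shows why the rules above behave the way they do: `Disallow: /this/` only matches paths that *begin* with /this/, and parameter rules must start with a `/`:

```python
import re

def robots_rule_matches(rule, path):
    """Google-style robots.txt matching: '*' matches any run of
    characters, a trailing '$' anchors the end, and the rule is
    otherwise a prefix match against the path (query included)."""
    anchored = rule.endswith("$")
    if anchored:
        rule = rule[:-1]
    pattern = ".*".join(re.escape(part) for part in rule.split("*"))
    if anchored:
        pattern += "$"
    return re.match(pattern, path) is not None

print(robots_rule_matches("/this/", "/this/file.html"))            # True
print(robots_rule_matches("/this/", "/folder1/this/file.html"))    # False: prefix only
print(robots_rule_matches("/*/this/", "/folder1/this/file.html"))  # True: needs the wildcard
# Parameters, whether first or later in the query string:
print(robots_rule_matches("/*?that=", "/page?that=NotThisGuy"))    # True
print(robots_rule_matches("/*&that=", "/page?x=1&that=2"))         # True
```

So to block "that" wherever it appears, two Google-style rules such as `Disallow: /*?that=` and `Disallow: /*&that=` (or simply `Disallow: /*that=`) cover both positions, and appending the value (`Disallow: /*?that=NotThisGuy`) narrows the rule to that one value.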
Google showing high volume of URLs blocked by robots.txt in index - should we be concerned?
Intermediate & Advanced SEO | nicole.healthline
If we search site:domain.com vs site:www.domain.com, we see 130,000 vs 15,000 results. When reviewing the site:domain.com results, we're finding that the majority of the URLs showing are blocked by robots.txt. They are subdomains that we use as production environments (and contain similar content as the rest of our site). We also find the message "In order to show you the most relevant results, we have omitted some entries very similar to the 541 already displayed." SEER Interactive mentions that this is one way to gauge a Panda penalty: http://www.seerinteractive.com/blog/100-panda-recovery-what-we-learned-to-identify-issues-get-your-traffic-back
We were hit by Panda some time back. Is this an issue we should address? Should we unblock the subdomains and add noindex, follow?
-
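A common pattern for exactly this situation is to unblock the subdomains in robots.txt and serve an X-Robots-Tag header (or meta noindex) on every non-production host, so Google can recrawl the pages, see the directive, and drop them from the index; robots.txt alone blocks crawling but leaves already-indexed URLs in place. A minimal sketch of the host check, with hypothetical hostnames:

```python
# Hypothetical production hostnames; substitute the real ones.
PRODUCTION_HOSTS = {"www.domain.com", "domain.com"}

def x_robots_header(host):
    """Return the X-Robots-Tag value to send for a request host.

    Non-production subdomains get 'noindex, follow': still crawlable,
    so Google can see the directive, but excluded from the index.
    """
    if host in PRODUCTION_HOSTS:
        return None  # production pages stay indexable
    return "noindex, follow"

print(x_robots_header("staging.domain.com"))  # noindex, follow
print(x_robots_header("www.domain.com"))      # None
```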
What will the effect of normalising the case of my URLs be?
Intermediate & Advanced SEO | SeeTickets
Hi all, I have a web site with a selection of pages with excellent rankings, mostly in the top 3 for the keywords we want to rank for. Currently, the URLs are mostly presented mixed case, like this:
www.mydomain.com/Type/ITEM-IDENTIFIER/
However, we have problems of different cases being used in different parts of our application, and it's obviously not that attractive the way it is. What we are proposing to do is deploy a change to our web site that lowercases all URLs in internal links, presents the URLs in lowercase in our sitemap.xml, and provides any links to partners from this point on in lowercase format. We are also proposing to 301 redirect any non-lowercase URLs to the lowercase version. These pages already have a canonical link tag due to us hosting different versions of these pages on multiple domains, for skinning purposes. The link in the canonical link tag will also be changed to be lowercase.
What I am concerned about is: URLs of the case above have been in the rankings for a few years now, and if all of a sudden our links are all lowercase, will they drop off the rankings? Or will the above measures mean that the PageRank is transferred to the lowercase version of the URL? Thanks in advance, James
-
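The 301 plan described above is the standard mechanism for carrying the existing equity across. A minimal, framework-agnostic sketch of the redirect decision, using hypothetical paths:

```python
def lowercase_redirect(path):
    """Decide how to serve a request path: a single 301 to the
    lowercase form when it contains uppercase characters, otherwise
    serve normally. The permanent redirect signals search engines to
    pass the old URL's ranking signals to the lowercase version."""
    lower = path.lower()
    if lower != path:
        return 301, lower
    return 200, path

print(lowercase_redirect("/Type/ITEM-IDENTIFIER/"))  # (301, '/type/item-identifier/')
print(lowercase_redirect("/type/item-identifier/"))  # (200, '/type/item-identifier/')
```

The key details are that the redirect is a single hop (no chains) and that the sitemap, internal links, and canonical tag all agree on the lowercase form.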
Are there any benefits to having dashes in file names?
Intermediate & Advanced SEO | RyanWhitney15
Through searching, I can find lots of discussion regarding "dash vs underscore", but am having trouble with an even simpler question: is there any SEO difference between using http://www.broadway.com/shows/milliondollarquartet.php vs. http://www.broadway.com/shows/million-dollar-quartet.php