Wildcarding Robots.txt for a Particular Word in the URL
-
Hey All,
So I know this isn't a standard robots.txt question. I'm aware of how to block or wildcard certain folders, but I'm wondering whether it's possible to block all URLs that contain a certain word.
We have a client that was hacked a year ago, and now they want us to help remove some of the pages that were autogenerated with the word "viagra" in them. I saw this article and tried implementing it: https://builtvisible.com/wildcards-in-robots-txt/. It seems I've been able to remove some of the URLs (although I can't confirm that until I do a full pull of the SERPs on the domain). However, when I test certain URLs inside WMT, it still says they are allowed, which makes me think it's not working fully, or not working at all.
In this case, these are the lines I've added to the robots.txt:
Disallow: /*&viagra
Disallow: /*&Viagra
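For anyone who wants to sanity-check patterns like these, here is a minimal Python sketch of Google-style wildcard matching (the paths are made up). Note that both rules above only match where "viagra" directly follows an ampersand, and matching is case-sensitive, which may be why WMT still reports some URLs as allowed:

import re

def is_blocked(path, patterns):
    # Google-style matching: '*' matches any run of characters,
    # and patterns are anchored at the start of the path.
    for pattern in patterns:
        regex = re.escape(pattern).replace(r"\*", ".*")
        if re.match(regex, path):
            return True
    return False

rules = ["/*&viagra", "/*&Viagra"]

print(is_blocked("/page?id=1&viagra-pills", rules))   # True: "&viagra" is present
print(is_blocked("/viagra-pills", rules))             # False: no "&" before the word
print(is_blocked("/page?id=1&VIAGRA-pills", rules))   # False: case does not match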
I know I have the option of individually requesting URLs to be removed from the index, but I want to see if anybody has ever had success wildcarding URLs with a certain word in their robots.txt. The individual URL route could be very tedious.
Thanks!
Jon
-
Hey Paul,
Great answer; for some reason it totally slipped my mind that robots.txt is a crawling directive and not an indexing one. Yes, the pages return a 404 in the headers. I've grabbed a copy of the complete SERPs and will now manually request removal of the remaining URLs.
Thanks!
Jon
-
Thanks for the endorsement, Christy! Funny, I only just now saw Rand's recent WBF related to this topic, but I'm pleased to see my answer lines up exactly with his info.
P.
-
You need to be aware, Jonathan, that there is absolutely nothing about a robots.txt disallow that will help remove a URL from the search engine indexes. Robots.txt is a crawling directive, NOT an indexing directive. In fact, in most cases, blocking URLs in robots.txt will actually cause them to remain in the index even longer.
I'm assuming you have cleaned up the site so the actual spam URLs no longer resolve. Those URLs should now result in a 404 error page. You must confirm they are actually returning the correct 404 code in the headers. As long as this is the case, it is a matter of waiting while the search engines crawl the spam URLs often enough to recognise they are really gone and remove them from the index. The problem with adding them to the robots.txt is that it actually tells the search engines NOT to crawl them, so they are unlikely to discover that the pages lead to 404s, and hence may remain in the index even longer.
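As a quick way to confirm those headers in bulk, here is a minimal Python sketch using only the standard library; the URLs are hypothetical stand-ins for a real SERP export:

import urllib.error
import urllib.request

# Hypothetical spam URLs pulled from a SERP export.
spam_urls = [
    "http://www.example.com/page?id=1&viagra-pills",
    "http://www.example.com/buy-viagra-now",
]

for url in spam_urls:
    req = urllib.request.Request(url, method="HEAD")
    try:
        resp = urllib.request.urlopen(req)
        print(url, "->", resp.status)   # anything but 404/410 still resolves
    except urllib.error.HTTPError as e:
        print(url, "->", e.code)        # a 404 here is the desired result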
Unfortunately you can't use a noindex tag on the offending pages, because the pages should no longer exist on the site. I don't think even a careful implementation of an X-Robots noindex directive in htaccess would work, because the URLs should be resulting in a 404.
Make certain the problem URLs return a clean 404, use the Google Search Console Remove URLs tool for as many of them as you can (for example, you can request removal of entire directories, if the spam happened to be built that way), and then be patient for the rest. But do NOT block them in robots.txt - you'll just prolong the agony and waste your time.
Hope that all makes sense?
Paul
-
Hi Jon,
Why not just: Disallow: /viagra
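Worth noting: a plain Disallow: /viagra only matches URL paths that begin with /viagra, so it wouldn't catch the word mid-URL. A quick sketch with Python's standard robotparser, which implements that classic prefix matching:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse("User-agent: *\nDisallow: /viagra".splitlines())

print(rp.can_fetch("*", "http://example.com/viagra-pills"))      # False: path starts with /viagra
print(rp.can_fetch("*", "http://example.com/page?id=1&viagra"))  # True: prefix does not match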
-
Jon,
I have never done it with robots.txt, but one easy way I think you could do it would be at the page level: you could add a noindex, nofollow meta tag to the page itself.
You could also generate it automatically and have it fire depending on the URL, using a substring search on the URL, as in the sketch below. That will get them all for sure.
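A rough sketch of that idea; the function name is hypothetical, and how you hook it into the page template depends on your CMS:

def robots_meta_for(url):
    # Emit a noindex, nofollow meta tag for any URL containing the spam term.
    if "viagra" in url.lower():
        return '<meta name="robots" content="noindex, nofollow">'
    return ""

print(robots_meta_for("/page?id=1&Viagra-pills"))  # emits the tag
print(robots_meta_for("/products/widgets"))        # emits nothing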
Related Questions
-
Redirecting a Few URLs to a New Domain
We are in the process of buying the blog section of a site. Let's say Site A is buying Site B. We have taken the content from Site B and replicated it on Site A, along with the exact URL besides the TLD. We then issued 301 redirects from Site B to Site A and initiated a crawl on those original Site B URLs so Google would understand they now redirect to Site A. The new URLs for Site A, with the same content, are now showing up in Google's index if we do a site:SiteA.com search on the big G. Anyone have any experience with this as to how long before Site A URLs should replace Site B URLs in the search results? I understand there may be a ranking difference and CTR difference based on domain bias, etc. I'm just asking, if everything goes as planned and there isn't a huge issue, does the process take weeks or months?
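A minimal Python sketch for spot-checking that an old Site B URL really returns a 301 pointing at its Site A counterpart (the domain and path here are hypothetical):

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Stop urllib from following redirects so the 301 itself is visible.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirect)
req = urllib.request.Request("http://www.siteb.com/blog/some-post", method="HEAD")
try:
    opener.open(req)
except urllib.error.HTTPError as e:
    print(e.code, e.headers.get("Location"))  # expect 301 and a Site A URL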
Intermediate & Advanced SEO | seoaustin
-
URL Optimisation Dilemma
First of all, I fully appreciate that I may be over-analysing this, so feel free to highlight if you think I’m going overboard on this one. I’m currently trying to optimise the URLs for a group of new pages that we have recently launched. I would usually err on the side of leaving the URLs as they are so that any incoming links are not diluted through the 301 redirect. In this case, however, there are very few links to these pages, so I don’t think that changing URLs will harm them. My main question is between short URLs vs. long URLs (I have already read Dr. Pete’s post on this). Note: the URLs I have listed below are not the actual URLs, but very similar examples that I have created. The URLs currently exist in a format similar to the example below: http://www.company.com/products/dlm/hire-ca My first response was that we could put a few descriptive keywords in the URL, something like the following: http://www.company.com/products/debt-lifecycle-management/hire-collection-agents - I’m worried, though, that the URL will get too long for any pages sitting under this. As a compromise, I am considering the following: http://www.company.com/products/dlm/hire-collection-agents My feeling is that the second approach will give the best balance between having the keywords for the products and trying to ensure good user experience. My only concern is whether the /dlm/ category page would suffer slightly, but this would have ‘debt-lifecycle-management’ in the title tag. Does this sound like a good approach to people? Or do you think I’m being a little obsessive about this? Any help would be appreciated 🙂
Intermediate & Advanced SEO | RG_SEO
-
301 redirect to a temporary URL
Hi there, What would happen if I redirected a set of URLs to a temporary URL structure, and then a few weeks later redirected the original URLs and temporary URLs to the final permanent URLs? So for example: A -> B for a few weeks,
then: A -> C and B -> C, where:
C is the final destination URL.
B is the temporary destination.
A is the original URL.
The reason we are doing this is that the naming of the URLs and pages is different, and we wish to transition our customers carefully from old to new. I am looking for a purely technical response. Would we lose link juice? Does Google care if we permanently redirect to a set of 'temporary' URLs, and then permanently redirect to a set of what we think are permanent URLs? Cheers, Simon
Intermediate & Advanced SEO | sichristie
-
Should /node/ URLs be 301 redirected to clean URLs?
Hi All! We are in the process of migrating to Drupal, and I know that I want to block any instance of /node/ URLs with my robots.txt file to prevent search engines from indexing them. My question is: should we set 301 redirects on the /node/ versions of the URLs to redirect to their corresponding "clean" URLs, or should the robots.txt blocking and canonical link element be enough? My gut tells me to ask for the 301 redirects, but I just want to hear additional opinions. Thank you! MS
Intermediate & Advanced SEO | MargaritaS
-
Robots.txt: Can you put a /* wildcard in the middle of a URL?
We have noticed that Google is indexing the language/country directory versions of directories we have disallowed in our robots.txt. For example: Disallow: /images/ is blocked just fine. However, once you add our /en/uk/ directory in front of it, there are dozens of pages indexed. The question is: can I put a wildcard in the middle of the string, e.g. /en/*/images/, or do I need to list out every single country for every language in the robots file? Anyone know of any workarounds?
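Google's robots.txt documentation does treat '*' as matching any sequence of characters, so a mid-string wildcard like /en/*/images/ should work for Googlebot (support varies for other crawlers). A quick sketch of that matching, with hypothetical paths:

import re

def matches(pattern, path):
    # Google-style '*' wildcard: pattern is anchored at the start of the path.
    return re.match(re.escape(pattern).replace(r"\*", ".*"), path) is not None

print(matches("/en/*/images/", "/en/uk/images/photo.jpg"))  # True
print(matches("/images/", "/en/uk/images/photo.jpg"))       # False: rules match from the start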
Intermediate & Advanced SEO | IHSwebsite
-
Changing a URL from .html to .com
Hello, I have a client that has a site with a .html plugin, and I have read that it's best not to have this. We currently have pages ranking with this .html plugin. However, if we take the plugin out, will we lose rankings? Would we need a 301 or something?
Intermediate & Advanced SEO | SEODinosaur
-
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content, but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | nicole.healthline
-
Skip root page for brandname domain and just forward to keyword URL document?
SEOMoz community, I wanted to get you guys' ideas around what I would consider an unorthodox but potentially effective approach: I am currently internationalizing a brandname domain by building up different ccTLD domains and assigning them to local server resources. In order to push the brand of the project, all international ccTLDs share the same root domain, however the main keyword for each ccTLD is different. My question is essentially whether it would make sense to configure the root landing page of each international project as http://www.<brandname>.<cctld>/<market_keyword> rather than the typical approach http://www.<brandname>.<cctld>/ This would allow me to get the market-specific keyword into the landing page URL. The root domain would have a 301 redirect to the keyword landing page. Does anyone have any experience with this approach? Does a true root domain URL get some sort of SEO bonus that would not justify the above approach, or will the URL with the keyword in the path have higher SEO power? Looking forward to your responses - even if it's just projections/SEO gut feeling. Thanks /Thomas
Intermediate & Advanced SEO | tomypro