Robots.txt Allowed
-
Hello all,
We want to block something that has the following at the end:
http://www.domain.com/category/product/some+demo+-text-+example--writing+here
So I was wondering if doing:
/*example--writing+here
would work?
-
Yes, that should work just fine. As Logan mentioned, I recommend you test it in the robots.txt testing tool in Google Search Console.
-
Yes, that would work. Bear in mind that if you have a real product whose URL happens to end with that string, it would be blocked too. A little off on a tangent here, but blocking a path in robots.txt does not mean that every spider out there is going to honor the rule. The major ones, like Googlebot, do honor it. Also, it doesn't mean that the URL won't be indexed. Sorry for the long-winded answer, but if this is truly an example or demo page that you don't want search engines to index, make sure you also include "noindex, nofollow" in the page's robots meta tag.
I agree with Logan Ray. If you're looking for the robots.txt Tester, you can Google "robots.txt tester" and the first result should be from support.google.com.
-
Hi Thomas,
That should work. You can confirm this by modifying your robots.txt file in Search Console and testing a handful of URLs to ensure they're blocked the way you want.
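Before touching Search Console, you can also sanity-check a rule offline. Here is a minimal Python sketch of the wildcard matching Google documents for robots.txt ('*' matches any run of characters, a trailing '$' anchors the end); the function names are mine, not part of any library:

```python
import re

def robots_rule_to_regex(rule: str):
    """Compile a robots.txt path rule into a regex, following the
    wildcard behavior Google documents: '*' matches any run of
    characters, a trailing '$' anchors the match to the end of the
    path, and everything else is a literal prefix match."""
    anchored = rule.endswith("$")
    if anchored:
        rule = rule[:-1]
    # Escape regex metacharacters, then restore '*' as '.*'
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.compile("^" + pattern + ("$" if anchored else ""))

def is_blocked(path: str, rule: str) -> bool:
    return robots_rule_to_regex(rule).match(path) is not None

# The rule and URL path from the question:
rule = "/*example--writing+here"
path = "/category/product/some+demo+-text-+example--writing+here"
print(is_blocked(path, rule))                       # True
print(is_blocked("/category/product/other", rule))  # False
```

This is only a sketch of the documented matching behavior, so treat the official tester as the final word; it does confirm the leading wildcard covers the /category/product/ part of the path.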
Related Questions
-
Robots.txt advice
Hey Guys, have you ever seen coding like this in a robots.txt? I have never seen a noindex rule in a robots.txt file before - have you?
User-agent: AhrefsBot
User-agent: trovitBot
User-agent: Nutch
User-agent: Baiduspider
Disallow: /
User-agent: *
Disallow: /WebServices/
Disallow: /*?notfound=
Disallow: /?list=
Noindex: /?*list=
Noindex: /local/
Disallow: /local/
Noindex: /handle/
Disallow: /handle/
Noindex: /Handle/
Disallow: /Handle/
Noindex: /localsites/
Disallow: /localsites/
Noindex: /search/
Disallow: /search/
Noindex: /Search/
Disallow: /Search/
Disallow: ?
Any pointers?
Intermediate & Advanced SEO | eLab_London
-
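On those Noindex: lines: that was never an officially documented robots.txt directive, and Google announced in 2019 that it would stop honoring it. The supported route is a robots meta tag or an X-Robots-Tag response header, and the page has to remain crawlable for either to be seen. A minimal Python sketch of that idea (the helper and prefix list are illustrative, taken from the paths in the pasted file, not from any library):

```python
# Hypothetical helper: serve an X-Robots-Tag header for the sections
# the pasted file tried to keep out of the index with 'Noindex:' rules.

NOINDEX_PREFIXES = ("/local/", "/handle/", "/Handle/",
                    "/localsites/", "/search/", "/Search/")

def robots_headers(path: str) -> dict:
    """Return the extra response header that replaces 'Noindex:' rules."""
    if path.startswith(NOINDEX_PREFIXES):  # startswith accepts a tuple
        return {"X-Robots-Tag": "noindex"}
    return {}

print(robots_headers("/local/page"))  # {'X-Robots-Tag': 'noindex'}
print(robots_headers("/products/1"))  # {}
```

Note the catch: the pasted file also Disallows the same paths, and a URL that is blocked from crawling can never surface its noindex signal to Google.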
CPanel Redirect not allowing login access.
Using the redirect function in cPanel I am able to create the 301 redirect that I need to not have duplicate content issues in Moz. However, the issue now is that when I try to login to domain.com/login it redirects to domain.com/index.php?q=admin, which is not a page on the site and I can no longer login. I have checked the htaccess file and it appears that the entry is correct ( I originally thought that the cPanel redirect was not writing access correctly ). I am not sure if there is a small detail that I am missing with this or not. So my main question is how do I redirect my site to remove dup content errors while retaining the login at domain.com/admin and not be redirected to domain.com/index.php?q=admin? Thank you ahead of time for your assistance.
Intermediate & Advanced SEO | Highline_Ideas
-
Robots.txt
What would be the perfect robots.txt file for my site? My site is propdental.es. Can I just place:
User-agent: *
Or should I write something more?
Intermediate & Advanced SEO | maestrosonrisas
-
What should I block with a robots.txt file?
Hi Mozzers, We're having a hard time getting our site indexed, and I have a feeling my dev team may be blocking too much of our site via our robots.txt file. They say they have disallowed php and smarty files. Is there any harm in allowing these pages? Thanks!
Intermediate & Advanced SEO | Travis-W
-
How to Disallow Tag Pages With Robots.txt
Hi, I have a site I'm dealing with that has tag pages, for instance http://www.domain.com/news/?tag=choice. How can I exclude these tag pages (about 20+ of them) from being crawled and indexed by the search engines with robots.txt? Also, they're sometimes created dynamically, so I want something that automatically excludes tag pages from being crawled and indexed. Any suggestions? Cheers, Mark
Intermediate & Advanced SEO | monster99
-
Is it allowed to have different alt text on the same image on different pages?
Hi, I have images that match several different keywords, and I wondered if I can give them different alt text based on the page they are displayed on, or will Google be angry with me? Thanks
Intermediate & Advanced SEO | BeytzNet
-
Using 2 wildcards in the robots.txt file
I have a URL string which I don't want to be indexed. It includes the characters _Q1 in the middle of the string. So in the robots.txt, can I use 2 wildcards in the string to take out all of the URLs with that in it? So something like /_Q1. Will that pick up and block every URL with those characters in the string? Also, this is not directly off the root, but in a secondary directory, so .com/.../_Q1. So do I have to format the robots.txt as //_Q1* as it will be in the second folder, or will just using /_Q1 pick up everything no matter what folder it is in? Thanks.
Intermediate & Advanced SEO | seo123456
-
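The two sub-questions here (does a leading wildcard help, and does directory depth matter) are easy to check mechanically. A short Python sketch, assuming Google's documented matching where rules match from the start of the path and '*' covers any run of characters (the rule strings below are my own illustrations):

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    # robots.txt rules match from the start of the path; '*' = any characters
    return re.match("^" + re.escape(rule).replace(r"\*", ".*"), path) is not None

# Without a leading wildcard, '/_Q1' only matches right at the root:
print(rule_matches("/_Q1", "/_Q1-page"))           # True
print(rule_matches("/_Q1", "/shop/item_Q1-red"))   # False
# With a leading wildcard, '/*_Q1' matches the token at any depth:
print(rule_matches("/*_Q1", "/shop/item_Q1-red"))  # True
```

Under that assumption, a bare /_Q1 would not reach URLs in a secondary directory, while a leading /* makes the rule folder-independent; confirm the exact behavior in the robots.txt Tester before deploying.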
Negative impact on crawling after uploading robots.txt file on HTTPS pages
I experienced a negative impact on crawling after uploading a robots.txt file for our HTTPS pages. You can find both URLs as follows.
Robots.txt file for HTTP: http://www.vistastores.com/robots.txt
Robots.txt file for HTTPS: https://www.vistastores.com/robots.txt
I have disallowed all crawlers for HTTPS pages with the following syntax:
User-agent: *
Disallow: /
Does it matter for that? If I have done anything wrong, please give me more ideas to fix this issue.
Intermediate & Advanced SEO | CommercePundit