How to block "print" pages from indexing
-
I have a fairly large FAQ section and every article has a "print" button. Unfortunately, this is creating a separate page for every article, which is muddying up the index - especially on my own site using Google Custom Search.
Can you recommend a way to block this from happening?
Example Article:
Example "Print" page:
http://www.knottyboy.com/lore/article.php?id=052&action=print
-
Donnie, I agree. However, we had the same problem on a website and here's what we did with the canonical tag:
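It was basically just a link element in the <head> of every print page pointing back at the regular web version of the same article - roughly like this (the href below simply strips &action=print from the example URL in the question; in practice it should be whatever the article's main URL is):
<link rel="canonical" href="http://www.knottyboy.com/lore/article.php?id=052" />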
Over a period of 3-4 weeks, all those print pages disappeared from the SERPs. Now if I take a print URL and do a cache: search for that page, it shows me the web version of that page.
So yes, I agree the question was about blocking the pages from getting indexed. There's no real recipe here, it's about picking the right solution. Before the canonical tag, robots.txt was the only option. But now that canonical is available (provided one has the time and resources to implement it vs. adding one line of text to robots.txt), you can effectively 301 the print pages without having to stop or restrict the spiders from crawling them.
Absolutely no offence to your solution in any way. Both are indeed workable solutions. The best part is that your robots.txt solution takes 30 seconds to implement since you provided the actual disallow code :), so it's better.
-
Thanks Jennifer, will do! So much good information.
-
Sorry, but I have to jump in - do NOT use all of those signals simultaneously. You'll make a mess, and they'll interfere with each other. You can try Robots.txt or NOINDEX on the page level - my experience suggests NOINDEX is much more effective.
Also, do not nofollow the links yet - you'll block the crawl, and then the page-level cues (like NOINDEX) won't work. You can nofollow later. This is a common mistake and it will keep your fixes from working.
-
Josh, please read my and Dr. Pete's comments below. Don't nofollow the links, but do use the meta noindex,follow on the page.
-
Rel-canonical, in practice, does essentially de-index the non-canonical version. Technically, it's not a de-indexation method, but it works that way.
-
You are right Donnie. I've "good answered" you too.
I've gone ahead and updated my robots.txt file. As soon as I am able, I will use noindex on the page, nofollow on the links, and rel=canonical.
This is just what I needed, a quick fix until I can make a more permanent solution.
-
You're welcome : )
-
Although you are correct... there is still more than one way to skin a chicken.
-
But the spiders still crawl the page and read the canonical link; with robots.txt, the spiders won't crawl it at all.
-
Yes, but rel=canonical does not block a page - it only tells Google which of the two pages to index. The question was how to block, not how to tell Google which link to follow. I believe you gave credit to the wrong answer.
http://en.wikipedia.org/wiki/Canonical_link_element
This is not fair. lol
-
I have to agree with Jen - Robots.txt isn't great for getting indexed pages out. It's good for prevention, but tends to be unreliable as a cure. META NOINDEX is probably more reliable.
One trick - DON'T nofollow the print links, at least not yet. You need Google to crawl and read the NOINDEX tags. Once the ?print pages are de-indexed, you could nofollow the links, too.
-
Yes, it's strongly recommended. It should be fairly simple to populate this tag with the "full" URL of the article based on the article ID. This approach will not only get rid of the duplicate content issue; a canonical tag also works essentially like a 301 redirect. So from a search engine's perspective you are 301'ing your print pages to the real web URLs, without redirecting the actual users who want to browse the print pages.
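For example, since both versions are generated by article.php, something along these lines could output the tag - just a sketch, and lore_article_url() is a made-up helper standing in for however the CMS builds an article's real web URL from its ID:
<?php
// Sketch only: build the canonical (non-print) URL for this article.
// lore_article_url() is hypothetical - replace it with whatever returns
// the article's normal web URL for a given ID.
$id = isset($_GET['id']) ? preg_replace('/[^0-9]/', '', $_GET['id']) : '';
$canonical = lore_article_url($id);
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />
The same snippet can sit in the <head> of both the web and the print template, so every variant of an article points at the one "real" URL.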
-
Ya it is actually really useful. Unfortunately they are out of business now - so I'm hacking it on my own.
I will take your advice. I've shamefully never used rel=canonical before - so now is a good time to start.
-
True, but using robots.txt does not keep them out of the index. Only using "noindex" will do that.
-
Thanks Donnie. Much appreciated!
-
I actually remember Lore from a while ago. It's an interesting, easy to use FAQ CMS.
Anyway, I would also recommend implementing canonical tags for any possible duplicate content issues. So whether it's the print or the web version, each one of them will contain a canonical tag in the <head> section pointing to the web URL of that article.
rel="canonical" href="http://www.knottyboy.com/lore/idx.php/11/183/Maintenance-of-Mature-Locks-6-months-/article/How-do-I-get-sand-out-of-my-dreads.html" /> -
-
Try This.
User-agent: *
Disallow: /*&action=print
-
There's more than one way to skin a chicken.
-
Rather than using robots.txt, I'd use a noindex,follow meta tag on the page instead. This code goes into the <head> of each print page, and it will ensure that the pages don't get indexed but that the links on them are still followed.
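For reference, the tag itself is just the standard robots meta tag, along these lines:
<meta name="robots" content="noindex,follow" />
Put that in the <head> of the print version only (not the regular article pages), and the print URLs will drop out of the index while the links on them still get crawled.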
-
That would be great. Do you mind giving me an example?
-
You can block, in robots.txt, every page that ends in action=print.