Could you use a robots.txt file to disallow a duplicate content page from being crawled?
-
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO.
I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
-
Yeah, sorry for the confusion. I put the tag on all the pages (original and duplicate). I sent you a PM with another good article on the rel canonical tag.
-
Peter, Thanks for the clarification.
-
Generally agree, although I'd just add that Robots.txt also isn't so great at removing content that's already been indexed (it's better at prevention). So, I find that it's not just not ideal - it sometimes doesn't even work in these cases.
Rel-canonical is generally a good bet, and it should go on the duplicate (you can actually put it on both, although it's not necessary).
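For anyone following along, here's a minimal sketch of what that looks like in practice. The URLs below are placeholders, not from the actual site in question:

```html
<!-- In the <head> of the DUPLICATE page,
     e.g. http://www.example.com/category-b/widget -->
<head>
  <title>Widget</title>
  <!-- Tells search engines to consolidate ranking signals
       onto the version you want to rank -->
  <link rel="canonical" href="http://www.example.com/category-a/widget" />
</head>
```

The nice part of this approach is that both pages stay reachable for visitors through either navigation path, while engines treat them as one.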
-
Next time I'll read the reference links more carefully.
Thank you!
-
Per Google Webmaster Tools:
"If Google knows that these pages have the same content, we may index only one version for our search results. Our algorithms select the page we think best answers the user's query. Now, however, users can specify a canonical page to search engines by adding a <link> element with the attribute rel="canonical" to the <head> section of the non-canonical version of the page. Adding this link and attribute lets site owners identify sets of identical content and suggest to Google: 'Of all these pages with identical content, this page is the most useful. Please prioritize it in search results.'"
-
Thanks Kyle. Anthony had a similar view on using the rel canonical tag. I'm just curious whether to add it to the original page or the duplicate page? Or both?
Thanks,
Greg
-
Anthony, thanks for your response. See Kyle's reply - he also felt using the rel canonical tag was the best thing to do. However, he seemed to think you'd put it on the original page - the one you want to rank for - while you're suggesting putting it on the duplicate page. Should it be added to both while specifying which page is the 'original'?
Thanks!
Greg
-
I'm not sure I understand why the site owner thinks the duplicate content is necessary.
If I were in your situation, I would try to convince the client to remove the duplicate content from their site rather than trying to find a way around it.
If the information is difficult to find then this may be due to a problem with the site architecture. If the site does not flow well enough for visitors to find the information they need, then perhaps a site redesign is necessary.
-
Well, the answer would be yes and no. A robots.txt file would stop bots from crawling the page, but links from other pages in the site (or elsewhere on the web) to that blocked page could still get it indexed. As posted in Google Webmaster Tools:
"You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything in your site, you don't need a robots.txt file (not even an empty one).
While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results."
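To illustrate the distinction that quote is making, here's a robots.txt sketch - the /duplicate-page/ path is just a placeholder for whichever URL you'd block:

```text
# robots.txt at the site root
# This blocks CRAWLING of the path, but the URL itself can still
# appear in Google's index if other pages link to it.
User-agent: *
Disallow: /duplicate-page/
```

That's exactly why a blocked page can still show up in results as a bare URL with no snippet.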
I think the best way to avoid any conflict is to apply the rel="canonical" tag to each duplicate page that you don't want indexed.
You can find more info on rel canonical here.
Hope this helps out some.
-
The best way would be to use the rel canonical tag.
On the page you would like to rank for, put the rel canonical tag in.
This lets Google know that this is the original page.
Check out this link posted by Rand about the rel canonical tag: [http://www.seomoz.org/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps](http://www.seomoz.org/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps)
Related Questions
-
Please help - Duplicate Content
Hi, I am really struggling to understand why my site has a lot of duplicate content issues. The count is flagging up as ridiculously high and I have no idea how to fix this - can anyone help me, please? The website is www.firstcapitol.co.uk
Intermediate & Advanced SEO | Alix_SEO1 -
Should I use noindex or robots to remove pages from the Google index?
I have a Magento site and just realized we have about 800 review pages indexed. The /review directory is disallowed in robots.txt, but the pages are still indexed. From my understanding, robots.txt means Google will not crawl the pages, BUT the pages can still be indexed if they are linked from somewhere else. I can add the noindex tag to the review pages, but it won't be seen while they aren't being crawled. https://www.seroundtable.com/google-do-not-use-noindex-in-robots-txt-20873.html Should I remove the robots.txt rule and add the noindex? Or just add the noindex to what I already have?
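The trade-off described here comes down to: robots.txt blocks crawling, while a noindex tag blocks indexing but only works if the page can be crawled. A sketch of the noindex route (generic markup, not Magento-specific):

```html
<!-- In the <head> of each /review/ page.
     The "Disallow: /review" rule must be removed from robots.txt
     first, or Googlebot will never fetch the page and see this tag. -->
<meta name="robots" content="noindex, follow">
```

With "follow" kept, link equity can still flow through the page even though the page itself drops out of the index.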
Intermediate & Advanced SEO | Tylerj0 -
Duplicate content issue
Hello! We have a lot of duplicate content issues on our website. Most of the pages with these issues are dictionary pages (about 1200 of them). They're not exactly duplicate, but they contain a different word with a translation, picture and audio pronunciation (example http://anglu24.lt/zodynas/a-suitcase-lagaminas). What's the better way of solving this? We probably shouldn't disallow dictionary pages in robots.txt, right? Thanks!
Intermediate & Advanced SEO | jpuzakov0 -
Concerns of Duplicative Content on Purchased Site
Recently I purchased a site of 50+ DA (oldsite.com) that had been offline/404 for 9-12 months under the previous owner. The purchase included the domain and the content previously hosted on it. The backlink profile is 100% contextual and pristine. Upon purchasing the domain, I did the following:
- Rehosted the old site and content that had been down for 9-12 months on oldsite.com
- Allowed a week or two for indexation of oldsite.com
- Hosted the old content on my newsite.com and then performed 100+ contextual 301 redirects from oldsite.com to newsite.com using direct and wildcard htaccess rules
- Issued a press release declaring the acquisition of oldsite.com by newsite.com
- Performed a site "Change of Name" in Google from oldsite.com to newsite.com
- Performed a "Site Move" in Bing/Yahoo from oldsite.com to newsite.com
It's been close to a month, and while organic traffic is growing gradually, it's not what I would expect from a domain with 700+ referring contextual domains. My current concern is that original attribution of the content on oldsite.com may have shifted to scraper sites during the year or so it was offline. For example:
- Oldsite.com has full attribution prior to going offline
- Scraper sites scan the site and repost content elsewhere (unsuccessfully at the time, because Google knows the original attribution)
- Oldsite.com goes offline
- Scraper sites continue hosting the content
- Google loses the consumer-facing cache from oldsite.com (and potentially loses original attribution of the content)
- Google reassigns original attribution to a scraper site
- Oldsite.com is hosted again, and Google no longer remembers its original attribution and thinks the content is stolen
- Google then silently punishes oldsite.com and newsite.com (which it redirects to)
QUESTIONS: Does this sequence have any merit? Does Google keep track of original attribution after the content ceases to exist in Google's search cache?
Are there any tools or ways to tell if you're being punished for content being posted elsewhere on the web, even if you originally had attribution? Unrelated: are there any other steps recommended for a change of site as described above?
Intermediate & Advanced SEO | PetSite0 -
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on what is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages - pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, and that may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments" etc., but the above is the important question here.
Intermediate & Advanced SEO | khi50 -
Recovering from robots.txt error
Hello, A client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOMoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found the vast majority of their rankings have gone. It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have "Fetched as Google" and "Submitted URL and linked pages" in GWMT. In GWMT it is still showing that the vast majority of pages are blocked in the "Blocked URLs" section, although the robots.txt file below it is now OK. I guess what I want to ask is: What else can we do to recover these rankings quickly? What time scales can we expect for recovery? More importantly, has anyone had any experience with this sort of situation, and is full recovery normal? Thanks in advance!
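For anyone who finds this thread later, the gap between a disastrous robots.txt and a harmless one is tiny, which is how mistakes like this happen. An illustrative comparison:

```text
# DISASTROUS: a bare slash blocks crawling of the ENTIRE site
User-agent: *
Disallow: /

# HARMLESS: an empty Disallow value blocks nothing at all
User-agent: *
Disallow:
```

Worth adding a check for this to any deployment review, since a single character separates the two.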
Intermediate & Advanced SEO | RikkiD220 -
Removing Duplicate Page Content
Since joining SEOMOZ four weeks ago I've been busy tweaking our site, a Magento eCommerce store, and have successfully removed a significant portion of the errors. Now I need to remove/hide duplicate pages from the search engines, and I'm wondering what is the best way to attack this. Can I solve this in one central location, or do I need to do something in the Google and Bing webmaster tools? Here is a list of duplicate content:
http://www.unitedbmwonline.com/?dir=asc&mode=grid&order=name
http://www.unitedbmwonline.com/?dir=asc&mode=list&order=name
http://www.unitedbmwonline.com/?dir=asc&order=name
http://www.unitedbmwonline.com/?dir=desc&mode=grid&order=name
http://www.unitedbmwonline.com/?dir=desc&mode=list&order=name
http://www.unitedbmwonline.com/?dir=desc&order=name
http://www.unitedbmwonline.com/?mode=grid
http://www.unitedbmwonline.com/?mode=list
Thanks in advance, Steve
Intermediate & Advanced SEO | SteveMaguire0 -
Pop Up Pages Being Indexed, Seen As Duplicate Content
I offer users the opportunity to email and embed images from my website. (See this page http://www.andertoons.com/cartoon/6246/ and look under the large image for "Email to a Friend" and "Get Embed HTML" links.) But I'm seeing the ensuing pop-up pages (Ex: http://www.andertoons.com/embed/5231/?KeepThis=true&TB_iframe=true&height=370&width=700&modal=true and http://www.andertoons.com/email/6246/?KeepThis=true&TB_iframe=true&height=432&width=700&modal=true) showing up in Google. Even worse, I think they're seen as duplicate content. How should I deal with this?
Intermediate & Advanced SEO | andertoons0