Noindexing Duplicate (non-unique) Content
-
When "noindex" is added to a page, does this ensure Google does not count the page as part of its analysis of the unique vs. duplicate content ratio on a website? Example: I have a real estate business and I have noindex on MLS pages. However, is there a chance that even though Google does not index these pages, Google will still see those pages and think, "ah, these are duplicate MLS pages, we are going to let those pages drag down the value of the entire site and lower the ranking of even the unique pages"? I'd like to just use "noindex, follow" on those MLS pages, but would it be safer to add the pages to robots.txt as well? In theory, that should increase the likelihood that Google will not see such MLS pages as duplicate content on my website.
On another note: I had these MLS pages indexed, and 3-4 weeks ago I added "noindex, follow". However, they are still all indexed, with no signs Google is deindexing them yet.
-
Canonical pages don't have to be identical. Google will merge the content so it looks like one page.
Good luck
-
thx, Alan. I am already using rel=next/prev. However, that means all those paginated pages will still be indexed. I am adding "noindex, follow" to pages 2-n and only leaving page 1 indexed. Canonical: I don't think that will work. Each page in the series shows different properties, which means pages 1-n are all different.
-
OK, if you use follow, that will be fine, but I would be looking at canonical or next/previous first.
-
I am trying to rank for those near-duplicate MLS pages, since that is what users want (they don't want my guide pages with lots of unique data when they are searching "....for sale"). I will add unique data to page 1 of these MLS result pages. However, pages 2-50 will NOT change (they will stay near-duplicate). If I have pages 1-50 indexed, the unique content on page 1 may look like a drop in the ocean to G, and that is why I feel including "noindex, follow" on pages 2-50 may make sense.
-
That's correct.
You won't rank for duplicate pages, but unless most of your site is duplicate, you won't be penalized.
-
http://moz.com/blog/handling-duplicate-content-across-large-numbers-of-urls - that is Rand's Whiteboard Friday from a few weeks ago, and I quote from the transcript:
"So what happens, basically, is you get a page like this. I'm at BMO's Travel Gadgets. It's a great website where I can pick up all sorts of travel supplies and gear. The BMO camera 9000 is an interesting one because the camera's manufacturer requires that all websites which display the camera contain a lot of the same information. They want the manufacturer's description. They have specific photographs that they'd like you to use of the product. They might even have user reviews that come with those.
Because of this, a lot of the folks, a lot of the e-commerce sites who post this content find that they're getting trapped in duplicate content filters. Google is not identifying their content as being particularly unique. So they're sort of getting relegated to the back of the index, not ranking particularly well. They may even experience problems like Google Panda, which identifies a lot of this content and says, "Gosh, we've seen this all over the web and thousands of their pages, because they have thousands of products, are all exactly the same as thousands of other websites' other products."
-
There is nothing wrong with having duplicate content. It becomes a problem when you have a site that is all or almost all duplicate or thin content.
Having a page that is on every other competitor's site will not harm you; you just may not rank for it.
But noindexing can cause loss of link juice, as all links pointing to non-indexed pages waste their link juice. Using "noindex, follow" will return most of this, but there is still no need to noindex.
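For reference, "noindex, follow" is just a standard robots meta tag placed in the page's <head> (a generic snippet, not specific to any one platform):

```html
<!-- Keep this page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```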
-
http://www.honoluluhi5.com/oahu-condos/ - this is an "MLS result page". That URL will soon have some statistics and will be unique (I will include it in the index). The paginated pages (2 to n) hardly have any unique content. It is a great layout and users love it (from my AdWords campaign, the average user spends 9 minutes and views 16 pages on the site), but since these are MLS listings (shared amongst thousands of Realtors), Google will see "ah, these are duplicate pages, nothing unique". That is why I plan to index page 1 (the URL I list) but keep all paginated pages, like http://www.honoluluhi5.com/oahu-condos/page-2, as "noindex, follow". Also, I want to rank for this URL: http://www.honoluluhi5.com/oahu/honolulu-condos/ which is a sub-category of the first URL, and 100% of its content is exactly the same as the first URL's. So, I will focus on indexing just the first page and not the paginated pages. Unfortunately, G cannot see value in layout and design, and I can see how keeping all pages indexed could hurt my site.
Would be happy to hear your thoughts on this. I launched the site 4 months ago with more unique and quality content than 99% of the other firms I am up against, yet nothing has happened ranking-wise yet. I suspect all these MLS pages are the issue. Time will show!
-
If you noindex, I don't think next/previous will have any effect.
If they are all different, and the keywords are all important, why noindex?
-
Thx, Philip. I am already using it, but I thought adding "noindex, follow" to those paginated pages (on top of rel=next/prev) would increase the likelihood G will NOT see all those MLS result pages as a bunch of duplicate content. Page 1 may look thin, but with some statistical data I will soon include, it is unique, and that uniqueness may offset the lack of indexed MLS result pages. Not sure if my reasoning is sound. Would be happy to hear if you feel differently.
-
Sounds like you should actually be using rel=next and rel=prev.
More info here: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
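As a rough sketch of what that markup looks like (using one of the paginated series from this thread as a hypothetical example), page 2 of a series would declare both of its neighbors in the <head>:

```html
<!-- On http://www.honoluluhi5.com/oahu-condos/page-2 -->
<link rel="prev" href="http://www.honoluluhi5.com/oahu-condos/">
<link rel="next" href="http://www.honoluluhi5.com/oahu-condos/page-3">
```

Page 1 of the series gets only rel="next", and the final page gets only rel="prev".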
-
Hi Alan, thx for your comment. Let me give you an example, and if you have a thought, that'd be great:
- Condos on Island: http://www.honoluluhi5.com/oahu-condos/
- Condos in City: http://www.honoluluhi5.com/oahu/honolulu-condos/
- Condos in Region: http://www.honoluluhi5.com/oahu/honolulu/metro-condos/
Properties on the result page for 3) are all in 2), and all properties within 2) are within 1). Furthermore, for each of those URLs, the paginated pages (2 to n) are all different, since each property is different, so using canonical tags would not be accurate. 1 + 2 + 3 are all important keywords.
Here is what I am planning: add some unique content to the first page in the series for each of those URLs and include just that first page in the index, but keep "noindex, follow" on pages 2 to n. The argument could be "your MLS result pages will look too thin and not rank", but the other way of looking at it is "with potentially 500 or more properties on each URL, a bit of stats on page 1 will not offset all the MLS duplicate data, so even though the page may look thin, only indexing page 1 is the best way forward".
-
Remember that if you no-index pages, any link you have on your site pointing to those pages is wasting its link juice.
This looks like a job for the canonical tag.
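For reference, a canonical tag is a single line in the <head> of the duplicate page pointing at the version you want credited (the URLs below are just a hypothetical pairing taken from this thread):

```html
<!-- On http://www.honoluluhi5.com/oahu/honolulu-condos/, if you wanted its
     ranking signals consolidated into the broader island page -->
<link rel="canonical" href="http://www.honoluluhi5.com/oahu-condos/">
```

Keep in mind that Google treats rel=canonical as a strong hint rather than a directive.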
-
lol - good answer, Philip. I hear you. What makes it difficult is the lack of crystal-clear guidelines from search engines. It is almost like they don't know themselves, and each case is decided on a "what feels right" basis.
-
Good find. I've never seen this part of the help section. The recurring reasoning behind all of their examples seems to be "You don't need to manually remove URLs; they will drop out naturally over time."
I have never had an issue, nor have I ever heard of anyone having an issue, removing URLs with the Removal Tool. I guess if you don't feel safe doing it, you can wait for Google's crawler to catch up, although it could take over a month. If you're comfortable waiting it out, have no reason to rush it, AND feel like playing it super safe... you can disregard everything I've said.
We all learn something new every day!
-
Based on Google's own guidelines, it appears to be a bad idea to use the removal tool under normal circumstances (which I believe my site falls under): https://support.google.com/webmasters/answer/1269119
It starts with: "The URL removal tool is intended for pages that urgently need to be removed—for example, if they contain confidential data that was accidentally exposed. Using the tool for other purposes may cause problems for your site."
-
thx, Philip. Most helpful. I will get on it
-
Yes. It will remove /page-52 and EVERYTHING that exists in /oahu/honolulu/metro/waikiki-condos/. It will also remove everything that exists in /page-52/ (if anything). It trickles down as far as the folders in that directory will go.
**Go to Google search and type this in:** site:honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/
That will show you everything that's going to be removed from the index.
-
Yep, you got it.
You can think of it exactly like Windows folders, if that helps. If you have C:\Website\folder1 and C:\Website\folder12, removing \folder1\ would leave \folder12\ alone, because they're not the same directory.
-
For some MLS result pages I have a BUNCH of pages, and I want to remove them from the index with 1 click, as opposed to having to include each paginated page. Example: I simply include "/oahu/honolulu/metro/waikiki-condos/" and that will ALSO remove this page from the index: http://www.honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/page-52 - is that correct?
-
Removing directory "/oahu/waianae-makaha-condos/" will NOT remove "/oahu/waianae-makaha/maili-condos/" because the silos "waianae-makaha" and "waianae-makaha-condos" are different.
HOWEVER,
removing directory "/oahu/waianae-makaha/maili-condos/" will remove "/oahu/waianae-makaha/maili-condos/page-2" because they share the silo "waianae-makaha". Is that correctly understood?
-
Yep. Just last week I had an entire website deindexed (on purpose; it's a staging website) by entering just / into the box and selecting "directory". By the next morning the entire website was gone from the index.
It works for folders/directories too. I've used it many times.
-
so I will remove directory for "/oahu/waianae-makaha/maili-condos/" and that will ensure removal of "/oahu/waianae-makaha/maili-condos/page-2" as well?
-
thx, Philip. So you are saying if I use the directory option that will ensure the paginated pages will also be taken out of the index like this page: /oahu/waianae-makaha/maili-condos/page-2
-
I'm not 100% sure Google will understand you if you leave off the slashes. I've always added them and have never had a problem, so you want to type: /oahu/waianae-makaha-condos/
Typing that would NOT include the neighborhood URL, in your example. It will only remove everything that exists in the /waianae-makaha-condos/ folder (including that main category page itself).
edit >> To remove the neighborhood URL and everything in that folder as well, type /oahu/waianae-makaha/maili-condos/ and select the option for "directory".
edit #2 >> I just want to add that you should be very careful with this. You don't want to use the directory option unless you're 100% sure there's nothing in that directory that you want to stay indexed.
-
thx. I have a URL like this for a REGION: http://www.honoluluhi5.com/oahu/waianae-makaha-condos/ and for a "NEIGHBORHOOD" I have this: http://www.honoluluhi5.com/oahu/waianae-makaha/maili-condos/
As you can see Region has "waianae-makaha-condos" directory, whereas the Neighborhood has "waianae-makaha" without the "condos" for that region directory part.
Question: when I go to GWT to remove, can I simply type "oahu/waianae-makaha-condos", select the directory option, and that will ALSO exclude the neighborhood URL? OR, since the region part of the neighborhood URL is different, do I have to submit them individually?
-
Yep! After you remove the URL or directory of URLs, there is a "Reinclude" button you can get to. You just need to switch your "Show:" view so it shows URLs removed. The default is to show URLs PENDING removal. Once they're removed, they will disappear from that view.
-
Good one, Philip. Last BIG question: if I remove URLs via GWT, is it possible to "unremove" them without issue? I am planning to index some of these MLS pages in the future when I have more unique content on them.
-
When "noindex" is added to a page, does this ensure Google does not count the page as part of its analysis of the unique vs. duplicate content ratio on a website? Yes, that tells Google that you understand the pages don't belong in the index. They will not penalize your site for duplicate content if you're explicitly telling Google to noindex those pages.
Is there a chance that even though Google does not index these pages, Google will still see those pages and think "ah, these are duplicate MLS pages, we are going to let those pages drag down the value of the entire site and lower the ranking of even the unique pages"? No, there's no chance these will hurt you if they're set to noindex. That is exactly what the noindex tag is for. You're doing what Google wants you to do.
I'd like to just use "noindex, follow" on those MLS pages, but would it be safer to add the pages to robots.txt as well, which should - in theory - increase the likelihood Google will not see such MLS pages as duplicate content? You could add them to your robots.txt, but that won't lower your risk any further, because there is already no worry about being penalized for pages that are not indexed.
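One caveat worth adding (a general point about how crawling works, not something specific to this site): if you block these pages in robots.txt, Googlebot can't fetch them, so it will never see the "noindex, follow" meta tag on them. If you rely on the meta tag, leave the pages crawlable. A robots.txt block would look something like this sketch (hypothetical path pattern):

```text
# robots.txt - Disallow stops Googlebot from crawling these URLs,
# which also prevents it from ever seeing a meta noindex on them
User-agent: *
Disallow: /oahu-condos/page-
```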
On another note: I had these MLS pages indexed, and 3-4 weeks ago I added "noindex, follow". However, they are still all indexed, with no signs Google is deindexing them yet.
Donna's advice is perfect here. Use the Remove URLs tool. Every time I've used the tool, Google has removed the URLs from the index in less than 12-24 hours. I of course made sure to have a noindex tag in place first. Just make sure you enter everything AFTER the TLD (.com, .net, etc.) and nothing before it. Example: you'd want to ask Google to remove /mls/listing122, but not example.com/mls/listing122. The latter will not work properly because Google automatically adds "example.com" to it (they just don't make this very clear).
-
thx, Donna. My question was mainly around whether Google will NOT consider the MLS pages duplicate content once I place "noindex" on them. We can all guess, but does anyone have anything concrete on this? Can we say with 90% certainty: "yes, if you place noindex on a duplicate content page, then Google will not consider it duplicate content, hence it will not count towards how Google views duplicate vs. unique site content"? That is the big question. If we are left in uncertainty, then the only way forward may be to password-protect such pages and not offer them to users without an account.
Removal in GWT: I plan to index some of these MLS pages in the future (when I get more unique content on them), and I am concerned that once pages are submitted to GWT for removal, it will be tough to get them indexed again.
-
Hi khi5,
I think excluding those MLS listings from your site using the robots.txt file would be overkill.
As I'm sure you well know, Google does what it wants. I don't think tagging the pages you don't want indexed with "noindex, follow" AND adding them to the robots.txt file makes the likelihood that Google will respect your wishes any higher. You might want to consider canonicalizing them, though, so links to, bookmarks of, and shares of said pages get credited to your site.
As to how long it takes for Google to deindex said pages, it can take a very long time. In my experience, "a very long time" can run 6-8 months. You do have the option, however, of using Google Webmaster Tools > Google Index > Remove URLs to ask to have them deindexed faster. Again, no guarantees that Google will do as you ask, but I've found them to be pretty responsive when I use the tool.
I'd love to hear if anyone else feels differently.