Duplicate Content in WordPress Taxonomies & Noindex, Follow
-
Hello Moz Community,
We are seeing duplicate content issues in our Moz report for our WordPress site’s Tag pages. After a bit of research, it appears one of the best solutions is to set Tag pages to “noindex, follow” within Yoast. That makes sense, but we have a few questions:
In doing this, how are we affecting our opportunity to show up in search results?
Are there any other repercussions to making this change?
What would it take to make the content on these pages be seen as unique?
-
Hi Corey
Don pretty much nailed this answer. I'll make sure to answer your specific questions:
-
You're not negatively affecting your ability to show in search by noindexing tags. They almost never rank or get traffic since they are just pages of thin content. Noindexing them does not affect your other pages.
-
No other repercussions - unless you have a random tag archive getting traffic. Check your analytics, and if so, you can individually leave specific tags indexed with the Yoast plugin. I wrote a post all about this a while back: http://www.evolvingseo.com/2012/08/10/clean-sweep-yo-tag-archives-now/
-
You would have to figure out how to give each page a unique title tag - which is honestly more effort than it's worth. The Moz tool flags duplicate content as an 'error', but the real issue is simply that tag pages don't have much value and can just be noindexed and then ignored.
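That said, if you did want to differentiate the titles, Yoast's taxonomy title template accepts replacement variables. Something like the following (a hypothetical template assembled from Yoast's documented %%...%% variables, set under the plugin's taxonomy settings) would at least make each tag archive's title distinct:

```
%%term_title%% Archives %%page%% %%sep%% %%sitename%%
```

Since %%term_title%% differs per tag and %%page%% differs per paginated page, no two tag archives would share a title - but the underlying thin-content problem would remain, which is why noindexing is still the simpler fix.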
-
-
Hey there. Have a look at this post, please - I believe it covers everything you'll need to know. Thanks again for the help, Don!
https://mza.seotoolninja.com/community/q/different-wp-taxonomies-seen-as-duplicate-content
-
The problem you are experiencing is due to the archive pages that are created every time you add a new tag, category, author, or other archive type. The issue occurs whenever you tag or categorize a post or page. For instance, say "Why Do We Need SEO" is the first ever post on your website, you tag it with SEO Best Practices, and you categorize it as SEO. You will now have three archive pages with duplicate content: the author page, the SEO Best Practices tag page, and the SEO category page. This is because each archive page consists of nothing but the same single post, "Why Do We Need SEO". As you write more posts, the duplicate pages may disappear depending on how you organize your content - but if you create lots of tags and tag everything, chances are you will always have duplicate content.
To keep these duplicates from causing problems, you should noindex these archive pages. But if you are disciplined with how you organize your tags and categories, you will not have a problem.
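If you'd rather not depend on a plugin for this, here's a minimal sketch of the same "noindex, follow" behavior - assuming a modern WordPress install (5.7 or later, where the core `wp_robots` filter exists) and dropped into your theme's functions.php:

```php
<?php
// Hypothetical functions.php snippet: mark tag, category, and author
// archives as "noindex, follow" via WordPress core's wp_robots filter.
add_filter( 'wp_robots', function ( array $robots ) {
    if ( is_tag() || is_category() || is_author() ) {
        $robots['noindex'] = true; // keep these archives out of the index
        $robots['follow']  = true; // but still let crawlers follow their links
    }
    return $robots;
} );
```

Yoast does the equivalent when you toggle a taxonomy to noindex in its settings, so treat this only as a fallback if you're not running the plugin - don't do both.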
Thanks,
Don