Strange duplicate content issue
-
Hi there,
The SEOmoz crawler has identified a set of duplicate content that we are struggling to resolve.
For example, the crawler reports that this page, www.creative-choices.co.uk/industry-insight/article/Advice-for-a-freelance-career, is a duplicate of this page: www.creative-choices.co.uk/develop-your-career/article/Advice-for-a-freelance-career.
The latter page's content is the original and can be found in the CMS admin area, whilst the former is the duplicate and has no entry in the CMS. So we don't know where to begin if the "duplicate" page doesn't exist in the CMS.
The crawler states that this page, www.creative-choices.co.uk/industry-insight/inside/creative-writing, is the referrer page. Looking at it, only the original page's link appears on the referrer page, so how did the crawler reach the duplicate page?
-
It could be any one of the following scenarios:
1: The page in question was moved at some point, and since the CMS still accepts the old URL, Google still finds the page when it re-visits the old URL. In this scenario it will find both the old URL and the new URL and index both.
2: Google hasn't revisited the page for a long while, but the page is still in its index, even though it would get a 301 from the CMS when it visits. This is easily fixed by going to Webmaster Tools and asking for it to be removed from the index.
3: There are still links to the old URL, either on-site or off-site, and since the CMS doesn't 301 the old page, it gets indexed again under a new URL.
4: The page still exists in the CMS because of some strange setting or equivalent in the CMS.
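One quick way to narrow down which of these scenarios applies is to check what HTTP status the old URL actually returns (e.g. with `curl -I`). A minimal sketch of that decision logic, matching the interpretations above (the function name is mine, not anything from the crawler):

```python
# Sketch: map the HTTP status returned by the old/duplicate URL to the
# likely scenario. Check the status with e.g. `curl -I <old-url>` first.
def diagnose_old_url(status: int) -> str:
    if status == 200:
        # CMS still serves the page at its old path (scenario 1 or 4)
        return "old URL still resolves: both copies can get indexed"
    if status in (301, 308):
        # CMS already redirects (scenario 2): the index entry is just stale
        return "old URL redirects: ask for removal from the index"
    if status == 404:
        # nothing served any more (scenario 3 until re-crawled)
        return "old URL is gone: the duplicate listing is stale"
    return "unexpected status: investigate further"

print(diagnose_old_url(200))
```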
As mentioned before, the easy fix is to use a robots.txt to deny access to the page and then ask Google to remove it from its index. The better fix is to find the problem in the CMS and solve it. A midway fix is to 301 it in the .htaccess, or the equivalent on an IIS server.
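For the .htaccess route, a hedged sketch of that 301 (Apache; the paths are illustrative, taken from the example URLs in the question, and the real pattern depends on how the site's URLs are routed):

```apache
# Hypothetical .htaccess sketch: 301 the duplicate path to the original.
# Adjust the patterns to the site's actual URL structure before using.
RewriteEngine On
RewriteRule ^industry-insight/article/(.*)$ /develop-your-career/article/$1 [R=301,L]
```

A rule like this sits in front of the CMS, so it works even when the duplicate page "doesn't exist" in the CMS admin area.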
Hope this helps.
-
Thanks René,
I updated my earlier reply with a question that I think you missed.
The list isn't growing, which is a good thing, but how is it possible for the crawler to pick up the duplicate page URLs when the referrer page has the correct URLs?
-
I have come across this sort of issue countless times; almost all of our clients seem to have duplicate content problems of one kind or another,
but it is often a different underlying cause each time. I'm afraid I can't point you in the right direction, though, since I have no experience with your CMS. To do that I would need access to the site itself. My advice would be to get a developer on the issue, or to contact the support for the CMS (if any).
-
Hi René,
Thanks for your reply and suggestions. It could well be the CMS remembering old URLs, as this list isn't growing. But is the crawler able to pick up the old URLs when the referrer page has the correct URLs?
We are on Expression Engine. Have you come across this sort of issue before?
-
Well, it pretty much has to be in the CMS, since the page has two different paths. But you could work around it in the .htaccess (if you have access) by redirecting the duplicate to the right URL, and by making a robots.txt that disallows access to the page.
If the page has been moved to a new location, there's a good chance that the CMS is set up to remember the old URL and still serve the page. This is indeed a problem, but a problem with the CMS.
Go to Webmaster Tools and ask them to delete the duplicate from their index.
Your specific problem could originate from a ton of different causes, and it is hard to fix without direct access to everything. What CMS are you using?
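A minimal sketch of the robots.txt disallow mentioned above (the path is illustrative, taken from the duplicate URL earlier in the thread; a directory-wide rule like this would also block any legitimate pages under that path, so scope it carefully):

```
User-agent: *
Disallow: /industry-insight/article/
```

Note that robots.txt only stops future crawling; it doesn't remove pages already indexed, which is why it is paired with a removal request in Webmaster Tools.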