Duplicate Content Issue from using filters on a directory listing site
-
I have a directory listing site of harpists, and I'm seeing a lot of issues that say:
Content that is identical (or nearly identical) to content on other pages of your site forces your pages to unnecessarily compete with each other for rankings.
Because this is a directory listing site, the content is quite generic. The main issue appears to come from the page's filtering functionality: the spider seems to be picking up each different filter choice as a new page. If you have a look at this link you will see what I mean.
Visitors can filter the list of songs played by a harpist by changing the dropdowns, but for some reason the filter arguments are being picked up as separate URLs. Do you have any good approaches to solving this issue?
A similar issue comes from the video pages for each harpist. They are being flagged as identical content, as there are currently no videos on those pages:
http://www.find-a-harpist.co.uk/user/39/videos
http://www.find-a-harpist.co.uk/user/37/videos
Do you have any suggestions?
Many thanks for taking the time to read this and respond.
-
Thank you both for your responses. Yes, the site is relatively new. I shall implement your suggestions and hopefully they will do the trick.
-
Is your site relatively new? I currently see no pages at all in the Google index, which makes the duplicate content issue a bit moot (at least in the short term).

The search filters and pagination are somewhat different issues. You could META NOINDEX any pages with the filter parameters active, or rel=canonical them to the unfiltered version (as @Steve25 said). Since no pages are indexed yet, you could also just nofollow the filter links ("Title", etc.), which should help prevent those filtered versions from getting crawled.

Pagination (pages 2+ of search results) is a trickier issue, but it might be best to just NOINDEX, FOLLOW those. You could also tell Google in Google Webmaster Tools that the page= parameter is for pagination (I've found that hit-or-miss, but it's easy relative to other solutions).

For the empty profiles, it really depends on the scope. If you have a lot of them, I'd ideally code them to carry META NOINDEX while they're empty, and lift the NOINDEX once they have content posted. You'd have to do that dynamically, but it shouldn't be too tricky. That way, Google would only see new pages once they have some content in place.
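The dynamic NOINDEX idea above could be sketched roughly like this on the server side (a minimal illustration only; the function name and the video-count parameter are hypothetical, not from the site's actual code):

```python
def robots_meta_tag(video_count: int) -> str:
    """Return the robots meta tag for a harpist's videos page.

    While the page is empty it is kept out of the index (FOLLOW is kept
    so link equity still flows); once videos exist, the tag flips and
    the page becomes indexable.
    """
    if video_count == 0:
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'
```

The template would then emit `robots_meta_tag(len(videos))` in the page head, so the NOINDEX lifts automatically as soon as a harpist uploads a video.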
-
Could you set up canonical tags so that when users select certain filter criteria, the canonical points back to the parent (unfiltered) page?
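In the page head, that combination might look something like this (the URL and query parameters are illustrative, not taken from the actual site):

```html
<!-- On a filtered URL such as /user/39/songs?title=a&order=asc :
     either point the canonical at the unfiltered parent page... -->
<link rel="canonical" href="http://www.find-a-harpist.co.uk/user/39/songs">

<!-- ...or keep the filtered version out of the index entirely,
     while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```

Either approach tells Google which version of the page should rank; you generally wouldn't need both on the same URL.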