How to Fix the Errors with Duplicate Title or Content?
-
The latest Crawl Diagnostic has found 160 Errors on my site.
My error is that the same content or title is used on two different pages:
both my root domain (han-mark.com) and the www subdomain. What does it matter (with or without www)?
How serious is that error?
Do I need to fix all the errors (and hundreds of warnings too)?
What's the best practice?
Is there any guide on how to do it,
or tools for doing it the fast way?
Viggo Joergensen
-
Hi Viggo,
What you're describing here is a common issue, and it's also easy to fix.
If you can access and edit the .htaccess file on your server, you only need to add a simple rule that redirects all traffic to either the www or non-www version of your website via a 301 redirect.
If you want to force the use of "www" in all cases, your rule should look like this:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteCond %{HTTP_HOST} !^$
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
You can refer here for more information.
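If you'd rather standardize on the non-www (root domain) version instead, the same approach works in reverse. A minimal sketch, assuming Apache with mod_rewrite enabled and example.com standing in for your own domain:

```apache
RewriteEngine On
# Redirect any request arriving on the www hostname to the bare domain
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]
```

Either direction is fine for SEO; what matters is that you pick one version and redirect the other to it consistently.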
-
Hi Viggo,
The canonical tag was created to resolve duplicate content caused by multiple paths to the same content. In the "eyes" of search engines, the www and non-www versions of a site are two different paths to the same content, which is what causes the duplication.
It is good to resolve this issue by redirecting to one of these versions.
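As a complement to the redirect, a rel=canonical link in the head of each page tells search engines which version to index. A minimal sketch, with example.com standing in for your own domain:

```html
<!-- Placed in the <head> of every page, on both the www and non-www versions -->
<link rel="canonical" href="http://www.example.com/some-page/" />
```

The href should always point at the one version you've chosen as canonical, matching the target of your 301 redirect.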
I hope that helped,
Istvan
Related Questions
-
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million page) content silo that is the basis for our long-tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top-level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent of the site structure/taxonomy is to make it easier for bots/crawlers to get deeper into our content silos. We've built the pages for humans but included link structure/taxonomy to assist crawlers. So here's my question on best practices: how to handle categories with 1,000+ pages of pagination. In our most popular product categories, there might be hundreds of thousands of products in one category. My top-level hub page for a category looks like www.mysite/categoryA, and the page shows 50 products with pagination from 1-1000+. Currently we're using rel=next for pagination, and pages like www.mysite/categoryA?page=6 reference themselves as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and the SEOmoz campaign crawl to sample (the site takes a week+ to fully crawl), and with each of these tools it looks like crawlers have gotten a bit bogged down in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to reach multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories. I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much, so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into the pagination of one category versus getting to more top-level categories? Thanks!
Moz Pro | DrewProZ
My "tag" pages are showing up as duplicate content. Is this harmful?
Hi. I ran a Moz site crawl. I see "Yes" under "Duplicate Page Content" for each of my tag pages. Is this harmful? If so, how do I fix it? This is a WordPress site. Tags are used in both the blog and ecommerce sections of the site; ecommerce is a very small portion. Thank you.
Moz Pro | dlmilli
Moz says I am missing titles and meta tags and have duplicate content
I just redesigned my website and traffic has suddenly dropped. Moz says I am missing titles and meta descriptions and have a lot of duplicate content. My site is http://skigenie.com and is full of unique, hand-written content. Are there any WordPress plugins that will add titles etc. to my pages (some of them are custom)? Any help would be much appreciated!
Moz Pro | flexy
"Does not respond to web requests" error
When trying to set up a new campaign I get the following message: "Roger has detected a problem: We have detected that the domain www.chicagofinancialadvisers.com does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information." Can someone please tell me what I need to do on my site to make this work? I haven't seen this before and have done many other campaigns. Thanks a lot!
Moz Pro | bshanahan
DotNetNuke generating long URLs showing up as crawl errors
Since early July, a DotNetNuke site has been generating long URLs that show up in campaigns as crawl errors: long URL, duplicate content, duplicate page title. URL: http://www.wakefieldpetvet.com/Home/tabid/223/ctl/SendPassword/Default.aspx?returnurl=%2F Is this a problem with DNN or a nuance to be ignored? Can it be controlled? Google Webmaster Tools shows no crawl errors like this.
Moz Pro | EricSchmidt
Duplicate Page Content and Title - Miva - How to fix?
Hi, I'm new to SEOmoz and just diving into it. I'm feeling a bit overwhelmed. I use Miva Merchant as my storefront interface. SEOmoz is returning a bunch of duplicate page content and duplicate page title errors, and I can't figure out what to do about them. It seems it may have something to do with Miva short links. I click on the duplicate URLs in SEOmoz and they bring me to a dead page. I can't figure out where they're coming from. I know without seeing the actual information it'll probably be tough to help me, but any suggestions would be appreciated. I try to fix them and, after about three hours of getting nowhere, it becomes too frustrating. Thanks!
Gary
Moz Pro | musicforkids
Crawl reports, date/time error found
Hello! I need to filter out the crawl errors found before a certain date/time, but the date and time shown for every error are the same. It looks more like the time the report was generated than the time each error was discovered. Fix?
Moz Pro | AJPro
Why are these pages considered duplicate page content?
A recent crawl diagnostic for a client's website reported several new duplicate page content errors. The problem is, I'm not sure where the errors come from, since the content on each page is different. Here are the pages that SEOmoz reported as having duplicate page content errors: http://www.imaginet.com.ph/wireless-internet-service-providers-term http://www.imaginet.com.ph/antivirus-term http://www.imaginet.com.ph/berkeley-internet-name-domain http://www.imaginet.com.ph/customer-premises-equipment-term The only similarity I see is the headline, which says "Glossary Terms Used in this Site" - I hope that one sentence is the reason for the error. Any input is appreciated, as I want to find the best solution for my client's website errors. Thanks!
Moz Pro | TheNorthernOffice79