Duplicate page titles in SEOMoz
-
My on-page reports are showing a good number of duplicate title tags, but they are all caused by a URL tracking parameter that tells us which link the visitor clicked on. For example, http://www.example.com/example-product.htm?ref=navside and http://www.example.com/example-product.htm are the same page, but are treated as two different URLs in SEOmoz. This is inflating the number of duplicate page titles in my reports.
This has not been a problem with Google, but SEOmoz treats the URLs as distinct and it's muddying my data. Is there a way to specify this as a URL parameter in the Moz software?
Or does anybody have another suggestion? Should I specify this in GWT and BWT?
-
The best way to handle this, for all crawlers including Google, Yahoo and Moz, is to make sure you have proper canonical tags on those URLs that point to the non-parameterized URL.
So http://www.example.com/example-product.htm?ref=navside will have a canonical that points to http://www.example.com/example-product.htm
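As a concrete sketch, using the example URLs from the question, the tag in the parameterized page's `<head>` would look something like this:

```html
<!-- Served in the <head> of http://www.example.com/example-product.htm?ref=navside
     (and of every other ?ref= variant of this page) -->
<link rel="canonical" href="http://www.example.com/example-product.htm" />
```

Note the `href` should be the full absolute URL of the clean, parameter-free page; the same tag can also safely appear on the canonical page itself, pointing at its own URL.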
-
My understanding is that the Moz crawler should be checking the canonical, in which case it will ignore duplicate content and title tag issues. If you find this is not the case with your crawl, please let our help team know at help@moz.com.
-
No, Moz's tool doesn't check the canonical, so it won't ignore these duplicates.
-
Do you think that setting a canonical url tag might help fix this?
-
Hi Robert,
Yes, I'm sorry, but you're overlooking something. Ignoring parameters is something you should do with regard to SEO; it won't stop Google Analytics from tracking these parameters.
-
I'm not sure why I'd want to ask Google to ignore those parameters... we're explicitly adding the ones that they suggested we use from here:
The issue I'm having is that Moz Analytics is showing duplicate pages as an issue to resolve when the only difference between them is that these params exist.
Am I overlooking something here?
-
Hi Robert,
I would only handle this via robots.txt if you want to target Rogerbot specifically. The best way to tell Google to ignore your UTM parameters is via Google Webmaster Tools. Under Crawl > URL Parameters you've got the option to add parameters that don't change any of the content and are solely used for tracking purposes.
-
I'm having a similar issue. Is there an example of how to add this to the robots.txt file to ignore the utm stuff for RogerBot?
Our scenario is that we send out PDFs with links to pages on our site and those links have utm parameters included... and are showing up as duplicate content.
Thanks in advance,
Robby
-
I think this is the answer I was looking for... Yeah, GWT already has a bunch of our parameters added, and hasn't had a problem with this one. It's not showing these pages as duplicate like SEOMoz does.
Thanks guys!
-
Hi,
What you might want to do to get rid of these issues within SEOmoz is add the parameters to your robots.txt file and specifically target SEOmoz's user agent, Rogerbot. This way SEOmoz won't crawl the links with this parameter, and therefore also won't warn you about these duplicate titles.
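A minimal sketch of what that robots.txt block could look like, using the ref= parameter from the original question plus standard utm_ parameters, and assuming Rogerbot honors the * wildcard in Disallow rules as the major crawlers do:

```text
# Apply only to Moz's crawler; Google etc. are unaffected
User-agent: rogerbot
# Block URLs where a tracking parameter is the first query parameter
Disallow: /*?ref=
Disallow: /*?utm_
# Block URLs where a tracking parameter appears later in the query string
Disallow: /*&ref=
Disallow: /*&utm_
```

The exact parameter names here are just examples; adjust them to whatever tracking parameters your site actually appends.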
Hope this helps!
By the way, as James already mentioned, I would also recommend configuring these parameters within Google Webmaster Tools.
-
You could set it up in GWT, but it sounds like you are using utm tags on internal links so you can see which physical links on a page are driving clicks. If that's the case, a cleaner solution is to upgrade your Google Analytics code for Enhanced Link Attribution. I'm assuming you are using GA; if so, this will let you see which links are driving which clicks without creating tons of duplicate page titles in SEOmoz.
See link: http://support.google.com/analytics/bin/answer.py?hl=en&answer=2558867
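For reference, enabling Enhanced Link Attribution in the classic ga.js tracking code looked roughly like this (the exact snippet depends on which version of the GA code you're running, and UA-XXXXXX-1 is a placeholder for your own property ID):

```html
<script type="text/javascript">
  var _gaq = _gaq || [];
  // Load the Enhanced Link Attribution plugin before tracking calls
  var pluginUrl = '//www.google-analytics.com/plugins/ga/inpage_linkid.js';
  _gaq.push(['_require', 'inpage_linkid', pluginUrl]);
  _gaq.push(['_setAccount', 'UA-XXXXXX-1']); // placeholder property ID
  _gaq.push(['_trackPageview']);
</script>
```

You also need to turn on "Use enhanced link attribution" in the property's settings in the GA admin for the In-Page Analytics report to use it.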
Let me know if you have questions,
JS
Related Questions
-
Pages with Temporary Redirect (CTA)
I had Moz crawl my site and I had 5 CTA pages with a temporary redirect. How do I correct the issue? Thank you! -Nick
Moz Pro | X2Metrology1
On-Page Report Card B grade because it's a PPC landing page
I have a PPC landing page that's getting a B grade on the On-Page Report Card. Can I just ignore that? It says it's a "Critical Factor". Thanks, Mike
Crawl status: Status Code: 200; meta-robots: noindex,nofollowall; meta-refresh: None; X-Robots: None
Explanation: Pages that can't be crawled or indexed have no opportunity to rank in the results. Before tweaking keyword targeting or leveraging other optimization techniques, it's essential to make sure this page is accessible.
Recommendation: Ensure the URL returns the HTTP code 200 and is not blocked with robots.txt, meta robots or x-robots protocol (and does not meta refresh to another URL).
Moz Pro | mjrinvent
SEOmoz Rank tracking
I think this tool is a bit off : ) This morning I checked my ranking for a keyword, and every time I recheck it I see my results in a different place. I checked one word 5 times and got three different results. Is Google broken, or is the tracker doing its own thing? This is not the first time I've seen this issue; only today it's really acting up. IMHO it would be best if the tool just said "out of order" 🙂 rather than 3, 10, 6 for one keyword phrase within ten minutes...
Moz Pro | SEODinosaur
Issue in number of pages crawled
I wanted to figure out how our friend RogerBot works. On the first crawl of one of my large sites, the number of pages crawled stopped at 10,000 (due to the restriction on the pro account). However, after a few weeks, the number of pages crawled went down to about 5,500. This number seemed to be a more accurate count of the pages on our site. Today, it seems that RogerBot has completed another crawl and the number is up to 10,000 again. I know there has been no downtime on our site, and the items that we fixed on our site did not reduce or increase the number of pages we had. Just making sure there are no known issues with RogerBot before I look deeper into our site to see if there is an issue. Thanks!
Moz Pro | cchhita
4XX links in SEOmoz
My campaign is showing that I have 7 4XX errors. Is there any way to see where these pages are linked from in order to remove the links?
Moz Pro | MirandaP
Too many pages indexed in SEOMoz
I am running a campaign for a client that has 86 pages via Google, but SEOmoz is up to almost 10K pages. I am really confused. Any ideas?
Moz Pro | LaurieK13
SEOMoz Campaign Tool
I've noticed that when looking at the SEOmoz tool, specifically the On-Page Analysis tool, it is still looking at an old URL. About two months ago I made updates to all of our category page URLs. Previously the old URLs were stuffed with keywords and strange characters, and were really long. When looking at the on-page tool, though, it is still referencing the old URLs for keywords, and I'm wondering why. I figured it's been long enough to recognize the new URLs. Is the pairing of a keyword and a URL saved and just graded on a weekly basis to produce the report? I had expected to see the new URLs by now, which are also represented in the sitemap. Around that same time I also added our Tell-A-Friend page and review pages to our robots.txt file so they wouldn't be crawled, but I still see these pages come up in the errors report. Should this update as well?
Moz Pro | dgmiles
How do I find the most linked to page of a site?
I'm looking at a site for a potential link and am trying to find its most linked-to page. The SEOmoz toolbar tells me the root domain (DA) is linked to by 660 root domains, but the main URL (PA) is linked to by 38 root domains. I used Open Site Explorer and got the same figure of 38 root domains in the result. From the Top Pages tab, I clicked on the second page down, and the SEOmoz toolbar gives me 189 root domains linking to that page (PA). Then I ran a Linkscape report to see what that would say, and I get 146 linking root domains. 1. Is this second page down on OSE the most linked-to page? 2a. Is something off in these numbers? 2b. How come OSE/Linkscape doesn't report the 660 root domains in the DA?
Moz Pro | Motava