Add URL parameters in SEOmoz as per GWT?
-
Hi, this may be a tall order, or maybe it's already in place and I'm behind the times!
Any chance of getting something like this going? Even handier, have SEOmoz import these settings directly from GWT.
The issue comes into play when looking at my duplicate page content reports; I'm guessing that SEOmoz will continue showing these as duplicates even after I have tweaked GWT to read them properly.
Haven't tested this theory as I just started down this road on GWT myself.
Thanks.
-
Thanks Corey, appreciate the answer.
-
Personally, I would no longer look at SEOmoz's duplicate content identification if they did this. One thing that's crucial to consider here is that GWT != Google Search.
Just because GWT adds a suggestive signal for organic search doesn't mean it's going to work (in fact, in my experience, it usually doesn't). A great example of a 100% GWT vs. Google Search mismatch: have you tried "Download Latest Links" in GWT? (http://www.northcutt.com/blog/2012/07/what-download-latest-links-means-for-seo/). I don't know about you, but every time I've tried it, the "new" links range from a month ago to 10 years ago. And they're definitely pages that have long been indexed as per a "site:" Google search. Totally out of sync.
There are many more applications of this that I see each week as well. Pages that have long used rel="canonical" correctly still have hundreds of duplicate pages in the index. Parts of Google Analytics work totally differently from other parts of Google Analytics. It's a kind of 'Microsoft syndrome' that's formed, whereby teams don't mesh quite like the cohesive public-facing brand image would imply. Your best bet is to configure GWT, use rel="canonical", and, while you're at it, work with your application to make sure that no crawler-accessible page ever uses GET variables.
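To illustrate that last point, here is a minimal sketch (a hypothetical helper, not part of any Moz or Google tool) of deriving the parameter-free URL you would want to emit in a rel="canonical" tag, so that GET variables and fragments never produce distinct crawlable addresses:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Strip the query string (GET variables) and fragment from a URL,
    leaving the clean form suitable for a rel="canonical" tag.
    Illustrative only; the URL below is a made-up example."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

print(canonical_url("http://example.com/category/index.php?sort=price&page=2"))
# -> http://example.com/category/index.php
```

The same normalization applied server-side (redirecting any parameterized request to its clean form) keeps crawlers from ever seeing the GET-variable versions in the first place.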
Related Questions
-
How do I treat URLs with bookmarks when migrating a site?
I'm migrating an old website into a new one and have several pages that have bookmarks on them. Do I need to redirect those, or how should they be treated? For example, both https://www.tnscanada.ca/our-expertise.html and https://www.tnscanada.ca/our-expertise.html#auto resolve.
Intermediate & Advanced SEO | NatalieB_Kantar
Long product URLs on an ecommerce store
Hi, we have a site in the men's fashion space with long product URLs that look like this: https://www.domain.com/catalog/product/view/id/13700/s/the-mate-tee-grey-marle-upm618g/category/120/ The site is on Magento. Are there any serious SEO negatives to having such a long product URL and including irrelevant information like product/view/id/13700/s/ and /category/120/ in the URL? Or are the benefits of changing them to friendlier product URLs like https://www.domain.com/the-mate-tee-grey-marle-upm/ minimal? Cheers.
Intermediate & Advanced SEO | wozniak65
How much is the effect of redirecting an old URL to another URL under a new domain?
Example:
http://www.olddomain.com/buy/product-type/region/city/area
http://www.newdomain.com/product-type-for-sale/city/area
Thanks in advance!
Intermediate & Advanced SEO | esiow2013
Tracking URLs and Redirects
We have a client with many archived newsletter links that contain tracking code at the end of the URL. These old URLs point to pages that don't exist anymore. Is there a way to set up permanent redirects for these old URLs with tracking code? We have tried and it doesn't seem to work. Thank you!
Intermediate & Advanced SEO | BopDesign
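The usual reason redirects "don't work" in this situation is that the rule matches the full URL including the tracking query string. A minimal sketch (hypothetical paths and redirect map, not from this thread) of matching on the path alone so trailing tracking parameters can't break the lookup:

```python
from urllib.parse import urlsplit

# Hypothetical redirect map -- the paths and targets are illustrative only.
REDIRECTS = {
    "/newsletters/2011-06.html": "/archive/newsletters/",
}

def resolve_redirect(request_url):
    """Look up a 301 target by path alone, ignoring the query string,
    so appended tracking code (?utm_source=..., etc.) can't prevent a match."""
    path = urlsplit(request_url).path
    return REDIRECTS.get(path)

print(resolve_redirect("/newsletters/2011-06.html?utm_source=june"))
# -> /archive/newsletters/
```

Most web servers behave this way by default (e.g. Apache's Redirect matches on path), so a rule that fails usually means it was written against the full query-string URL.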
Penguin Penalty on a Duplicate URL
Hi, I have noticed a distinct drop in traffic to a page on my web site which occurred around April of last year. Doing some analysis of links pointing to this page, I found that most were sitewide and exact-match commercial anchor text. I think the obvious conclusion is that I got slapped by Penguin, although I didn't receive a warning in Webmaster Tools.

The page in question was ranking highly for our targeted terms, and the URL was structured like this: companyname.com/category/index.php. The same page is still ranking for some of those terms, but under the duplicate URL: companyname.com/category/. The sitewide problem is associated with links going to the index.php page; there aren't many links pointing to the non-index.php page.

My question is this: if we were to 301 redirect index.php to the non-php page, would this be detrimental to the rankings we are getting today? I.e., would we simply redirect the Penguin effect to the non-php page? If anybody has come across a similar problem or has any advice, it would be greatly appreciated. Thanks
Intermediate & Advanced SEO | sicseo
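The 301 being asked about above is a simple mechanical mapping. A sketch (illustrative helper, not advice on whether the redirect transfers the Penguin effect) of computing the directory-style Location header target for any index.php request:

```python
def redirect_target(path):
    """If the request path ends in '/index.php', return the directory
    URL it should 301 to; otherwise None (no redirect needed).
    Hypothetical helper for illustration only."""
    suffix = "/index.php"
    if path.endswith(suffix):
        return path[: -len(suffix)] + "/"
    return None

print(redirect_target("/category/index.php"))  # -> /category/
print(redirect_target("/category/"))           # -> None
```

Pairing this with rel="canonical" on the directory URL keeps the two addresses from competing in the index regardless of how the redirect's link signals are treated.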
New URL: Which is best?
Which is best:
www.domainname.com/category-subcategory
www.domainname.com/subcategory-category
www.domainname.com/category/subcategory
www.domain.com/subcategory/category
I am going to have 12 different subcategories under the category.
Intermediate & Advanced SEO | Boodreaux
Minimum word count per page?
I'm seeding a new site with hundreds of (high-quality) posts, but since I am paying per word written, I'm wondering if anybody in the community has any anecdotal evidence as to how many words of content a page needs to be counted the same as a 700+ word post, for example? I know there are always examples of pages ranking well with, for instance, 50 words or less of content, but does anyone have strong evidence on what the minimum count should be, or has anyone read anything very informative on this issue? Thanks a lot in advance!
Intermediate & Advanced SEO | corp0803
How was cdn.seomoz.org configured?
The SEOmoz CDN appears to have a "pull zone" that is set to the root of the domain, such that any static file can be addressed from either subdomain:
http://www.seomoz.org/q/moz_nav_assets/images/logo.png
http://cdn.seomoz.org/q/moz_nav_assets/images/logo.png

The risk of this configuration is that web pages (not just images/CSS/JS) also get cached and served by the CDN. I won't put the URL here for fear of Google indexing it, but if you replace the 'www' in the URL below with 'cdn', you'll see a cached copy of the original:
http://www.seomoz.org/ugc/the-greatest-attribution-ever-graphed

The worst-case scenario is that the homepage gets indexed. But this doesn't happen here:
http://cdn.seomoz.org/
That URL issues a 301 redirect back to the canonical www subdomain. As it should.

Here's my question: how was that done? Because maxcdn.com can't do it. If you set a "pull zone" to your entire domain, they'll cache your homepage and everything else. Googlebot has a field day with that; it will reindex your entire site off the CDN.

Maybe the SEOmoz CDN provider (CloudFront) allows specific URLs to be blocked? Or do you detect the CloudFront IPs and serve them a 301 (which they'd proxy out to anyone requesting cdn.seomoz.org)?

One solution is to create a pull zone that points to a folder, like example.com/images... but this doesn't help a complex site that has cacheable content in multiple places (do you Wordpress users really store ALL your static content under /wp-content/ ?). Or, as suggested above, dynamically detect requests from the CDN's proxy servers, and give them a 301 for any HTML-page request. This gets complex quickly, and is both prone to breakage and very difficult to regression-test. Properly retrofitting a complex site to use a CDN, without creating a half-dozen new CDN subdomains, does not appear to be easy.
Intermediate & Advanced SEO | mcglynn