How do I solve canonicalization issues?
-
Today I was searching for an article that might help me with canonicalization issues and found a very interesting one on SEOmoz.
I am facing de-indexing of pages and a drop in organic search engine visits.
I did thorough research and applied it very carefully, but my indexed pages and visits are still going down.
I have applied the canonical tag to the following pages.
Narrow by search:
http://www.vistastores.com/outdoor-umbrellas?manufacturer=California+Umbrella
Sorting:
http://www.vistastores.com/outdoor-umbrellas?dir=desc&order=position
Pagination:
http://www.vistastores.com/outdoor-umbrellas?p=2
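For reference, a minimal sketch of what that markup looks like. Each of the filtered, sorted, and paginated variants above would carry, in its head, a canonical link pointing back at the one clean category URL:

```html
<!-- In the <head> of the variant pages, e.g.
     /outdoor-umbrellas?manufacturer=California+Umbrella
     /outdoor-umbrellas?dir=desc&order=position
     /outdoor-umbrellas?p=2 -->
<link rel="canonical" href="http://www.vistastores.com/outdoor-umbrellas" />
```

With this in place, the parameterised variants are expected to drop out of the index in favour of the single canonical URL.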
How can I improve my performance?
-
Good to know that the drop in visits may be short term. I have set a canonical for all dynamic pages, and I did extensive research before implementing it, including studying big eCommerce websites. But my visits have been declining since January 8, 2012.
-
Hey,
I agree with Alan - you've set it up correctly from what I looked at.
What else have you done recently to the site? It might just be a natural dip in traffic - have you compared year-on-year to see if you always get a dip at the same time?
What do you mean by visits dropping, also? Are you talking about a few percent here, or a 50% drop?
You will naturally get fewer pages indexed with the canonicals in place - that's the idea.
-
Using canonical tags will reduce the number of pages indexed, as you are telling search engines to index the canonical URL and not the copies.
The drop in visits may be short term, or it may have some other cause.
Related Questions
-
Overdynamic Pages - How to Solve it?
Hi everyone, I'm running a classified real estate ads site, where people can publish the apartment or house they want to sell, so we use multiple filters to help people find what they want. Lately we added multiple filters to the URL to make the search more precise, things like:
Prices (priceAmount=###)
Bedrooms (BedroomsNumber=2)
Bathrooms (BathroomsNumber=3)
Total area (totalArea=1_50)
Services (Elevator, CommonAreas, security)
Among other filters, so you see the picture. All these filters are in the URL so that people can share their search on social media. That creates two problems for the Moz crawl: overdynamic URLs and too-long URLs. Now, what would be a good solution for these two problems - would a canonical to the original page before the "?" be OK? Example:
http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&services=gas&commonAreas=solarium
The problem I have with this solution is that I also have a pagination parameter (page=2), and I'm using prev and next tags. If I use such a canonical, will it break the prev and next tags?
http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&services=gas&commonAreas=solarium&page=2
I'm also wondering whether adding a noindex on pages with parameters could be an option. Thanks a lot, I'm trying to address these issues.
Technical SEO | | JoaoCJ
-
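For illustration, a sketch (with a hypothetical page number) of the head markup the question describes on page 2 of a filtered search - a canonical pointing at the parameter-free URL alongside the rel="prev"/"next" pagination tags, which is exactly the combination being asked about:

```html
<!-- Hypothetical <head> of page 2 of a filtered search -->
<link rel="canonical" href="http://urbania.pe/buscar/venta-de-propiedades" />
<link rel="prev" href="http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&amp;services=gas&amp;commonAreas=solarium" />
<link rel="next" href="http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&amp;services=gas&amp;commonAreas=solarium&amp;page=3" />
```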
Little confused regarding robots.txt
Hi there Mozzers! As a newbie, I have a question: what could happen if I write my robots.txt file like this?
User-agent: *
Allow: /
Disallow: /abc-1/
Disallow: /bcd/
Disallow: /agd1/
User-agent: *
Disallow: /
Hope to hear from you...
Technical SEO | | DenorL0
-
Fetch as Google issues
Hi all, recently - well, a couple of months back - I finally got around to switching our sites over to HTTPS. In terms of rankings all looks fine and we have not moved about much, only the usual fluctuations of a place or two on a daily basis in a competitive niche. All links have been updated, redirects are in place - the usual HTTPS domain migration stuff.
I am, however, troubled by one thing: I cannot for love nor money get Google to fetch my site in GSC. No matter what I have tried, it continues to display "Temporarily unreachable". I have checked the robots.txt, and it is on a new https:// profile in GSC. Has anyone got a clue, as I am stumped! Have I simply become blinded by looking too much? Site in question: caravanguard co uk. Cheers, and looking forward to your comments... Tim
Technical SEO | | TimHolmes0
-
SEO Issues
Hi, we have created a moving cost calculator tool, and other moving companies can add this tool to their websites. This is the code:
<iframe src="http://www.enakliyat.com.tr/fiyat-hesapla.aspx" height="554" width="400" frameborder="0" scrolling="no" style="border:none;"></iframe>
When another moving company adds this code to their website, the tool also works on their site and sends referral traffic to ours. Is it right to use this method?
http://www.enakliyat.com.tr/evden-eve-nakliyat-fiyatlari-hesaplama/ - here is the tool
Technical SEO | | iskq0
-
Duplicate Content Issue
My issue with duplicate content is this: there are two versions of my website showing up.
http://www.example.com/
http://example.com/
What are the best practices for fixing this? Thanks!
Technical SEO | | OOMDODigital0
-
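One common fix for the www/non-www split is a site-wide 301 redirect from one hostname to the other. A sketch, assuming an Apache server with mod_rewrite enabled (the question doesn't say what the site actually runs on), redirecting the bare domain to the www version:

```apache
# .htaccess: 301-redirect http://example.com/... to http://www.example.com/...
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

Adding a rel="canonical" tag pointing at the preferred version, and setting the preferred domain in Google Webmaster Tools, are common complements to the redirect.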
Development Website Duplicate Content Issue
Hi, we launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally built the website on a development domain (http://dev.rollerbannerscheap.co.uk) which was active for around 6-8 months before we migrated dev --> live (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again).
In late January 2013 we changed the robots.txt file to allow search engines to index the live website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow search engines to index it. This obviously caused a duplicate content issue, as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file.
Most of the pages from the dev site were then de-indexed from Google, apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google.
I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But the dev site is still being found in Google wherever the live site should be found. When I do find the dev site in Google, it displays this:
Roller Banners Cheap » admin dev.rollerbannerscheap.co.uk/ A description for this result is not available because of this site's robots.txt - learn more.
This is really affecting our client's SEO plan, and we can't seem to remove the dev site or rank the live site in Google. In GWT I have tried to remove the subdomain: when I visit "Remove URLs" and enter dev.rollerbannerscheap.co.uk, it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a subdomain, not a page. Can anyone help please?
Technical SEO | | SO_UK0
-
Internal file extension canonicalization
OK, no doubt this is straightforward, however I'm finding it hard to find a simple answer: our website's internal pages have the extension .html. Trying to navigate to an internal URL without the .html extension results in a 404. The question is: should a 301 be used to redirect to the extension-less URL to future-proof? And should internal links point to the extension-less URL for the same reason? Hopefully that makes sense, and apologies for what I believe has a straightforward answer.
Technical SEO | | jg1000
-
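A sketch of one way to set this up, assuming an Apache server with mod_rewrite (the question doesn't say what the site runs on): 301 requests for the .html URLs to the extension-less form, then internally map the extension-less URL back onto the .html file so it no longer returns a 404:

```apache
RewriteEngine On
# Externally 301 /page.html to /page (matches the original browser request
# only, so the internal rewrite below does not cause a redirect loop)
RewriteCond %{THE_REQUEST} \s/([^\s]+)\.html[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]
# Internally serve /page from page.html, without redirecting
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]
```

Internal links would then point at the extension-less URLs, so every page has a single canonical form.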
What's the best way to solve this sites duplicate content issues?
Hi, the site is www.expressgolf.co.uk, an e-commerce website with lots of categories and brands. I'm trying to achieve one single unique URL for each category / brand page, to avoid duplicate content and to get the correct URLs indexed. Currently it looks like this...
Main URL:
http://www.expressgolf.co.uk/shop/clothing/galvin-green
Different versions:
http://www.expressgolf.co.uk/shop/clothing/galvin-green/
http://www.expressgolf.co.uk/shop/clothing/galvin-green/1
http://www.expressgolf.co.uk/shop/clothing/galvin-green/2
http://www.expressgolf.co.uk/shop/clothing/galvin-green/3
http://www.expressgolf.co.uk/shop/clothing/galvin-green/4
http://www.expressgolf.co.uk/shop/clothing/galvin-green/all
http://www.expressgolf.co.uk/shop/clothing/galvin-green/1/
http://www.expressgolf.co.uk/shop/clothing/galvin-green/2/
http://www.expressgolf.co.uk/shop/clothing/galvin-green/3/
http://www.expressgolf.co.uk/shop/clothing/galvin-green/4/
http://www.expressgolf.co.uk/shop/clothing/galvin-green/all/
Firstly, what is the best course of action to make all versions point to the main URL and keep them from being indexed - a canonical tag, NOINDEX, or blocking them in robots.txt?
Secondly, do I just need to 301 the trailing-slash URLs to the non-trailing-slash URLs?
I'm sure this question has been answered before, but I was having trouble coming to a solution for this one site. Cheers, Paul
Technical SEO | | paulmalin0