Should I implement pagination (rel=next, rel=prev) if I have duplicate meta tags?
-
Hi,
I just want to ask if it is necessary to implement pagination (rel=next, rel=prev) on my category pages, because Google Webmaster Tools is telling me that these pages have similar meta titles and meta descriptions. For example:
page 1: http://www.site.com/iphone-resellers/1
meta title: Search for iphone resellers in US
page 2: http://www.site.com/iphone-resellers/2
meta title: Search for iphone resellers in US
page 3: http://www.site.com/iphone-resellers/3
meta title: Search for iphone resellers in US
Thanks in advance.
-
I agree with you one hundred percent, Dr. Pete. Thanks for your detailed insight. Always helps.
-
This has been a constantly changing area of SEO over the past couple of years, but my general feeling is that the rel=next/prev tags are working pretty well. They're low risk, and they can help you reduce duplication in Google's eyes without de-indexing the pages (page 3 could still rank, for example).
The biggest downside of the tags is that they're a bit tricky to implement, especially if you have search filters and sorts (in which case the proper tags get pretty complicated fast). Another option (as Nakul mentioned) is to NOINDEX pages 2+, which is simpler but would knock those extra pages out of ranking contention. That's a route I'd go only if you seemed to be getting hit hard for thin content.
The only area where I'll disagree slightly with Nakul is that handling pagination for SEO isn't always one of those areas where usability considerations help much. From a core architecture and internal search perspective, give your users a good experience, absolutely. From the standpoint of how to index those search pages, though, it's almost all about how Google views near-duplicate content. This is an area of SEO that is getting more technical and really comes down to the quirks of how Google indexes content.
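To make the rel=next/prev mechanics concrete, here's a minimal sketch of how the tags could be generated server-side. The URL pattern is taken from the question; the function name and the use of Python are just illustrative, and the same logic applies in any templating layer:

```python
def pagination_link_tags(base_url, page, last_page):
    # Emit rel=prev/next <link> tags for one paginated category page.
    # Pages are addressed as base_url/1, base_url/2, ... as in the question.
    tags = []
    if page > 1:  # every page except the first points back
        tags.append('<link rel="prev" href="{}/{}">'.format(base_url, page - 1))
    if page < last_page:  # every page except the last points forward
        tags.append('<link rel="next" href="{}/{}">'.format(base_url, page + 1))
    return tags
```

Note that page 1 gets only a rel=next tag and the last page only a rel=prev tag; missing those edge cases is one of the more common implementation mistakes.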
-
Hi Nakul,
I don't have a view-all page. Both suggestions are great, but they have disadvantages, based on what I read from Google, and it really depends on the purpose. And a big YES, that's what I am thinking, since user experience is more important.
Thanks a lot!
-
I see you have two responses, from SWD and SanketPatel. They are different strategies, and you need to decide what you want to do as a business. Here's why:
If you adopt SWD's solution, you could technically get rid of the problem by telling Google not to index page 2, page 3, and so on, and to index only your page 1. My questions would be: do you have a View All page? Do you want search engines to index and rank each of your paginated pages? Do those pages have a unique collection of products, and does it help the user to land directly on page 2, or would you rather have them always land on page 1?
SanketPatel's solution definitely gets rid of the problem from a GWT perspective; however, the bigger question is what you are trying to achieve and what your users would prefer.
Instead of looking at it from what's right in GWT or SEO, find what's right for your user first and then implement that in an SEO-friendly way.
I hope that helps and makes sense.
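If you do go the noindex route for pages 2+, the logic is simple; a sketch (the helper name is hypothetical, and "follow" is kept so the links on deeper pages remain crawlable):

```python
def robots_meta(page):
    # Page 1 stays indexable; pages 2+ are kept out of the index
    # but remain crawlable ("follow") so their outbound links still count.
    if page == 1:
        return '<meta name="robots" content="index, follow">'
    return '<meta name="robots" content="noindex, follow">'
```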
-
It's not necessary, but it's advisable to implement it to get rid of the duplication errors. If you don't want to do that, you can change the title on page 2 to something like "Search for iphone resellers in US - Page 2", and do the same for the 3rd, 4th... pages.
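That title pattern could be sketched like this (the helper name is made up; the base title is from the question):

```python
def page_title(base_title, page):
    # Page 1 keeps the original title; deeper pages get a unique suffix
    # so GWT no longer reports them as duplicate titles.
    if page == 1:
        return base_title
    return "{} - Page {}".format(base_title, page)
```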
-
I think it would be better if you implement pagination and set the canonical URL of ../1, ../2, ../3 to http://www.site.com/iphone-resellers. Google always prefers a good hierarchy in every website.
I hope it can help you.
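For what it's worth, that suggestion would amount to emitting a tag like the following on every paginated URL (a sketch only; be aware that Google treats a canonical across genuinely different pages as a hint, not a directive):

```python
def canonical_tag(base_url):
    # Per this suggestion: every paginated URL (/1, /2, /3, ...) declares
    # the un-paginated category URL as its canonical.
    return '<link rel="canonical" href="{}">'.format(base_url)
```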