Rel="prev" / "next"
-
Hi guys,
The tech department implemented rel="prev" and rel="next" on this website a long time ago.
We also added a canonical tag pointing to each page's own URL. The situation we're seeing is this: a lot of paginated pages are still visible in the SERPs.
Is this just a case of rel="prev" and "next" being directives that Google is free to ignore?
And in this specific case, is Google deciding not to show only the first page in the SERP, but to keep showing most of the paginated pages as well? Please let me know what you think.
Regards,
Tom -
Here's a development which may be of interest to you, Ernst:
Google admitted just the other day that they "haven't supported rel=next/prev for years." https://searchengineland.com/google-apologizes-for-relnext-prev-mixup-314494
"Should you remove the markup? Probably not. Google has communicated this morning in a video hangout that while it may not use rel=next/prev for search, it can still be used by other search engines and by browsers, among other reasons. So while Google may not use it for search indexing, rel=prev/next can still be useful for users. Specifically some browsers might use those annotations for things like prefetching and accessibility purposes."
-
I was looking into this today and happened across this line in Google's Search Console Help documents:
rel="next" and rel="prev" are compatible with rel="canonical" values. You can include both declarations in the same page. For example, a page can contain both of the following HTML tags:
Here's the link to the doc - https://support.google.com/webmasters/answer/1663744?hl=en
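The example tags from that doc didn't survive in the quote above. Based on Google's (since-retired) pagination guidance, page 2 of a series could carry both declarations together, roughly like this (the example.com URLs are placeholders, not from the original doc):

```html
<!-- In the <head> of page 2: a self-referencing canonical plus pagination links -->
<link rel="canonical" href="https://www.example.com/article?page=2" />
<link rel="prev" href="https://www.example.com/article?page=1" />
<link rel="next" href="https://www.example.com/article?page=3" />
```

The key detail is that the canonical is self-referencing (or strips only tracking parameters); it does not point at page 1.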
But I wouldn't combine a canonical pointing somewhere else with the rel="next" annotations.
-
I had never actually considered that. My thought is: no. I'd just leave canonicals off ambiguous URLs like that entirely. I've seen a lot of instances lately where over-zealous sculpting has led to a loss of traffic. For this exact comment / reply it's just my hunch, but I'd remove the tag entirely. There's always risk in adding layers of unrequired complexity, even if it's not immediately obvious.
-
I'm going to second what @effectdigital is outlining here. Google does what they want, and sometimes they index paginated pages on your site. If you have things set up properly and you are still seeing paginated pages when you do a site: search in Google, then you likely need to strengthen your content elsewhere, because Google still sees these paginated URLs as authoritative for your domain.
I have a question for you @effectdigital: do you still self-canonical with rel=prev/next? I knew you wouldn't want to canonical to another URL, but I hadn't really thought about the self-canonical until I read what you said above. Hadn't really thought about that one, haha.
Thanks!
-
Both are directives to Google. All of the "rel=" links are directives, including hreflang, alternate/mobile, AMP, and prev/next.
It's not really necessary to use a canonical tag in addition to any of the other "rel=" family links.
A canonical tag says to Google: "I am not the real version of this page, I am non-canonical. For the canonical version of the page, please follow this canonical tag. Don't index me at all, index the canonical destination URL"
The pagination based prev/next links say to Google: "I am the main version of this page, or one of the other paginated URLs. Did you know, if you follow this link - you can find and index more pages of content if you want to"
So the problem you create by using both, is creating the following dialogue to Google:
1.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
2.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from #buildawall"
*Google goes backwards to non-paginated URL
3.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
4.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from"
*Google goes backwards to non-paginated URL
... etc.
As you can see, it's confusing to tell Google to crawl and index URLs with one tag, then tell them not to with another. All your indexation signals (canonical tags, other rel links, robots tags, HTTP X-Robots-Tag headers, sitemaps, robots.txt files) should tell the SAME, logical story, not different stories that directly contradict each other.
If you point to a web page via any indexation method (rel links, sitemap links), then don't turn around and say "actually, no, I've changed my mind, I don't want this page indexed" by canonicalling that URL elsewhere. If you didn't want a page to be indexed, then don't point to it via other indexation methods in the first place.
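To make the contradiction concrete, this is roughly what the problematic setup looks like in the head of page 2 (hypothetical URLs, sketched for illustration):

```html
<!-- Page 2 tells Google to crawl and index the neighbouring pages... -->
<link rel="prev" href="https://www.example.com/category?page=1" />
<link rel="next" href="https://www.example.com/category?page=3" />
<!-- ...while simultaneously saying "don't index me, index page 1 instead" -->
<link rel="canonical" href="https://www.example.com/category?page=1" />
```

The two signal types pull in opposite directions on the very same URL, which is the back-and-forth dialogue described above.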
A) If you do want those URLs to be indexed by Google:
1) Keep in mind that by using rel=prev/next, Google will know they are pagination URLs and won't weight them very strongly. If, however, Google decides that some paginated content is very useful, it may decide to rank such URLs
2) If you want this, remove the canonical tags and leave rel=prev/next deployment as-is
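A minimal sketch of option (A) on page 2, with placeholder URLs: the canonical is removed and only the pagination links remain.

```html
<!-- Option (A), page 2: pagination links only, no canonical pointing at page 1 -->
<link rel="prev" href="https://www.example.com/category?page=1" />
<link rel="next" href="https://www.example.com/category?page=3" />
```

(Per the Search Console doc quoted earlier, a self-referencing canonical could also coexist with these; what you avoid is a canonical to a different URL.)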
B) If you don't want those URLs to be indexed by Google:
1) Keep in mind this is only a directive; Google can disregard it, but it will be much more effective because you won't be contradicting yourself
2) Remove the rel=prev/next markup completely from paginated URLs. Leave the canonical tag in place and also add a meta noindex tag to paginated URLs
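Option (B) on page 2 would then look something like this. This is a sketch with placeholder URLs, assuming the canonical points at the main (first) page, which seems to be what this reply intends; note the pagination links are gone entirely:

```html
<!-- Option (B), page 2: no rel=prev/next; canonical to the main page plus noindex -->
<link rel="canonical" href="https://www.example.com/category" />
<meta name="robots" content="noindex" />
```

Many deployments use content="noindex, follow" so link equity still flows through the paginated URLs; the reply above only specifies noindex, so the follow variant is an assumption.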
Keep in mind that just because you block Google from indexing the paginated URLs, it doesn't necessarily mean the non-paginated URLs will rank in the same place (with the same power) as the paginated URLs, which will be mostly lost from the rankings. You may get lucky in that area, you may not, depending on the content similarity of both URLs, and on whether Google's perceived reason to rank that URL hinged on a piece of content that exists only in the paginated variant.
My advice? Don't be a control freak and go with option (B); use option (A) instead. Free traffic is free traffic, so don't turn your nose up at it.