Last Part of Breadcrumb Trail: Active or Non-Active?
-
Breadcrumbs have been debated quite a bit in the past. Some claim that the last part of the breadcrumb trail should be non-active to inform users they have reached the end. In other words, do not link the current page to itself.
On the other hand, that portion of the breadcrumb won't be displayed in the SERPs, and if it were, it might lead to a higher CTR.
For example: www.website.com/fans/panasonic-modelnumber
panasonic-modelnumber would not be active as part of the breadcrumb.
What is your take?
-
I would usually say no, but so many sites do seem to link to the same page the user is viewing... If you are doing it for schema markup in Google's SERPs, all of their examples show linking the last part of the breadcrumb; see Google's rich snippets documentation.
If you want to inform users, you could consider adding a plain-text element for the last part, but most sites just leave the last part as the page you are viewing.
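If schema markup is part of the decision, note that Google's BreadcrumbList format accommodates the unlinked option: the final ListItem may omit its item URL, since it represents the current page. A minimal sketch in Python; the URLs and product name are placeholders based on the example above:

```python
import json

# Build BreadcrumbList JSON-LD where the last crumb (the current page)
# carries no "item" URL, mirroring a non-linked last breadcrumb part.
def breadcrumb_jsonld(crumbs):
    """crumbs: list of (name, url) tuples; url may be None for the last crumb."""
    items = []
    for pos, (name, url) in enumerate(crumbs, start=1):
        entry = {"@type": "ListItem", "position": pos, "name": name}
        if url is not None:
            entry["item"] = url  # omitted for the current, unlinked page
        items.append(entry)
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": items,
    }, indent=2)

print(breadcrumb_jsonld([
    ("Home", "https://www.website.com/"),
    ("Fans", "https://www.website.com/fans/"),
    ("Panasonic Model Number", None),  # current page, no link
]))
```

Either way, the visible breadcrumb and the markup don't have to disagree: you can leave the last crumb unlinked on the page and still emit a valid BreadcrumbList.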
-
Hi
I believe breadcrumbs are very valuable and you should not turn them off; apart from the home page, every other page should have a breadcrumb.
There are ways to influence what shows up in the SERPs, but honestly your page title should reflect the relevance of the page to the person searching, so showing people the breadcrumb is not a bad thing in my opinion.
All best,
Tom
Related Questions
-
Google tries to index non-existent language URLs. Why?
Hi, I am working for a SaaS client. He uses two different language versions on two different subdomains.
Technical SEO | TheHecksler
de.domain.com/company for German and en.domain.com for English. Many thousands of URLs have been indexed correctly. But Google Search Console tries to index URLs which never existed before and still don't exist: de.domain.com/en/company, en.domain.com/de/company ... and a thousand more using /en/ or /de/ in between. We never used this variant, and calling these URLs correctly brings up a 404 page (though with the wrong response code; we're fixing that 😉). But Google tries to index these kinds of URLs again and again, and I couldn't find any source for them. No website is using them as outgoing links, etc.
We do see in our log files that a Screaming Frog installation and Moz's Open Site Explorer were trying to access these earlier. My question: how does Google come up with these? Where did they get URLs that (to our knowledge) never existed? Any ideas? Thanks 🙂
Non-standard HTML tags in content
I had coded my website's article content with a non-standard <cnt> tag that surrounded the standard tags containing the article content. The whole text was enclosed in a div that used Schema.org markup to identify the contents of the div as the articleBody. When looking at scraped data for stories in Webmaster Tools, the content of the story was there and identified as the articleBody correctly. It's recently been suggested by someone else that the presence of the non-standard <cnt> tags was actually making the content of the article uncrawlable by Googlebot, effectively rendering the content invisible. I did not believe this to be true, since the content appeared to be correctly indexed in Webmaster Tools, but for the sake of a test I agreed to removing them. In the 6 weeks since they were removed, there have been no changes in impressions or traffic from organic search, which leads me to believe that the removal of the <cnt> tags had no effect, since the content was already being indexed successfully and nothing else has changed. My question is whether an encapsulating non-standard tag as I've described would actually make the content invisible to Googlebot, or if it should not have made any difference so long as the correct Schema.org markup was in place? Thank you.
Technical SEO | dlindsey0
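For what it's worth, the questioner's intuition matches how parsers generally behave: HTML parsers treat unknown elements as ordinary elements rather than discarding their contents (browsers expose them in the DOM as HTMLUnknownElement), so text inside <cnt> stays visible. A quick sketch using Python's stdlib parser, with simplified stand-in markup:

```python
from html.parser import HTMLParser

# Verify that a non-standard element like <cnt> does not hide its
# contents from a generic HTML parser: unknown tags are parsed like any
# other element, and the text nodes inside them are reported normally.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []   # every start tag seen, in document order
        self.text = []   # every non-whitespace text node

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

p = TextCollector()
p.feed('<div itemprop="articleBody"><cnt><p>The article text.</p></cnt></div>')
print(p.tags)  # ['div', 'cnt', 'p'] -- <cnt> is parsed, not skipped
print(p.text)  # ['The article text.']
```

This doesn't prove anything about Googlebot's internals, but it is consistent with the observed result: the articleBody content was extracted correctly with the <cnt> tags in place.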
Is Removing Breadcrumbs Detrimental for SEO?
We have full navigational breadcrumbs on our site for the main menu and the brand menu, i.e. Home > Clothing > Jackets and Brand > Brand Name > Brand Jackets. There's been talk of removing this and doing it like Chico's does, where item pages just have a link at the top to the previous category (i.e. you're on a shirt product page and at the top it says "Back to Tops" instead of listing Home > Clothing > Tops). Is doing something like this detrimental to SEO? From what I've read, breadcrumbs are for user experience, but I just want to be sure.
Technical SEO | AliMac26
Is it worth re-structuring URLs if breadcrumbs are enabled?
Hi Moz Community, I am wondering if anyone can shed some light on this predicament I am facing. For my website, which is the site for a magazine I work for, the current URL structure is www.website.com/article-title. At first glance, I thought we would have to re-structure the URLs to include the category structure, for example www.website.com/category/sub-category/article-title. However, upon deeper investigation, I've seen that we do actually have breadcrumbs enabled, so Google is already indexing and following the category structure that the new URLs would reflect, i.e. www.website.com/category/sub-category/article-title. With this in mind, is it actually worth re-structuring the URLs to include these categories, given that it will take a long time to organise and implement? Obviously, thinking in terms of UX it is a must-do, but I'm just trying to weigh up the pros and cons. Appreciate your help, Leigh
Technical SEO | leighcounsell0
Canonical homepage link uses trailing slash while default homepage uses no trailing slash, will this be an issue?
Hello, first off, let me explain: my client in this case uses BigCommerce, and I don't have access to the backend like in most other situations, so I have to rely on BigCommerce to handle certain issues. I'm curious if there is much of a difference using domain.com/ as the canonical URL while BigCommerce is currently redirecting our domain to domain.com. I've been using domain.com/ consistently for the last 6 months, and since we switched stores on Friday this issue has popped up and has me a bit worried that we'll somehow lose link juice or overall indexing, since this could confuse crawlers. Now some say that the domain URL is fine with or without the trailing slash, as per https://mza.seotoolninja.com/community/q/trailing-slash-and-rel-canonical, but I also wanted to see what you all felt about this. What say you?
Technical SEO | Deacyde
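One piece of reassurance specific to the homepage: under RFC 3986, an empty path on an http(s) URL is equivalent to "/", so domain.com and domain.com/ identify the same resource, and crawlers typically normalize them before comparing (note this holds for the bare host only; /page vs /page/ are genuinely different URLs). A small Python sketch of that normalization, with "domain.com" as a placeholder:

```python
from urllib.parse import urlsplit, urlunsplit

# Normalize the homepage forms "https://domain.com" and
# "https://domain.com/" to a single canonical string: an empty root
# path is rewritten to "/", and the host is lowercased.
def normalize(url):
    parts = urlsplit(url)
    path = parts.path or "/"  # empty root path becomes "/"
    return urlunsplit((parts.scheme, parts.netloc.lower(), path,
                       parts.query, parts.fragment))

print(normalize("https://domain.com"))   # https://domain.com/
print(normalize("https://Domain.com/"))  # https://domain.com/
```

So a canonical of domain.com/ alongside a redirect landing on domain.com is a mismatch in string form only, not in the resource being identified.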
Best Practice - Disavow tool for non-canonical domain, 301 Redirect
The Situation: We submitted to the Disavow tool for a client who (we think) had an algorithmic penalty because of their backlink profile. However, their domain is non-canonical. We only had access to http://clientswebsite.com in Webmaster Tools, so we only submitted the disavow.txt for that domain. Also, we have been recommending (for months - pre disavow) they redirect from http://clientswebsite.com to http://www.clientswebsite.com, but aren't sure how to move forward because of the already submitted disavow for the non-www site. 1.) If we redirect to www. will the submitted disavow transfer or follow the redirect? 2.) If not, can we simply re-submit the disavow for the www. domain before or after we redirect? Any thoughts would be appreciated. Thanks!
Technical SEO | thebenro0
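As a reference point for question 2, the disavow file itself is just plain text (one full URL or one domain: directive per line, with # comments allowed), so re-submitting the same list under the www property once you have verified it is straightforward; the domains below are placeholders:

```text
# disavow.txt - the same file can be re-submitted for the www property
# Disavow everything from a referring domain:
domain:spammy-directory.example
# Or disavow a single page:
http://link-farm.example/bad-links.html
```

Since disavow submissions are made per verified property, re-submitting under the www property after the redirect is the safer assumption rather than relying on the file following the 301.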
Are lots of links from an external site to non-existent pages on my site harmful?
Google Webmaster Tools is reporting a heck of a lot of 404s which are due to an external site linking incorrectly to my site. The site itself has scraped content from elsewhere and has created hundreds of malformed URLs. Since it is unlikely I will have any joy having these links removed by the creator of the site, I'd like to know how much damage this could be doing, and if so, whether there is anything I can do to minimise the impact? Thanks!
Technical SEO | Nobody15569050351140
Www vs non-www and understanding Open Site Explorer
Hi guys, new guy here with some questions regarding the difference between www and non-www. I am helping with a site at the moment, gradually working my way through bits and learning all the time. I was watching one of the SEOmoz videos and it brought my attention back to www vs non-www. I understand that Google will treat these as two separate sites, but wanted to check what the stats are telling me. I was under the impression that www.mydummysite.com was getting most links etc., as this is what I have always used. However, when I used Open Site Explorer it told me something different, as follows:
www.mydummysite.com 32/100 29/100 5 16
mydummysite.com 32/100 29/100 2 1,500
Am I correct in saying that I should be adding a redirect from www.mydummysite.com to mydummysite.com? I am thinking this is telling me that I am potentially missing out on 1,500 links to my site, but it could mean I am missing out on just 16. Either way I guess it's something I should fix, right? Do I just redirect that page, or would all pages beneath it, such as mydummysite.com/news, also need a redirect? Can I use rel=canonical links for this now? Thanks for taking the time to read and reply! 🙂
Technical SEO | wedmonds0
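On the last questions above: redirecting host-wide rather than page-by-page, preserving the path, is the usual approach, and a 301 is generally considered a stronger consolidation signal than rel=canonical for the www vs non-www case. A Python sketch of the rule (hostnames are placeholders; in practice this lives in your server configuration):

```python
# Path-preserving host canonicalization: every URL on the non-canonical
# host 301s to the same path on the canonical host, so deeper pages
# like /news are covered automatically.
CANONICAL_HOST = "mydummysite.com"

def redirect_target(host, path):
    """Return (status, location) for a redirect, or None if none is needed."""
    if host != CANONICAL_HOST:
        return (301, "https://" + CANONICAL_HOST + path)
    return None

print(redirect_target("www.mydummysite.com", "/news"))
# (301, 'https://mydummysite.com/news')
print(redirect_target("mydummysite.com", "/news"))
# None
```

The key design point is that the path is carried over untouched, so one rule covers the homepage and every page beneath it.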