What happens when most of the website visitors end up at a "noindex" login page?
-
Hi all,
As most of our users visit the website to log in, we are planning to deindex the login page. Since they won't be able to find it on the SERP, they will come to the website and log in from there; I just wonder what happens when most of our visitors end up at the homepage instead of landing on the "noindex" login page. Obviously it will increase our bounce rate and exit rate, as those visitors will just leave. Is this going to push us down in the rankings? What other concerns should we check for?
Thanks
-
Hi Linda,
So, if we noindex the most popular page of our website, what difference is it going to make on Google, besides that page not showing up in the SERP?
Actually, I have replied to a related thread on this topic here: https://mza.seotoolninja.com/community/q/log-in-page-ranking-instead-of-homepage-due-to-high-traffic-on-login-page-how-to-avoid
Please suggest.
Thanks
-
Noindex doesn't make the page inaccessible; it just means it won't show up in search results. Visitors can still reach it directly or through your site's navigation, so no one "disappears."
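For what it's worth, "noindex" is just a directive in the page's HTML head (or an X-Robots-Tag HTTP header); the page itself stays live for anyone who visits it. A minimal sketch in Python, using only the standard library, of checking a page's HTML for the directive (the sample markup below is made up for illustration):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives.update(d.strip().lower() for d in content.split(","))

def is_noindexed(html: str) -> bool:
    """True if the page asks search engines not to index it."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex, follow"></head><body>Login</body></html>'
print(is_noindexed(page))  # True: excluded from results, but still served to visitors
```

The key point for this thread: the tag changes what Google shows, not what your users can reach.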
-
Hi Matijn,
Thanks for the response. Our login page is actually the most visited page on our website, and it's outranking our other pages, including the homepage. So I'm wondering about removing it from the search index. Most visitors would still come to our website, and would still end up on the deindexed login page. How does that work? Please advise.
Thanks
-
Maybe I'm not getting it, but why wouldn't you want your users to find your login page? Apparently that's what they're looking for. That's why I'm having a hard time figuring out why you would want to noindex that page.
Related Questions
-
Meta robots on every page rather than using robots.txt for blocking crawlers? How will pages get indexed if we block crawlers?
Hi all, The suggestion to use the meta robots tag rather than a robots.txt file is to make sure pages do not get indexed if their hyperlinks are available anywhere on the internet. I don't understand how the pages can be indexed if the entire site is blocked. Even if links to the pages are available, will Google really index them? One of our sites has been blocked via robots.txt, and although internal links to it have been available on the internet for years, those pages have not been indexed. So technically a robots.txt file is quite enough, right? Please clarify and guide me if I'm wrong. Thanks
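The distinction behind this question is that robots.txt only blocks crawling, not indexing: Google can still index a blocked URL (URL/title only) if other sites link to it, precisely because it is never allowed to fetch the page and see a meta noindex tag. A quick illustration using Python's standard-library robots.txt parser (the domain and rules below are placeholders):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# A robots.txt that blocks the entire site for all crawlers.
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# Googlebot is not allowed to *fetch* this URL...
print(rp.can_fetch("Googlebot", "https://example.com/private/"))  # False
# ...which also means it can never see a meta noindex on that page.
# If the URL is linked from elsewhere, it can still appear in the index URL-only.
```

So the two mechanisms solve different problems: robots.txt controls crawl access, while meta robots controls indexing of pages the crawler is allowed to fetch.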
Algorithm Updates | vtmoz
One of my pages doesn't appear in Google's search
Our page has been indexed (I just checked) but literally doesn't appear in the first 300 results, despite having a respectable DA and PA. Is there something I can do? There's no reason why this specific page shouldn't rank, as far as I can see. It's not a new page. Cheers, Rhys
Algorithm Updates | SwanseaMedicine
The Google Algo update that happened 1-8 is KILLING my rankings
Does anyone know what happened? I have a great website, we ranked very highly for a slew of industry keywords, #1 for most of our top-money keywords... and our rankings have been in freefall since the update. Help?!
Algorithm Updates | Sean_Gutermuth
With regards to SEO, is it good or bad to remove all the old events from our website?
Our website sells tickets for various events across the UK, and we have a LOT of old event pages which simply say SOLD OUT. What is the best practice? Should these event pages be removed, with a 301 redirect added to point to the home page? Or should these pages remain intact, simply showing SOLD OUT on the page?
Algorithm Updates | Alexogilvie
Why is a sub page ranking over home page?
Hey guys! I was wondering whether any of you Mozzers out there could shed some light on this query for me. Currently, one of our clients is ranking (on the second page, at least) for one of their target keywords. However, it's not the home page that is ranking; it is a sub page. I guess you could say both are targeted to rank for the keyword in question, but the home page has considerably more PA (+10) and a lot more incoming links, so it's a little baffling why the sub page has the advantage. Does anyone know why this may be? Also, on a secondary note, should I continue to build links to the home page, or target this particular sub page to have a better chance of ranking higher for the keyword? Any advice on this welcome! Cheers!
Algorithm Updates | Webrevolve
Quickest way to deindex a large number of pages
Our site was recently hacked by spammers posting fake content and bringing down our servers. After a few months, we finally figured out what was going on and fixed the issue. However, it turns out that Google has indexed 26K+ spammy pages, and we've lost PageRank and search engine rankings as a result. What is the best and fastest way to get these pages out of Google's index?
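One common approach (alongside Search Console's URL removal tool) is to return 410 Gone for the hacked URLs, which generally gets them dropped from the index faster than a 404. A minimal sketch of the routing decision, assuming hypothetical spam URL patterns — in practice you would derive the real patterns from your server logs:

```python
import re

# Hypothetical URL patterns the spammers used; adjust to your own logs.
SPAM_PATTERNS = [
    re.compile(r"^/cheap-[\w-]+\.html$"),
    re.compile(r"^/blog/\d+-replica-"),
]

def status_for(path: str) -> int:
    """Return 410 Gone for known spam URLs so crawlers drop them quickly;
    200 for everything else (a real app would fall through to normal routing)."""
    if any(p.match(path) for p in SPAM_PATTERNS):
        return 410
    return 200

print(status_for("/cheap-watches-outlet.html"))  # 410
print(status_for("/about-us"))                   # 200
```

The same pattern-matching logic could live in your web server config instead of application code; the point is to answer for the spam URLs with a definitive "gone" rather than letting them 404 or, worse, still resolve.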
Algorithm Updates | powpowteam
Ecommerce good/bad? Showing product description on sub/category page?
Hi Mozzers, I have an ecommerce furniture website, and I have been wondering for some time whether showing the product descriptions on sub/category pages helps the website. If there is more content displayed on the subcategory page, it should be more relevant, right? OR does it not matter, since it is duplicate content from the product page? I think showing the product descriptions on non-product pages is hurting my design/flow, but I worry that if I hide product content on sub/category pages my traffic will suffer. Despite my searches, I have not found an answer yet. Please take a look at my site and share your thoughts: http://www.ecustomfinishes.com/ Chris
Algorithm Updates | longdenc_gmail.com
ECommerce site being "filtered" by last Panda update, ideas and discussion
Hello fellow internet-goers! Just as a disclaimer, I have been following a number of discussions, articles, posts, etc. trying to find a solution to this problem, but have yet to find anything conclusive, so I am reaching out to the community for help. Before I get into the questions, some background: I help a team manage and improve a number of medium-to-large eCommerce websites. Traffic ranges anywhere from 2K to 12K+ per day, depending on the site. Back in March, one of our larger sites was "filtered" from Google's search results. I say "filtered" because we didn't receive any warnings and our domain was/is still listed in the first search position. About 2-3 weeks later another site was "filtered", and then 1-2 weeks after that, a third site. We have around ten niche sites in total; about seven of them share an identical code base (roughly an 80% match). This isn't that uncommon, since we use a CMS platform to manage all of our sites, which holds hundreds of thousands of category and product pages. Needless to say, April was definitely a frantic month for us. Many meetings later, we attributed the "filter" to duplicate content stemming from our product database and written content (shared across all of our sites). We decided to use rel="canonical" to address the problem. Exactly 30 days after being filtered, our first site bounced back (like it was never "filtered"); however, the other two sites remain "under the thumb" of Google. Now for some questions: Why would only 3 of our sites be affected by this "filter"/Panda if many of them share the same content? Is it a coincidence that it was an exact 30-day "filter"? Why has only one site recovered?
Algorithm Updates | WEB-IRS