Facets Being Indexed - What's the Impact?
-
Hi
Our facets are, from what I can see, being crawled by search engines. I think they use JavaScript - see here: http://www.key.co.uk/en/key/lockers
I want to get this fixed for SEO with an AJAX solution. I'm not sure how big a job this is for the developers, but they'll want to know the positive impact it could have and whether it's worth doing.
Does anyone have any opinions on this?
I haven't encountered this before, so any help is welcome.
-
I think I'd have to request these. I know it's something I need to look at, but I'm not sure how high a priority I should put on it.
Do you think it would make a huge difference if they were stopped from being crawled?
-
Hey Becky, I definitely question whether they're being crawled at all. Do you have access to your server logs? If so, you could use Screaming Frog's Log File Analyser (https://www.screamingfrog.co.uk/log-file-analyser/) to parse through them and see whether Googlebot is indeed hitting those pages. It would be worth the investigation!
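If you'd rather do a quick first pass before firing up a dedicated tool, the idea is simple enough to sketch. This assumes a standard combined-format access log and a hypothetical `?facet=` query parameter in the paths (the real facet URLs here use a `#` fragment, which never reaches the server - which is exactly what the logs would confirm); it's an illustration, not a substitute for the Log File Analyser:

```python
import re

GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)

def facet_hits(log_lines):
    """Return the request paths of Googlebot hits whose path mentions a facet."""
    hits = []
    for line in log_lines:
        # Combined log format: the request path follows the quoted HTTP method
        m = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if m and GOOGLEBOT.search(line) and "facet" in m.group(1):
            hits.append(m.group(1))
    return hits

sample = [
    '1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /en/key/lockers HTTP/1.1" 200 1234 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '5.6.7.8 - - [01/Jan/2024:00:00:02 +0000] "GET /en/key/lockers?facet=steel HTTP/1.1" 200 999 "-" "Mozilla/5.0 (Windows NT 10.0)"',
    '9.9.9.9 - - [01/Jan/2024:00:00:03 +0000] "GET /en/key/lockers?facet=steel HTTP/1.1" 200 999 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]
print(facet_hits(sample))  # ['/en/key/lockers?facet=steel'] - only the Googlebot facet request counts
```

Bear in mind the user-agent string can be spoofed, so for a real audit you'd also verify the hits by reverse DNS lookup of the requesting IPs.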
-
I'm confused as to whether they're even being crawled, given that Google ignores everything after the #.
Perhaps they're being crawled but not indexed...
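That's the crux of it: the fragment is never sent in the HTTP request, so every facet combination resolves to the same URL on the server side. Python's standard library makes this easy to see (the URL is the facet link from this thread):

```python
from urllib.parse import urldefrag

url = ("http://www.key.co.uk/en/key/multipurpose-storage-lockers"
       "#facet:-70000000000000105744949554832109109"
       "&productBeginIndex:0&orderBy:5&pageView:grid&")

# Browsers keep the fragment client-side; the HTTP request only contains
# everything before the '#', which is all a crawler's fetcher sees too.
request_url, fragment = urldefrag(url)
print(request_url)  # http://www.key.co.uk/en/key/multipurpose-storage-lockers
```

So from the server's point of view, every facet selection is a request for the same base page.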
-
Thanks, I'll do that as a starting point
-
It's a really interesting question, and I wonder if they are being crawled at all. The link destination on the facets in the right sidebar goes to /#, which shouldn't let search engines crawl those links.
Are you seeing these parameters in Search Console or your log files? That is where I would look to see if they are actually being hit by Googlebot.
If they are, then you should remove that anchor link and let the checkboxes activate the facets. Not sure how easy this is to do technically, but it's the right way to do it.
-
Hi John,
Yeah, I'm just trying to understand it all. Yes, that's what I mean - the facet link you've shown.
I just want to ensure I'm not wasting Googlebot's time crawling facets which don't need to be crawled.
I'm not so worried about the duplicate pages, as there's a canonical in place, but I don't think these facets are SEO-friendly - I'm trying to work out how to make them SEO-friendly.
-
Hey Becky, I see you posting a bunch about your technical SEO and internal linking/indexation discoveries. Great to see that you're digging in deep!
When you say a "facet", do you mean a link like this - http://www.key.co.uk/en/key/multipurpose-storage-lockers#facet:-70000000000000105744949554832109109&productBeginIndex:0&orderBy:5&pageView:grid& ?
If that's the case, that page has a canonical on it pointing back to the base URL, http://www.key.co.uk/en/key/multipurpose-storage-lockers, but you should take a look at your server logs (this is a good place to start - https://builtvisible.com/log-file-analysis/) to see whether these are being hit by Googlebot.
Just trying to figure out what you're asking so I can try to help!
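For what it's worth, if you want to spot-check the canonical on a handful of these facet URLs without running a full crawl, a minimal standard-library sketch might look like this (the HTML snippet below is a stand-in for a fetched page source, not the actual markup of the site):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

page = ('<head><link rel="canonical" '
        'href="http://www.key.co.uk/en/key/multipurpose-storage-lockers"/></head>')
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # http://www.key.co.uk/en/key/multipurpose-storage-lockers
```

If the canonical on every facet variant points at the clean base URL, the duplicate-content side of the problem is at least being signalled correctly, and the remaining question is purely about wasted crawl.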
Related Questions
-
URL Injection Hack - What to do with spammy URLs that keep appearing in Google's index?
A website was hacked (URL injection), but the malicious code has been cleaned up and removed from all pages. However, whenever we run a site:domain.com search in Google, we keep finding more spammy URLs from the hack. They all lead to a 404 error page since the hack was cleaned up in the code. We have been using the Google WMT Remove URLs tool to have these spammy URLs removed from Google's index, but new URLs keep appearing every day. We looked at the cache dates on these URLs and they vary, but none are recent and most are from a month ago when the initial hack occurred. My question is: should we continue to check the index every day and keep submitting these URLs to be removed manually? Or, since they all lead to a 404 page, will Google eventually remove these spammy URLs from the index automatically? Thanks in advance, Moz community, for your feedback.
Intermediate & Advanced SEO | peteboyd
-
How should I handle URLs created by an internal search engine?
Hi, I'm aware that internal search result URLs (www.example.co.uk/catalogsearch/result/?q=searchterm) should ideally be blocked using the robots.txt file. Unfortunately the damage has already been done, and a large number of internal search result URLs have already been created and indexed by Google. I have double-checked, and these pages only account for approximately 1.5% of traffic per month. Is there a way I can remove the internal search URLs that have already been indexed and then stop this from happening in the future? I presume the last part would be to disallow /catalogsearch/ in the robots.txt file. Thanks
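For the "stop this from happening in the future" part, the disallow the question describes is a two-line robots.txt rule, and you can sanity-check it before deploying with Python's standard library (a sketch using the URLs from the question):

```python
from urllib.robotparser import RobotFileParser

# The proposed rule: block all crawlers from internal search results
robots_txt = """\
User-agent: *
Disallow: /catalogsearch/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Internal search results are blocked; normal pages remain crawlable
print(parser.can_fetch("*", "http://www.example.co.uk/catalogsearch/result/?q=searchterm"))  # False
print(parser.can_fetch("*", "http://www.example.co.uk/some-product.html"))  # True
```

One caveat worth knowing: robots.txt blocks crawling, not indexing, so the URLs that are already in the index may linger until they're removed or drop out on their own.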
Intermediate & Advanced SEO | GrappleAgency
-
Google Indexed my Site then De-indexed a Week After
Hi there, I'm working on getting a large e-commerce website indexed and I am having a lot of trouble. The site is www.consumerbase.com. We have about 130,000 pages and only 25,000 are getting indexed. I use multiple sitemaps so I can tell which product pages are indexed, and we need our "Mailing List" pages the most - http://www.consumerbase.com/mailing-lists/cigar-smoking-enthusiasts-mailing-list.html. I submitted a sitemap a few weeks ago of a particular type of product page and about 40k of the 43k pages were indexed - great! A week ago Google de-indexed almost all of those new pages. Check out this image; it kind of boggles my mind and makes me sad: http://screencast.com/t/GivYGYRrOV. While these pages were indexed, we immediately received a ton of traffic to them, making me think Google liked them. I think our breadcrumbs, site structure, and "customers who viewed this product also viewed" links would make the site extremely crawlable. What gives? Does it come down to our site not having enough Domain Authority? My client really needs an answer about how we are going to get these pages indexed.
Intermediate & Advanced SEO | Travis-W
-
Report card shows many F's. How do I specify keywords for pages?
I have been doing general on-page optimization, but I still have many F's because SEOMoz considers the pages to be weak for keywords that aren't actually relevant. Is there a way to specify keywords for specific pages so I can get a more accurate report card?
Intermediate & Advanced SEO | Ocularis
-
How to find all of a website's SERPs?
I was wondering: what's the easiest way to find all of a website's existing SERPs?
Intermediate & Advanced SEO | McTaggart
-
Inspiration from today's WBF!
Hello and welcome, MozFriends! I watched the WBF this morning, and I got the idea of making keyword tiers for a site, like so. Site products: wheelchairs, powerchairs, hospital beds, lifts, lift chairs. Specific items: 16" wheelchairs, 4-wheel power chairs, patient lifts, and such. The keywords for the front page would be very general, not referencing the site's specific items at all - things like "medical equipment" and "supplies". Keywords for products would be the manufacturers' names and the category name. Specific items would have specific keywords to draw an audience that has a goal and is searching for that specific product. So my theory/experiment is this: instead of making the whole site generate traffic for one type of audience, I am making certain tiers for certain audiences. The higher up in the site hierarchy, the more global the keywords are designed to be. It may just be complete and utter nonsense, but I would like to hear any thoughts on whether it works. Thank you, friends! Justin Smith
Intermediate & Advanced SEO | FrontlineMobility
-
Culling 99% of a website's pages. Will this cause irreparable damage?
I have a large travel site with over 140,000 pages. The problem I have is that the majority of pages are filled with duplicate content. When Panda came in, our rankings were obliterated, so I am trying to isolate the unique content on the site and go forward with that. The problem is, the site has been going for over 10 years, with every man and his dog copying content from it. It seems that our travel guides have been largely left untouched and are the only unique content I can find. We have 1,000 travel guides in total. My first question is: would reducing 140,000 pages to just 1,000 ruin the site's authority in any way? The site does use internal linking within these pages, so culling them will remove thousands of internal links throughout the site. Also, am I right in saying that the link juice should now move to the more important pages with unique content, if redirects are set up correctly? And finally, how would you go about redirecting all these pages? I will be culling a huge number of hotel pages; would you consider redirecting all of these to the generic hotels page of the site? Thanks for your time, I know this is quite a long one. Nick
Intermediate & Advanced SEO | Townpages
-
We are changing ?page= dynamic URLs to /page/ static URLs. Will this hurt the progress we have made with the pages using dynamic addresses?
Question about changing URLs from dynamic to static to improve SEO, with a concern about hurting progress made so far.
Intermediate & Advanced SEO | h3counsel