How much of an issue is JS?
-
Hey folks,
So, I have two pages. Page A has a lot more content, but it's in a tabbed format that relies on JavaScript, and its title tag is a synonym for our keyword rather than the exact keyword. Page B has less content, but its title tag is the exact keyword phrase we want to rank for. Page A also has a bigger backlink profile (though not enormous by any stretch).
Page A ranks 30th. Page B ranks 7th.
Importance of Title tag? Importance of JS? Both?
Discuss!
Cheers,
Rhys
-
Hi SwanseaMedicine,
Have a read of this hidden content experiment by Reboot Online: https://www.rebootonline.com/blog/hidden-text-experiment/
It was a very well-run experiment and, in summary, they found that visible content outperformed hidden content.
However, this should change once Google's mobile-first index rolls out (sometime in 2018?), at which point hidden content will be given full weight (source).
Cheers,
David
-
Google is all about serving the best experience with the best content. When you put tabbed content on a page, especially if the tabs cover multiple topics, you are watering that page down. And because a portion of the content starts out hidden, it also makes for a worse user experience: people have to click just to get to your content.
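For anyone unsure what that looks like in markup, here's a rough sketch of a typical tab setup (the class names and content are made up for illustration): all the text is in the HTML source, but every panel except the active one is display:none when the page loads, which is exactly the kind of hidden-on-load content the Reboot experiment found gets discounted.

<!-- Hypothetical tab markup: content is in the initial HTML,
     but only the active panel is visible on page load. -->
<ul class="tabs">
  <li><a href="#specs" class="active">Specs</a></li>
  <li><a href="#reviews">Reviews</a></li>
</ul>

<div id="specs" class="panel">Visible on load...</div>
<div id="reviews" class="panel" style="display:none">
  Hidden until the user clicks the tab. Google can usually still
  crawl this text, but it may be weighted less than visible copy.
</div>

<script>
  // Toggle panels: hide every panel, then show the one whose tab was clicked.
  document.querySelectorAll('.tabs a').forEach(function (tab) {
    tab.addEventListener('click', function (e) {
      e.preventDefault();
      document.querySelectorAll('.panel').forEach(function (panel) {
        panel.style.display = 'none';
      });
      document.querySelector(tab.getAttribute('href')).style.display = 'block';
    });
  });
</script>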
-
Hey Michael,
Thanks for your response. The question, I suppose, is why doesn't it rank as well? Does Google not value tabbed content as highly? Or does it struggle to fetch and render it because it's tabbed? In my opinion, that seems to be the biggest factor in the difference between the two pages.
Cheers,
Rhys
-
Without knowing more, I would guess the issue is that tabbed content does not perform as well as content that is always displayed on the page. Always look to your content first, then worry about things like title tags.
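One quick sanity check you can run yourself before worrying about weighting: open Page A in Chrome and test in the DevTools console whether the tabbed text is actually in the rendered DOM (swap in a real phrase from one of your hidden tabs):

// Paste into the DevTools console on Page A.
// true  = the tabbed text is at least present in the rendered DOM
// false = it never makes it into the DOM, so Google can't see it at all
document.body.textContent.includes('some phrase from a hidden tab');

// Also compare against View Source: if the phrase only appears after
// JavaScript runs, rendering (not just hidden-content weighting)
// could be part of the ranking gap.

You can also run the page through Fetch and Render in Search Console to get a rough idea of what Googlebot itself renders.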