JavaScript late-loaded content not read by Googlebot
-
Hi,
We have a page with some good "keyword" content (a user-supplied comment widget), but a design choice was made previously to late-load it via JavaScript. This was to improve performance, and the overall functionality relies on JavaScript. Unfortunately, since it is loaded via JavaScript, it isn't read by Googlebot, so we get no SEO value.
I've read that Google doesn't weigh <noscript> content as much as regular content. Is this true? One option is just to load some of the content via <noscript> tags; I just want to make sure Google still reads this content.
Another option is to load some of the content as plain HTML when the page loads. If JavaScript is enabled, we'd hide this "read-only" version via CSS and display the more dynamic, user-friendly version. Would changing the display based on whether JavaScript is enabled be deemed cloaking? Non-JS users would see the same thing (and this gives them a way to see some of the widget's functionality, so it's an overall net gain for those users too).
In the end, I want Google to read the content, but I'm trying to figure out the best way to do so.
Thanks,
Nic
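For illustration, the fallback-plus-hide approach described above might look like this minimal sketch. The ids, comment text, and script path are assumptions, not details from the original page:

<!-- Server-rendered fallback: crawlers and non-JS users see the comments. -->
<div id="comments-fallback">
  <p>Great widget, works as advertised! (example comment)</p>
</div>

<!-- Placeholder the dynamic widget renders into. -->
<div id="comments-widget"></div>

<script>
  // With JavaScript available, hide the static version and load the
  // dynamic widget in its place.
  document.getElementById('comments-fallback').style.display = 'none';
  var s = document.createElement('script');
  s.src = '/js/comments-widget.js'; // assumed path to the widget loader
  document.body.appendChild(s);
</script>

The fallback stays in the initial HTML either way, so there is nothing served to Googlebot that a JavaScript-disabled visitor wouldn't also see.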
-
If the content is loaded too late, you're right, Googlebot may not grab it. However, Google is getting better and better at indexing AJAX content that's loaded after the fact. On one of the sites I work on, we really didn't want to go through the whole process of serving up an HTML snapshot to Googlebot (outlined at http://code.google.com/web/ajaxcrawling/). About a month ago, I did a search in Google based on the AJAX content, and it returned the page, meaning Google is finding that AJAX content and indexing it! They're indexing comments now as well (see http://www.searchenginejournal.com/google-indexing-facebook-comments/35594/), like Disqus and Facebook comments. What kind of comments widget are you loading that Google can't get at? Maybe they'll be able to index them soon?
I would guess that Google would devalue <noscript> text, as almost everyone has JavaScript enabled; otherwise, everyone would be keyword stuffing their <noscript> tags.
The option you outlined sounds like it could work if you're just taking the content from JavaScript and loading it in the HTML when the user doesn't have JavaScript enabled. Google's AJAX crawling guide actually suggests serving Googlebot a static page instead of the page with the AJAX content, which seems much closer to cloaking than the option you're suggesting.
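For reference, the HTML-snapshot scheme in that guide maps a "#!" URL fragment to an "_escaped_fragment_" query parameter that the crawler requests instead (so http://example.com/page#!comments is fetched as http://example.com/page?_escaped_fragment_=comments). A minimal server-side sketch, assuming a Node/Express app; the route and render helpers are hypothetical:

var express = require('express');
var app = express();

// Hypothetical helpers: a pre-rendered snapshot vs. the normal AJAX page.
function renderHtmlSnapshot() {
  return '<html><body><p>Example comment text rendered as static HTML.</p></body></html>';
}
function renderAjaxPage() {
  return '<html><body><div id="comments-widget"></div>' +
         '<script src="/js/comments-widget.js"></script></body></html>';
}

app.get('/page', function (req, res) {
  if (req.query._escaped_fragment_ !== undefined) {
    // Googlebot asked for the snapshot version of the hashbang URL.
    res.send(renderHtmlSnapshot());
  } else {
    // Regular visitors get the page that late-loads comments via JavaScript.
    res.send(renderAjaxPage());
  }
});

app.listen(3000);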
Related Questions
-
Buying Domains with Keywords but no PA, no content
MOZ Community, I am trying to gauge both the potential upside and downside of buying a few (relatively long) URLs that encompass some new keywords surfacing in our industry and creating permanent redirects from them to our branded website. [This wasn't my idea!] These URLs haven't previously had any content or owners, so their domain authority is low. Will Google still ding us for this behavior? I hope not, but I worry that there might be some penalty for having a bunch of redirects pointing at our site. I have read that Google will penalize you for buying content-rich sites with high DA and redirecting those URLs to your site, but I am unclear about this other approach. It seems like a fairly mundane (and fruitless) play. I tried to explain that we won't reap any SEO rewards for owning these URLs (if there is no content), but that wasn't really heard. Thanks for any resources or information you can share!
Technical SEO | ColleenHeadLight
-
How to allow bots to crawl all but WP-content
Hello, I would like my website to remain crawlable to bots, but to block my wp-content and media. Does the following robots.txt work? I worry that the * user-agent group may conflict with the others.

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/

User-agent: GoogleBot
Allow: /

User-agent: GoogleBot-Mobile
Allow: /

User-agent: GoogleBot-Image
Allow: /

User-agent: Bingbot
Allow: /

User-agent: Slurp
Allow: /
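One note, offered as a sketch rather than a definitive answer: under the robots.txt rules that Google and Bing document, a crawler obeys only the most specific user-agent group that matches it and ignores all the others. So GoogleBot, Bingbot, and Slurp would follow their own blanket Allow: / groups here and crawl wp-content anyway. If the intent is to block those paths for every bot, the single catch-all group should be enough on its own:

User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/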
Technical SEO | Tom3_15
-
If content is at the bottom of the page but the code is at the top, does Google know that the content is at the bottom?
I'm working on creating content for top category pages for an ecommerce site. I can put it under the left-hand navigation bar, and that content would be near the top of the code. I can also put the content at the bottom center, where it would look nicer but be at the bottom of the code. What's the better approach? Thanks for reading!
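As an aside, source order and visual order don't have to match; a minimal sketch of keeping the copy early in the code while displaying it at the bottom, using flexbox order (class names and copy are illustrative):

<main class="category-page">
  <!-- First in the source, so crawlers encounter it early... -->
  <section class="category-copy">Keyword-rich category description</section>
  <!-- ...but displayed above the copy because of the CSS order below. -->
  <section class="product-grid">Product listings</section>
</main>
<style>
  .category-page { display: flex; flex-direction: column; }
  .category-copy { order: 2; } /* displayed last */
  .product-grid  { order: 1; } /* displayed first */
</style>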
Technical SEO | DA2013
-
Duplicate content due to csref
Hi, when I go through my pages, I can see that a lot of my csref codes result in duplicate content when SEOmoz runs its analysis of my pages. Of course I get important knowledge through my csref codes, but I'm quite uncertain how much it affects my SEO results. Does anyone have any insights on this? Should I be more cautious about using csref codes, or doesn't it create problems big enough for me to worry about?
Technical SEO | Petersen11
-
Do dropdowns count as unique content?
My current site has some extensive unique database content by "widget" type. Currently we display this info in regular HTML elements, but we are considering utilizing this data in a dropdown field on each respective widget page. I want to ensure we don't have thin content... Does the content within the <option> tags of a dropdown count towards unique content?
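For concreteness, this is the pattern the question is asking about; the field name and spec text are made up:

<select name="widget-specs">
  <option value="1">2.5 kg, anodized aluminum, 3-year warranty</option>
  <option value="2">3.1 kg, stainless steel, 5-year warranty</option>
</select>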
Technical SEO | TheDude
-
Entry-based content and SEO
My e-commerce team is implementing functionality that allows us to display different content based on what channel, and even what keyword, customers used to reach our page. This is of course a move that we believe will strengthen our conversion rates, but how will it affect our organic search listings? Do you guys have any examples of how this could affect us, and are there any technology pitfalls that we absolutely need to know about?
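To make the setup concrete, a client-side version of this kind of entry-based swap might look like the sketch below; the element id, referrer checks, and copy are all assumptions about an implementation the question doesn't detail:

<script>
  // Swap hero copy based on where the visitor came from.
  var ref = document.referrer || '';
  var hero = document.getElementById('hero-copy'); // assumed element
  if (hero) {
    if (ref.indexOf('google.') !== -1) {
      hero.textContent = 'Copy aimed at search visitors';
    } else if (ref.indexOf('facebook.com') !== -1) {
      hero.textContent = 'Copy aimed at social visitors';
    }
  }
</script>

One consequence worth noting for the organic-listings question: a crawler arrives with no referrer, so it would only ever see the default copy.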
Technical SEO | GEMoney_No
-
Duplicate content
I am getting flagged for duplicate content; SEOmoz is flagging the following as duplicates: www.adgenerator.co.uk/ and www.adgenerator.co.uk/index.asp. These are obviously meant to be the same path, so what measures do I take to let the search engines know these are to be considered the same page? I have used the canonical meta tag on the index.asp page.
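For reference, that canonical link in the <head> of index.asp would look like the line below, using the domain from the question; a server-side 301 redirect from /index.asp to the root is the other common fix:

<!-- In the <head> of /index.asp -->
<link rel="canonical" href="http://www.adgenerator.co.uk/" />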
Technical SEO | IPIM
-
Canonical pagination content
Hello, we have a large ecommerce site, and as you know, ecommerce sites have canonical issues. I have read various sources on how best to handle canonicals on an ecommerce site, but I am not sure yet. My concern is pagination: on a category's product listing pages, each paginated page shows different products, but the meta data is the same. Should I canonicalize, say, page 2 or 3 to the main category page, or keep them as they are so those pages get indexed? Another issue is filters: when I filter any page by price or manufacturer, the page is basically the same, so it seems like a duplicate content issue. Should I canonical only those result types to the category page? So basically, if I let Google crawl my pagination content and only canonicalize the filtered results, would that be best practice? And would Google Webmaster Tools' parameter handling be helpful in this scenario? Please feel free to ask if you have any queries. Regards,
Carl
Technical SEO | CNMOnline28
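A sketch of the split the question is leaning toward, with assumed URLs: paginated pages stay self-canonical (optionally linked with rel="prev"/"next"), while filtered variants canonicalize back to the unfiltered page:

<!-- On /category?page=2 : self-canonical, with sequence hints -->
<link rel="canonical" href="http://example.com/category?page=2" />
<link rel="prev" href="http://example.com/category?page=1" />
<link rel="next" href="http://example.com/category?page=3" />

<!-- On /category?page=2&manufacturer=acme : point back to the unfiltered page -->
<link rel="canonical" href="http://example.com/category?page=2" />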