Is using JavaScript to simplify information architecture considered cloaking?
-
We are considering using JavaScript to format URLs so as to simplify Googlebot's navigation through our site, whilst presenting a larger number of links to the user to ensure content is accessible and easy to navigate from all parts of the site. In other words, the user will see all internal links, but the search engine will see only those links that form our information hierarchy.
We would therefore be showing the search engine different content from the user only insofar as the search engine would see a more hierarchical information architecture, by virtue of the fact that fewer links would be visible to it, ensuring that our content is well structured and discoverable.
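To illustrate, the approach would look something like the sketch below (the container and URLs are hypothetical): the hierarchical links sit in the raw HTML, and the extra cross-links are injected client-side, so a crawler that does not execute JavaScript would never see them.

```javascript
// Hypothetical sketch of the approach described above (not a recommendation).
// The raw HTML contains only the hierarchical navigation links; the extra
// cross-links are injected client-side, so a crawler that does not execute
// JavaScript never sees them.
document.addEventListener('DOMContentLoaded', function () {
  var nav = document.querySelector('#secondary-nav'); // hypothetical container
  if (!nav) return;

  // Hypothetical tier 3 links shown to users but hidden from non-JS crawlers.
  var crossLinks = [
    { href: '/tier3/product-123', text: 'Product 123' },
    { href: '/tier3/product-456', text: 'Product 456' }
  ];

  crossLinks.forEach(function (link) {
    var a = document.createElement('a');
    a.href = link.href;
    a.textContent = link.text;
    nav.appendChild(a);
  });
});
```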
Would this be considered cloaking by Google, and would we be penalised?
-
Pagination is just links. Google can follow the links.
How you set up and offer your pages is important, especially for areas with a lot of pages.
If you have 40 pages of content, then I would recommend a structure that offers pages something like "1, 2, 3, ... 20 ... 40". If you don't offer a middle selection, the pages in the middle will probably never be crawled.
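As a sketch (the URLs are hypothetical), that pagination is just ordinary anchor links that any crawler can follow:

```html
<!-- Hypothetical pagination bar for 40 pages of results: plain crawlable
     links, with a middle waypoint so deep pages stay within a few clicks. -->
<nav class="pagination">
  <a href="/products?page=1">1</a>
  <a href="/products?page=2">2</a>
  <a href="/products?page=3">3</a>
  <span>&hellip;</span>
  <a href="/products?page=20">20</a>
  <span>&hellip;</span>
  <a href="/products?page=40">40</a>
</nav>
```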
-
Does Googlebot follow the pagination of search results? All our product pages are on the third tier, so their discovery would rely on Google following pagination if we cannot use our original approach to information architecture (i.e. using JavaScript to channel Googlebot to discover our tier 3 pages).
Thanks for your help!
-
Search engines will determine how deep to crawl a site based on its importance. You can use the Domain Authority and Page Authority metrics to gauge this factor.
In general, you want your content to be a maximum of three clicks from your landing page. If you have buried your content deeper, consider either flattening out your architecture or adding links to the buried content. It is also very helpful to build external links to the deeper content, which will help search engines discover those pages.
-
Ryan is right... you shouldn't do this. If you want to help the crawlers find their way through your site, you could submit an XML sitemap instead.
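A minimal sitemap is just an XML file listing the URLs you want crawled (the URL and date below are placeholders); include your tier 3 product pages in it and submit it through Google Webmaster Tools:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sitemap entry; add one <url> element per page. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/tier3/product-123</loc>
    <lastmod>2013-01-15</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```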
-
Hi Ryan
We use a navigation bar in the header, which means there are a large number of on-page links and no clear way to determine our information architecture from our internal link structure; i.e. many pages at different levels of our hierarchy can be accessed from every page on the site.
Is this an issue? Or will the URL structure be sufficient for the search engines to categorise our content? How can we help the search engines discover content at level 3 of our hierarchy if we insist on using a header navigation bar, which we believe gives a good user experience?
Thanks!!
-
I have to agree with Ryan. Yes, it's cloaking... and if you get caught, you could, and most likely would, be penalized.
-
The actions you are describing define cloaking and would be penalized.
If that practice were allowed, it would be severely abused. Sites would remove links they consider less desirable, such as links to their privacy policy pages. Sites might also add links.
Search engines insist upon seeing the same content that a user would see.