Is using JavaScript to simplify information architecture considered cloaking?
-
We are considering using JavaScript to format URLs in order to simplify Googlebot's navigation through our site, whilst presenting a larger number of links to the user to ensure content is accessible and easy to navigate from all parts of the site. In other words, the user will see all internal links, but the search engine will see only those links that form our information hierarchy.
We would therefore be showing the search engine different content from the user only insofar as the search engine would see a more hierarchical information architecture, by virtue of the fact that fewer links would be visible to it; the aim is to ensure that our content is well structured and discoverable.
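Concretely, the idea is something like the following minimal sketch (the URLs, labels and element ID are placeholders, not our real site): the static HTML carries only the hierarchy links, and the extra convenience links are added client-side.

```typescript
// Sketch of the proposed setup (placeholder URLs and IDs).
// The static HTML contains only the hierarchical links, e.g.:
//   <nav id="main-nav"><a href="/category/widgets">Widgets</a></nav>
// Extra cross-links are injected on load, so a crawler that does
// not execute JavaScript would see only the hierarchy.
const extraLinks: { href: string; label: string }[] = [
  { href: "/category/widgets/blue-widget", label: "Blue widget" },
  { href: "/offers/clearance", label: "Clearance" },
];

document.addEventListener("DOMContentLoaded", () => {
  const nav = document.getElementById("main-nav");
  if (!nav) return;
  for (const link of extraLinks) {
    const a = document.createElement("a");
    a.href = link.href;
    a.textContent = link.label;
    nav.appendChild(a);
  }
});
```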
Would this be considered cloaking by Google, and would we be penalised?
-
Pagination is just links. Google can follow the links.
How you set up and offer your pages is important, especially for areas with a lot of pages.
If you have 40 pages of content, then I would recommend a structure that offers page links something like "1, 2, 3, ... 20 ... 40". If you don't offer a middle selection, the content in the middle will probably never be seen.
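As a rough sketch of that selection logic (the midpoint rule here is just illustrative):

```typescript
// Sketch: pick which page numbers to expose in pagination so that
// deep pages stay reachable, e.g. 1, 2, 3, ... 20 ... 40.
function visiblePages(current: number, total: number): number[] {
  const pages = new Set<number>([1, 2, 3, Math.ceil(total / 2), total]);
  // Always keep the current page and its neighbours clickable.
  for (const p of [current - 1, current, current + 1]) {
    if (p >= 1 && p <= total) pages.add(p);
  }
  return [...pages].sort((a, b) => a - b);
}

console.log(visiblePages(1, 40)); // [1, 2, 3, 20, 40]
```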
-
Does Googlebot follow pagination of search results? All our product pages are on the third tier, and their discovery would rely on Google following pagination if we cannot use our original approach to information architecture (i.e. using JavaScript to channel Googlebot to discover our tier-3 pages).
Thanks for your help!
-
Search engines will determine how deep to crawl a site based on its importance. You can use the Domain Authority and Page Authority metrics to estimate this factor.
In general, you want your content to be a maximum of three clicks from your landing page. If you have buried your content deeper, consider either flattening out your architecture or adding links to the buried content. It is also very helpful to build external links to the deeper content, which will help search engines discover those pages.
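One quick way to measure click depth (a sketch; the internal-link graph below is a made-up example) is a breadth-first walk from the landing page:

```typescript
// Sketch: compute each page's click depth from the home page via
// breadth-first search over a map of internal links.
const links: Record<string, string[]> = {
  "/": ["/category/widgets", "/about"],
  "/category/widgets": ["/category/widgets/page-2"],
  "/category/widgets/page-2": ["/product/blue-widget"],
};

function clickDepths(start: string): Map<string, number> {
  const depth = new Map<string, number>([[start, 0]]);
  const queue = [start];
  while (queue.length > 0) {
    const page = queue.shift()!;
    for (const target of links[page] ?? []) {
      if (!depth.has(target)) {
        depth.set(target, depth.get(page)! + 1);
        queue.push(target);
      }
    }
  }
  return depth;
}

console.log(clickDepths("/")); // "/product/blue-widget" sits at depth 3
```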
-
Ryan is right... you shouldn't do this. If you want to help the crawlers find their way through your site, you could submit a sitemap?
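For example, a minimal sitemap can be generated along these lines (a sketch; the URLs are placeholders, and the resulting XML file is what you would submit to the search engines):

```typescript
// Sketch: build a minimal XML sitemap from a list of URLs.
function buildSitemap(urls: string[]): string {
  const entries = urls
    .map((url) => `  <url><loc>${url}</loc></url>`)
    .join("\n");
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    entries,
    "</urlset>",
  ].join("\n");
}

console.log(buildSitemap([
  "https://www.example.com/",
  "https://www.example.com/category/widgets",
  "https://www.example.com/product/blue-widget",
]));
```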
-
Hi Ryan
We use a navigation bar in the header, which means that there are a large number of on-page links and no clear way to determine our information architecture from our internal link structure; i.e. many pages at different levels of our information architecture can be accessed from every page on the site.
Is this an issue? Or will the URL structure be sufficient for the search engines to categorise our content? How can we help the search engines discover content at level 3 of our hierarchy if we insist on using a navigation bar in the header, which we believe gives a good user experience?
Thanks!!
-
I have to agree with Ryan. Yes, it's cloaking... and if you get caught, you could, and most likely would, be penalized.
-
The actions you are describing define cloaking and would be penalized.
If that process were allowed, it would be severely abused: sites would remove links they find less desirable, such as the link to their privacy page, and sites might also add links.
Search engines insist upon seeing the same content that a user would see.
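If you want to verify that you are serving the same thing to both, one rough check (a sketch assuming Node 18+ and its built-in fetch; the URL and user-agent strings are illustrative) is to compare the links returned for a browser user agent against those returned for Googlebot's:

```typescript
// Sketch: compare the anchor hrefs served to a browser-like user
// agent against those served to a Googlebot user agent.
async function linksFor(url: string, userAgent: string): Promise<Set<string>> {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  const html = await res.text();
  return new Set([...html.matchAll(/href="([^"]+)"/g)].map((m) => m[1]));
}

async function main() {
  const url = "https://www.example.com/";
  const browser = await linksFor(url, "Mozilla/5.0");
  const bot = await linksFor(url, "Googlebot/2.1 (+http://www.google.com/bot.html)");
  const hiddenFromBot = [...browser].filter((href) => !bot.has(href));
  console.log("Links the bot does not receive:", hiddenFromBot);
}

main();
```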
Related Questions
-
Brushing up on my SEO skills - how do I check my website to see if JavaScript is blocking search engines from crawling the links within a JavaScript-enabled drop-down menu?
I set my user agent in my Chrome browser to Googlebot and disable JavaScript within my Chrome settings, but then what?
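One simple next step (a sketch assuming Node 18+ fetch; the URL and menu paths are placeholders) is to check whether the menu links appear in the raw, unrendered HTML at all, since that is what a non-JavaScript crawler receives:

```typescript
// Sketch: check whether given menu links exist in the raw HTML
// that a crawler would fetch before any JavaScript runs.
const menuLinks = ["/services", "/services/seo-audit"]; // placeholders

async function check(url: string): Promise<void> {
  const html = await (await fetch(url)).text();
  for (const link of menuLinks) {
    const present = html.includes(`href="${link}"`);
    console.log(`${link}: ${present ? "in static HTML" : "JS-only (may not be crawled)"}`);
  }
}

check("https://www.example.com/");
```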
Technical SEO | MagnitudeSEO
-
Is using Google AdWords good?
I heard that if you use AdWords, Google drops your ranking a little bit, because you already pay money for results. I think that is reasonable.
Technical SEO | umutege
-
Does anyone have experience with SEO and 301 redirects in .NET?
A while ago I altered some of the URLs of my website. Google now thinks that I have two duplicate pages (duplicate content), so I have asked my third-party web developers (who use .NET and a custom-built CMS) to simply 301 redirect the old URL to the other. However, my web developers say the following: "Solving the problems by 301 permanent redirects is out of the question, as this would create infinite loops, likely to bring down our server." They also won't add a canonical, as they say there is only one page (but two URLs). Firstly, has anyone heard of this before, and do they think it is true? Also, does anyone have an alternative method of getting rid of the old URL? Any thoughts would be much appreciated.
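For what it's worth, a one-to-one 301 from an old URL to a different new URL cannot loop by itself; a loop only arises if a rule also fires on its own destination. A sketch of that guard (shown as generic TypeScript rather than .NET; the paths are placeholders):

```typescript
// Sketch: an exact-match 301 map. Redirecting only exact old paths
// to different new paths cannot produce an infinite loop.
const redirects: Record<string, string> = {
  "/old-page": "/new-page", // placeholder URLs
};

function handle(path: string): { status: number; location?: string } {
  const target = redirects[path];
  if (target !== undefined && target !== path) {
    return { status: 301, location: target };
  }
  return { status: 200 };
}

console.log(handle("/old-page")); // { status: 301, location: "/new-page" }
console.log(handle("/new-page")); // { status: 200 }
```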
Technical SEO | CoGri
-
Help with onpage keyword optimization, site architecture, and how those aspects affect the SERPs.
Hey guys, I've made a post or two before, but my story is that I've been learning SEO for a while now and have only recently (in the last four months) had the opportunity to actually apply what I've been reading about. What I've learned while trying to put these things into practice is that it can be pretty tough sledding, even when it comes to basic elements like keywords and search results.

Anyway, to the good stuff. I've been helping my brother's startup company in my spare time because I want them to do well. They're on the last legs of their Series A funding and have no money to put towards SEO, content marketing or social, so I'm helping when and where I can for free. The company is Maluuba, a Siri-like personal assistant app for Android with a ton of different domains. They launched at TechCrunch Disrupt and actually have a lot of traction and a fair amount of publicity, so I'm not exactly working with scraps, but I don't work with them in their offices and only really communicate with my brother, who is having a really hard time getting buy-in for some of the stuff I want them to do.

Their initial website was pretty terrible, so my brother got the okay to redesign the site and, together, we worked with a designer to implement the site I linked to. Because they have so many domains (search, social, organization), I thought creating specific pages along with one homepage would be a good way to optimize for different things and funnel a wider audience towards the one macro goal of the site: getting people to download the app. The results haven't been exactly what I expected, and I fear I didn't correctly implement what I still think is a good plan.

I've only tried to optimize the pages for a few keywords to start. The main keyword for the homepage, and indeed the brand, is 'personal assistant app', a fairly competitive keyword that I now have them ranking second for on Google CA. I used 'siri-alternative' as a secondary keyword, since that's how they label themselves in the Play Store. For the three other main pages (search, social, organization) I used 'personal assistant app' as a secondary keyword and tried to optimize each page for 'search app', 'social app' and 'organizer app', respectively. While I'm really quite proud that I managed to get a page ranking in the top three for our main keyword, I'm just as disappointed that it's the search page and not the homepage, mainly because I have no idea why it's happening.

So, all of that to ask a few questions: Did I make a mistake by trying to add funnels to the site? Or did I just go about optimizing the pages incorrectly? Why does the search page rank really, really well for 'personal assistant app' while the other pages - including the one I intended to rank highest for that term - lag behind? I'd guess that Google is indexing this page alone as the main representative of 'personal assistant app', but that wasn't my intention. I'm also not using any rel=canonical tags, if that matters. Also, this page has been flipping around in the 1-3 range in the SERPs for about a month, but I still haven't noticed any traffic from 'personal assistant app'.

Alright, this is getting way too long. I'd very much appreciate any and all insights as to what I'm doing wrong or what I'm missing. It could be really obvious and thus make this post silly, but I really have read and tried to learn a lot; I just can't see what's going on here because I don't have any experience to compare it to. Thanks in advance for any help. Cheers, JD
Technical SEO | JDMcNamara
-
Can the Breadcrumb Trail be used as the H1 tag?
We have recently discovered that the X-Cart sites we run don't have H1 tags. Can the breadcrumb trail on the category and sub-category pages be used as the H1 tag?
Technical SEO | heathshowman
-
Using the Canonical URLs option in Platinum SEO for WordPress
SEOmoz says that my site has 150 canonical URLs and lists that as a potential problem. It's a checkbox in the settings for Platinum SEO, and here is the description it provides: "Choose this option to set up canonical URLs for your Home page, Single Post, Category and Tag Pages." I have the option engaged, so I was trying to figure out the best thing to do. I have already instructed it to automatically create 301 redirects for any permalink changes and to "noindex" tag archives, RSS comment feeds, and RSS feeds. I've only been doing this for about a year and am really confused right now. After reading most of your posts about the subject I have a much better understanding, but I'm still very confused. Help... please...
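For reference, a canonical URL is just a hint in the page head that points duplicate URLs at one preferred URL; a sketch with a placeholder address:

```typescript
// Sketch: a canonical tag tells search engines which URL is the
// preferred version when several URLs serve the same content.
function canonicalTag(preferredUrl: string): string {
  return `<link rel="canonical" href="${preferredUrl}" />`;
}

// e.g. both /post/ and /post/?replytocom=1 would carry:
console.log(canonicalTag("https://www.example.com/post/"));
// <link rel="canonical" href="https://www.example.com/post/" />
```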
Technical SEO | pressingtheissue
-
Does Google use the Wayback Machine to determine the age of a site?
I have a site that I had removed from the Wayback Machine because I didn't want old versions to show. However, I noticed that many SEO tools now show a domain age of zero instead of the six years since I registered it. My question is: what do the actual search engines use to determine age when they factor it into the ranking algorithm? By having the site removed from the Wayback Machine, have I made the search engines think it is brand new? Thanks
Technical SEO | FastLearner
-
Using DNS & 301 redirects to gain control over a rogue site
I'd appreciate people's views on the following, please.

We have been approached by a client whose website does not rank #1 for their own distinctive brand name, because that position is taken by a site an affiliate developed for them some years back. The affiliate's site is clearly seen by Google as the definitive site for the brand: it is older, has more links, and is listed in both Yahoo and DMOZ.

The relationship with the affiliate has soured, and the client wants to take control of the affiliate site and have it 301 redirect to the 'real' brand site. The affiliate won't cooperate (funny, that). However, whilst the client doesn't have control over the affiliate's website, they do own the domain.

Given this, it seems that an option is to temporarily create a one-page website on another server, change the DNS settings of the affiliate's domain to point to this, and in turn have that 301 redirect to the client's website. This is a bit of a roundabout approach, but it is necessary because the affiliate won't directly 301 the site they control, despite the client owning it. (As I say, the relationship has soured.)

If you think there's a better alternative approach to this problem (aside from litigation), I'd appreciate hearing it, please. Thanks.
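The interim hop described above can be a very small server whose only job is to 301 every request across to the brand site once the DNS change takes effect (a sketch in Node/TypeScript; the target domain is a placeholder):

```typescript
// Sketch: a one-purpose server that 301-redirects every request on
// the old (affiliate) domain to the same path on the brand domain.
import http from "node:http";

const TARGET = "https://www.brand-site.example"; // placeholder

http
  .createServer((req, res) => {
    res.writeHead(301, { Location: TARGET + (req.url ?? "/") });
    res.end();
  })
  .listen(80);
```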
Technical SEO | SureFire