Angular.js + Crawlers
-
I am working with a site that recently deployed Angular.js. From an SEO standpoint it's a little trickier than we thought. We have deployed a couple of updates to render pages for the bots, but we're not seeing changes in the Moz weekly reports.
When it comes to Angular.js, will the Moz bots read/access the site the same as the other major engines? I'm trying to figure out if our deployments are working or if there's something off in the Moz reports.
Thanks.
-
I am using Prerender to cache and serve static pages to crawler agents, but Moz is not able to crawl my website (http://www.exambazaar.com/). Hence it has a domain authority of 1/100. I have been in touch with Prerender support to find a fix, and have also added dotbot to the list of crawler agents in addition to Prerender's default list, which includes rogerbot. Do you have any suggestions to fix this?
List: https://github.com/prerender/prerender-node/commit/5e9044e3f5c7a3bad536d86d26666c0d868bdfff
Adding dotbot:
prerender.crawlerUserAgents.push('dotbot');
-
Within Prerender you are able to determine which user agents will receive the HTML snapshot. It is here that you can add rogerbot. This allows Moz to crawl the site as if it were Google and receive the HTML snapshot version.
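A minimal sketch of how snapshot middleware like prerender-node typically decides whether to serve the pre-rendered HTML: it matches the request's User-Agent against a list of known crawlers. The list and helper function below are illustrative, not the library's actual internals:

```javascript
// Illustrative crawler list; prerender-node ships its own default list
// (which already includes rogerbot, per the commit linked above).
const crawlerUserAgents = ['googlebot', 'bingbot', 'rogerbot'];

// Add Moz's link-index crawler, as suggested in this thread.
crawlerUserAgents.push('dotbot');

// Case-insensitive substring match against the request's User-Agent header.
function shouldServeSnapshot(userAgent) {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return crawlerUserAgents.some((bot) => ua.includes(bot));
}

console.log(shouldServeSnapshot('Mozilla/5.0 (compatible; DotBot/1.1)')); // true
console.log(shouldServeSnapshot('Mozilla/5.0 (Windows NT 10.0) Chrome/120')); // false
```

A regular browser falls through to the normal client-rendered app; only matched bots get the snapshot.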
Additionally, you can always use the Fetch as Google function within Webmaster Tools to see exactly what is being presented/indexed.
-
With the current direction of web development this is something that needs to be addressed. Google has already confirmed that it is in fact crawling JavaScript-based sites.
Reference:
http://ng-learn.org/2014/05/SEO-Google-crawl-JavaScript/
https://support.google.com/webmasters/answer/174992?hl=en
The solution in this case is an HTML snapshot. You could roll your own, but there are services like https://prerender.io/ that can do it for you.
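The snapshot mechanism rests on a URL mapping from Google's (now-deprecated) AJAX crawling scheme: a crawler rewrites `#!` URLs into `_escaped_fragment_` query URLs, and the server answers those with pre-rendered HTML. A sketch of that mapping (the URLs are placeholders):

```javascript
// Rewrite a "#!" (hash-bang) URL into its "_escaped_fragment_" form,
// which is the URL a spec-following crawler actually requests.
function toEscapedFragmentUrl(hashBangUrl) {
  const i = hashBangUrl.indexOf('#!');
  if (i === -1) return hashBangUrl; // not an AJAX-crawlable URL
  const base = hashBangUrl.slice(0, i);
  const fragment = hashBangUrl.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('http://example.com/#!/products/42'));
// http://example.com/?_escaped_fragment_=%2Fproducts%2F42
```

The server detects the `_escaped_fragment_` parameter and serves the snapshot instead of the JavaScript app.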
This doesn't quite solve the problem for the Moz bot; maybe HTML snapshots do work here, but I haven't tested it yet. Either way, JavaScript is becoming more and more dominant as a language for building websites. I hope Moz recognizes this, because this toolset is awesome and I'd love to continue using it.
-
Is there still no update to this by MOZ?
A number of sites I work on use Angular.js with pushState. Is there a way to point the Moz bot to the escaped-fragment static pages?
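For context: pushState sites have no `#!` in their URLs, so under the AJAX crawling scheme the page instead declares `<meta name="fragment" content="!">` and crawlers re-request the same path with an empty `_escaped_fragment_` parameter. Server-side, the check and snapshot lookup can be sketched like this (the `/snapshots` layout is an assumption for illustration):

```javascript
// A crawler signals it wants the pre-rendered page by appending an empty
// _escaped_fragment_ parameter to the pushState URL it saw.
function wantsSnapshot(url) {
  return url.includes('_escaped_fragment_');
}

// Map the crawler's request back to a static snapshot file
// (hypothetical directory layout, not a standard).
function snapshotPath(url) {
  const [path] = url.split('?');
  return '/snapshots' + (path === '/' ? '/index' : path) + '.html';
}

console.log(wantsSnapshot('/products/42?_escaped_fragment_='));  // true
console.log(snapshotPath('/products/42?_escaped_fragment_='));   // /snapshots/products/42.html
```

Whether the Moz crawler ever issues these `_escaped_fragment_` requests is exactly the open question in this thread.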
-
Static rendering is not cloaking. It's a very common practice that Google actually recommends. The issue with Angular.js is that everything is rendered by code: if you were to look at the source, all the pages would look the same. As a result, MozBot sees every page as duplicate content.
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
It would be nice to see the MozBot act more like Google-bot.
-
What do you mean by "We have deployed a couple updates to render pages for the bots"? That sounds like cloaking.
-
Hello, Josh
Currently our crawlers do not process any kind of JavaScript found on pages (including pages created with Angular.js). I don't know if the major search engines have this restriction or not.
For Moz's crawlers, this means that links created through AJAX or other JavaScript will not be picked up. Links appearing in static content, including those within <noscript> tags, should be noticed and indexed. Be aware that even if you've already made changes exposing links in the page's static content, it can take up to a week for the campaign crawl to catch up.
Hopefully that answered your questions! Let us know if you have any more.