Ajax4SEO and rogerbot crawling
-
Has anyone had any experience with seo4ajax.com and Moz?
The idea is that it points bots to an HTML version of an AJAX page (sounds good) without the need for ugly URLs. However, I don't know how this will work with rogerbot and whether Moz can crawl it. There's a section to add specific user agents, and I've added "rogerbot".
Does anyone know whether this will work? Otherwise it's going to create some complications. I can't check at the moment, as the site is in development and the dev version is currently noindexed.
Thanks!
-
Hi Philip!
This question is a bit intricate.
With AJAX content like this, I know Google's full specification
https://developers.google.com/webmasters/ajax-crawling/docs/specification
indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't yet robust enough to rely on the sitemap alone in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method. See
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
to find out how to create crawlable content using pushState. The only other thing I can think of is to run a crawl test once the site is live. You'll have to remove the noindex tag, but updating robots.txt to allow rogerbot while keeping a wildcard disallow for all other crawlers should still keep the site from being indexed.
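A robots.txt along these lines would do it (just a sketch, adjust to your site: an empty Disallow means rogerbot may fetch everything, and compliant crawlers use the most specific matching User-agent group, so the order of the groups doesn't matter):

```text
# Let Moz's crawler in: an empty Disallow means "allow everything"
User-agent: rogerbot
Disallow:

# Keep all other well-behaved crawlers out while in development
User-agent: *
Disallow: /
```

One caveat: robots.txt stops compliant crawlers from fetching pages, but a URL that's already known can still end up indexed without being crawled, so server-side access control is the safest bet for a dev site.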
Hopefully this will help!
Best,
Sam
Moz Helpster
Related Questions
-
Crawl findings 301 redirects I didn't make?
Hi, I'm new to SEOmoz Pro and loving it so far, but I was confused as to how the 51-page crawl of my site (http://cryptophoneaustralia.com) found so many 301 redirects (18, to be exact). It's a WordPress site, and my .htaccess file has no 301s in it, so I'm not sure where to start looking for why they've shown up in the crawl. I've been building sites for years and use 301s quite regularly, but this site should have none. The site was originally on a subdomain until it was ready to go live; I then moved it to its current domain and ran the Velvet Blues plugin to update all the URLs. I then went through and manually changed the ones in areas where this plugin tends to miss. The site still functions fine; it just bothers me that 301s are being found in the crawl. Thank you.
Moz Pro | TrentDrake
-
The crawl report shows a lot of 404 errors
They are inactive products, and I can't find any active links to these product pages. How can I tell where the crawler found the links?
Moz Pro | shopwcs
-
Crawl Diagnostics 403 on home page...
In the crawl diagnostics it says oursite.com/ returns a 403. It doesn't say what's causing it, but it mentions there's no robots.txt. There is a robots.txt, and I see no problems with it. How can I find out more information about this error?
Moz Pro | martJ
-
Crawl Errors from URL Parameter
Hello, I am seeing this issue in SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors on pages associated with /login. I will see site.com/login?r=http://.... and several duplicate-content issues associated with those URLs. Seeing this, I checked WMT to see if the Google crawler was reporting the same errors. It wasn't. So what I ended up doing was going to robots.txt and disallowing rogerbot. It looks like this:
User-agent: rogerbot
Disallow: /login
However, SEOmoz has crawled again and is still picking up those URLs. Any ideas on how to fix this? Thanks!
Moz Pro | WrightIMC
-
Is the SEOmoz rogerbot crawling subdomains only by links, or also by ID?
I'm new to SEOmoz and have just set up my first campaign. After the first crawl I got quite a few 404 errors due to deleted (spammy) forum threads. I was sure there were no links to these deleted threads, so my question is whether the SEOmoz rogerbot crawls my subdomains only by following links or also by IDs (the forum thread IDs are numbered serially from 1 to x). If rogerbot crawls serially numbered IDs, do I have to be concerned about these 404 errors from Googlebot as well?
Moz Pro | sauspiel
-
Adjusting SEOmoz Crawling Speed
How do you adjust the SEOmoz crawling speed? SEOmoz tried to crawl 10,000 pages in 3 hours and crashed our MySQL server.
Moz Pro | cappuccino89
-
Only one page has been crawled
I have been running a campaign for three weeks now, and the first two crawls were fine, but the last one shows only one page crawled. The subdomain I am tracking is www.cubaenmiami.com. I have everything set up correctly on my site. Regards, Alex
Moz Pro | esencia
-
Blocking all robots except rogerbot
I'm in the process of working with a site under development and wish to run the SEOmoz crawl test before we launch it publicly. Unfortunately, rogerbot is reluctant to crawl the site. I've set my robots.txt to disallow all bots besides rogerbot. It currently looks like this:
User-agent: *
Disallow: /
User-agent: rogerbot
Disallow:
All pages within the site are meta-tagged index,follow. The crawl report says: "Search Engine blocked by robots.txt: Yes". Am I missing something here?
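As an aside for anyone debugging a layout like the one above: Python's built-in urllib.robotparser applies the standard matching rules (a specific User-agent group wins over the * group regardless of order, and an empty Disallow means "allow everything"), so it's a quick way to sanity-check a robots.txt before relying on it. A minimal sketch, using example.com as a stand-in domain:

```python
import urllib.robotparser

# The rules from the question: block everyone, but give rogerbot
# its own group with an empty Disallow (which means "allow all").
rules = """\
User-agent: *
Disallow: /

User-agent: rogerbot
Disallow:
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# rogerbot matches its own group, so it may fetch anything.
print(rp.can_fetch("rogerbot", "http://example.com/page"))   # True
# Other agents fall back to the * group and are blocked.
print(rp.can_fetch("Googlebot", "http://example.com/page"))  # False
```

This only tells you what a rules-compliant crawler should do; treat it as a sanity check on the file, not a guarantee about any particular bot's behavior.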
Moz Pro | ignician