Ajax4SEO and rogerbot crawling
-
Has anyone had any experience with seo4ajax.com and Moz?
The idea is that it serves a bot an HTML snapshot of an AJAX page (sounds good) without the need for ugly URLs. However, I don't know how this will work with rogerbot, or whether Moz can crawl the site at all. There's a section for adding specific user agents, and I've added "rogerbot".
Does anyone know whether this will work? If not, it's going to create some complications. I can't check at the moment, as the site is still in development and the dev version is noindexed.
Thanks!
-
Hi Philip!
This question is a bit tricky.
With AJAX content like this, I know Google's full specification
https://developers.google.com/webmasters/ajax-crawling/docs/specification
indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't yet robust enough to rely on the sitemap alone in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method:
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
That post explains how to create crawlable content using pushState. The only other thing I can think of is to run a crawl test once the site is live. You'll have to remove the noindex tag, but updating robots.txt to allow rogerbot while adding a wildcard disallow for all other crawlers should still keep the site out of the index.
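For the crawl-test idea, a robots.txt along these lines would let Roger in while asking everyone else to stay out. One caveat worth hedging: robots.txt is advisory, so well-behaved crawlers honor it, but it isn't as strong a guarantee against indexing as a noindex tag.

```
# Allow Moz's crawler (an empty Disallow permits everything)
User-agent: rogerbot
Disallow:

# Block all other crawlers
User-agent: *
Disallow: /
```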
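To illustrate the idea, here's a minimal sketch of the pushState approach. The route name and fetch endpoint are hypothetical, not from seo4ajax or the blog post: the page swaps in content via AJAX but records a clean URL that a crawler could also request directly.

```javascript
// Hypothetical sketch of pushState-based navigation for an AJAX page.
// The /products/ route and the fragment endpoint are illustrative only.

// Map an app section to the clean, crawlable URL we want in the address bar.
function cleanUrlFor(section) {
  return "/products/" + encodeURIComponent(section);
}

function loadSection(section) {
  var url = cleanUrlFor(section);
  // In a browser you would fetch the HTML fragment, swap it into the page,
  // then record the clean URL without a full reload:
  //   fetch(url + "?fragment=1")
  //     .then(function (res) { return res.text(); })
  //     .then(function (html) {
  //       document.getElementById("content").innerHTML = html;
  //       history.pushState({ section: section }, "", url);
  //     });
  // Back/forward buttons are handled with a "popstate" listener:
  //   window.addEventListener("popstate", function (e) { /* re-render */ });
  return url; // the URL a crawler (Googlebot or rogerbot) can request directly
}
```

Because each section also exists at its clean URL as server-rendered HTML, a crawler that never executes JavaScript can still reach the content, with no #! or _escaped_fragment_ URLs involved.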
Hopefully this will help!
Best,
Sam
Moz Helpster
Related Questions
-
Crawl Errors and Notices drop to zero
Hi all, After setting up a campaign in Moz, the crawl is successful and it showed the Errors and Warnings in Crawl Diagnostics (each one had about 40-50), but after a few days the number dropped to zero. Only the "Notices" seem to stay normal, with a slight drop since the campaign was set up, but not dropping to zero. I set this campaign up in a colleague's account and the same thing happened shortly after setup. I didn't find any Q&A already posted, so any insight is appreciated!
Moz Pro | | Vanessa120 -
Problem crawling a website with an age verification page.
Hi everyone, I need your help urgently. I need to crawl a website that first has a page where you have to enter your age for verification, and after that you are redirected to the site. My problem is that SEOmoz crawls only that first page, not the whole website. How can I crawl the whole website? Do you need me to post a link to the website? Thank you very much, Catalin
Moz Pro | | catalinmoraru0 -
How long would an SEOmoz crawl usually take for a site with around 4,000 pages?
We are working through optimising a site for one of our clients, and the SEOmoz crawl progress says it has been running since February 8th. It's now almost a week later and it still hasn't finished. The first run took a few days; is there any way of restarting the process?
Moz Pro | | TJSSEO0 -
Campaign status stuck in "Next Crawl in Progress!"
Has anyone else had an issue lately where the campaign status was stuck in **"Next Crawl in Progress!"**? One of our campaigns has been in this status for the past 2 1/2 days, and this has not happened before, as there are only 597 pages for this campaign to crawl. I sent a help ticket to the SEOmoz team, but was wondering if this is an isolated issue or if other community members have also experienced it? Thanks.
Moz Pro | | DRTBA0 -
How to remove duplicate content due to URL parameters from SEOmoz Crawl Diagnostics
Hello all, I'm currently getting back over 8,000 crawl errors for duplicate content pages. It's a Joomla site with VirtueMart, and 95% of the errors are for URL parameters that the customer can use to filter products. Google is handling them fine under Webmaster Tools parameter handling, but it's pretty hard to find the other duplicate content issues in SEOmoz with all of these in the way. All of the problem parameters start with ?product_type_ Should I try to use robots.txt to stop them from being crawled, and if so, what would be the best way to include them in robots.txt? Any help greatly appreciated.
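As a hedged sketch (not an official answer from the thread), rules along these lines would block URLs containing that filter parameter, assuming it always appears in the query string. Wildcard support in robots.txt varies by crawler, so this would need testing against the crawlers you care about:

```
# Block filter-parameter URLs, whether the parameter is first or not
User-agent: *
Disallow: /*?product_type_
Disallow: /*&product_type_
```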
Moz Pro | | dfeg0 -
How long should the weekly crawl take?
Mine started yesterday afternoon and it's now almost 11pm on Sunday. 30+ hours and still not finished (and no progress indicator). 438 pages were quoted as being crawled. That's not normal, right? I have made a bunch of changes based on last week's crawl, so I have been eagerly waiting for this to finish. But 30 hours?... Thanks. Mark
Moz Pro | | MarkWill0 -
Schedule crawls for 2 subdomains every 24 hours
I saw at this link: http://pro.seomoz.org/tools/crawl-test "As a PRO member, you can schedule crawls for 2 subdomains every 24 hours, and you'll get up to 3,000 pages crawled per subdomain." However, I am having trouble finding where to schedule this 24-hour crawl in my Pro dashboard. I did not see the option for this setting in the Crawl Diagnostics tab or in the campaign settings section of the dashboard home page. Can you help? Thanks! Michael
Moz Pro | | texmeix0 -
Initial Crawl Questions
Hello. I just joined and used the Crawl tool. I have many questions and am hoping the community can offer some guidance.
1. I received an Excel file with 3k+ records. Is there a friendly online viewer for the Crawl report? Or is the Excel file the only output?
2. Assuming the Excel file is the only output, the Time Crawled is a number (i.e. 1305798581). I have tried changing the field to a date/time format but that did not work. How can I view the field as a normal date/time such as May 15, 2011 14:02?
3. I use the ™ symbol in my Title. This symbol appears in the output as a few ASCII characters. Is that a concern? Should I remove the trademark symbol from my Title?
4. I am using XenForo forum software. All forum threads automatically receive a Title tag and Meta Description as part of a template. The Crawl Test report shows my Title tag and Meta Description as blank for many threads. I have looked at the source code of several pages and they all have clean Title tags, so I don't understand why the Crawl report doesn't show them. Any ideas?
5. In some cases the HTTP Status Code field shows a result of "3". What does that mean?
6. For every URL in the Crawl report there is an entry in the Referrer field. What exactly is the relationship between these fields? I thought the Crawl tool would inspect every page on the site. If a page doesn't have a referring page, is it missed? What if a page has multiple referring pages? How is that information displayed?
7. Under Google Webmaster Tools > Site Configuration > Settings > Parameter Handling I have the options set to either "Ignore" or "Let Google Decide" for various URL parameters. These are "pages" of my site which should mostly be ignored. For example, a forum may have 7 headers, each one of which can be sorted in ascending or descending order. The only page that matters is the initial page; all the rest should be ignored by Google and the crawl. Presently there are 11 records for many pages which should really have only one record, due to these various sort parameters. Can I configure the crawl so it ignores parameter pages?
I am anxious to get started on my site. I dove into the crawl results and it's just too messy in its present state for me to pull out any actionable data. Any guidance would be appreciated.
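On question 2 above, that Time Crawled value looks like a Unix timestamp (seconds since 1970-01-01 UTC). A quick sketch of the conversion in JavaScript is below; in Excel, a formula along the lines of `=A1/86400 + DATE(1970,1,1)` with a date/time cell format is the usual approach.

```javascript
// Convert a Unix timestamp (in seconds) to a readable UTC date string.
// The sample value is the one quoted in the question.
function timeCrawledToDate(seconds) {
  // JavaScript's Date constructor expects milliseconds, so multiply by 1000.
  return new Date(seconds * 1000).toISOString();
}

console.log(timeCrawledToDate(1305798581)); // "2011-05-19T09:49:41.000Z"
```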
Moz Pro | | RyanKent0