Roger Bot
-
Hi Mozzers,
I have a dev site that I want to run your crawl test on (Roger Bot), but I want to ensure the other engines don't crawl it.
What is the robots.txt rule I need so that only Roger Bot can get in, and not Google etc.?
Please advise
Thanks
Gareth
-
Hi Gareth, your robots.txt should look like this:

User-agent: *
Disallow: /

User-agent: rogerbot
Allow: /

Crawlers obey the most specific User-agent group that matches them, so rogerbot follows its own Allow rule while every other compliant bot is blocked by the wildcard Disallow.
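For anyone who wants to sanity-check a rules file like this before deploying it, here is a minimal sketch using Python's standard-library robots.txt parser (the example.com URL is just a placeholder):

```python
from urllib import robotparser

# Parse the rules from the answer above and check which
# user-agents they admit.
rules = """\
User-agent: *
Disallow: /

User-agent: rogerbot
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("rogerbot", "http://example.com/"))   # rogerbot: allowed
print(rp.can_fetch("Googlebot", "http://example.com/"))  # other bots: blocked
```

The standard-library parser checks specific User-agent groups before falling back to the `*` group, mirroring how compliant crawlers interpret robots.txt.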
Related Questions
-
How do I redeem a tweet from Roger?
Last week, I reached over 100 mozPoints and I've been waiting by my virtual mailbox for days for a tweet from Roger. The benefits table says: Level: Aspirant; mozPoints: 100 - 199; Benefit: A tweet from Roger (the week you reach 100 MozPoints). The past few days have been sad. I've been moping around the office feeling terribly alone, wondering why I've received no Twitter-based recognition for my efforts. The radio has been playing 'Careless Whisper' on repeat and it hasn't stopped raining outside. Can anyone help? (Attached an image of my sadness for reference)
Moz Pro | seanginnaw
Why does the SEOmoz bot consider these duplicate pages?
Hello here, SEOmoz bot has recently marked the following two pages as duplicate: http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3 http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf I don't personally see how these pages can be considered duplicate since their content is quite different. Thoughts??!!
Moz Pro | fablau
Roger keeps telling me my canonical pages are duplicates
I've got a site that's brand spanking new that I'm trying to get the error count down to zero on, and I'm basically there except for this odd problem. Roger got into the site like a naughty puppy a bit too early, before I'd put the canonical tags in, so there were a couple thousand 'duplicate content' errors. I put canonicals in (programmatically, so they appear on every page) and waited a week and sure enough 99% of them went away. However, there's about 50 that are still lingering, and I'm not sure why they're being detected as such. It's an ecommerce site, and the duplicates are being detected on the product page, but why these 50? (there's hundreds of other products that aren't being detected). The URLs that are 'duplicates' look like this according to the crawl report: http://www.site.com/Product-1.aspx http://www.site.com/product-1.aspx And so on. Canonicals are in place, and have been for weeks, and as I said there's hundreds of other pages just like this not having this problem, so I'm finding it odd that these ones won't go away. All I can think of is that Roger is somehow caching stuff from previous crawls? According to the crawl report these duplicates were discovered '1 day ago' but that simply doesn't make sense. It's not a matter of messing up one or two pages on my part either; we made this site to be dynamically generated, and all of the SEO stuff (canonical, etc.) is applied to every single page regardless of what's on it. If anyone can give some insight I'd appreciate it!
Moz Pro | icecarats
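As an aside for anyone debugging the same pattern: URL paths are case-sensitive, so the two addresses in the crawl report are genuinely distinct URLs that can serve identical content. A short sketch (using the question's placeholder URLs) illustrates this; a 301 redirect to a single-case form is a common complement to canonical tags:

```python
from urllib.parse import urlparse

# The two "duplicate" URLs from the crawl report differ only by case.
a = "http://www.site.com/Product-1.aspx"
b = "http://www.site.com/product-1.aspx"

# They are distinct URLs as far as any crawler is concerned...
print(a == b)  # False

# ...but collapse to one address once the path is case-normalized.
print(urlparse(a).path.lower() == urlparse(b).path.lower())  # True
```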
Will SEOMoz offer URL data relating to Bot visits
Does SEOmoz plan, in the future, to report on bot visits for each URL: when they are spidered, and when they appear in, for example, Google's index?
Moz Pro | NeilTompkins
Hug From Roger
I have long since had enough Moz Points to get my hug and STILL have nothing. WHERE'S MY HUG!!!??
Moz Pro | TheGrid
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file:

Disallow: /catalog/product_compare/

Yet Roger is crawling these pages = 1,357 errors. Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled:

example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/

Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | MeltButterySpread
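As a quick sanity check (a sketch with Python's standard-library parser, not rogerbot itself, and example.com standing in for the real domain), that Disallow rule does match the reported URL, so a compliant crawler should skip it:

```python
from urllib import robotparser

# The rule from the question, applied to all user-agents.
rules = ["User-agent: *", "Disallow: /catalog/product_compare/"]
rp = robotparser.RobotFileParser()
rp.parse(rules)

# The URL Roger reportedly pulled, with example.com as a placeholder host.
url = ("http://example.com/catalog/product_compare/add/product/19241/"
       "uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/")

print(rp.can_fetch("rogerbot", url))  # False: the path falls under the Disallow prefix
```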
SEOmoz Bot indexing JSON as content
Hello, We have a bunch of pages that contain local JSON we use to display a slideshow. This JSON has a bunch of <a> links in it. For some reason, these links inside the JSON are being indexed and recognized by the SEOmoz bot, showing up as legit links for the page. One example page this is happening on is: http://www.trendhunter.com/trends/a2591-simplifies-product-logos . Searching for the string '<a' yields 1100+ results (all of which are recognized as links for that page in SEOmoz); however, ~980 of these are JSON code and not actual links on the page. This leads to a lot of invalid links for our site, and a super inflated on-page link count. Is this a bug in the SEOmoz bot? And if not, does Google work the same way?
Moz Pro | trendhunter-159837
Seomoz Spider/Bot Details
Hi All, Our website identifies a list of search engine spiders so that it does not show them session IDs when they come to crawl, preventing the search engines from thinking there is duplicate content all over the place. The SEOmoz crawl has brought up over 20k crawl errors on the dashboard due to session IDs. Could someone please give the details for the SEOmoz bot so that we can add it to the list on the website, so that when it does come to crawl it won't be shown session IDs and generate all these crawl errors. Thanks
Moz Pro | blagger
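For reference, the usual approach is to match the crawler's User-Agent token; Moz's crawler identifies itself as rogerbot. A minimal sketch of the technique (the helper name and bot list are illustrative assumptions, not the asker's real code):

```python
# Suppress session IDs for known crawlers by checking the User-Agent
# header. The token list is illustrative; "rogerbot" is Moz's crawler name.
KNOWN_BOTS = ("googlebot", "bingbot", "rogerbot")

def is_known_bot(user_agent: str) -> bool:
    """Return True if the User-Agent string contains a known crawler token."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in KNOWN_BOTS)

print(is_known_bot("rogerbot/1.0"))                  # True: skip the session ID
print(is_known_bot("Mozilla/5.0 (Windows NT 10.0)"))  # False: normal visitor
```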