Roger Bot
-
Hi Mozzers,
I have a dev site that I want to run your crawl test on (Roger Bot), but I want to ensure the other engines don't crawl it.
What robots.txt lines do I need to make sure only rogerbot can get in and not Google etc.?
Please advise
Thanks
Gareth
-
Hi Gareth, your robots.txt should look like this:

User-agent: *
Disallow: /

User-agent: rogerbot
Allow: /
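If you want to sanity-check the rules before relying on them, here is a quick sketch using Python's standard-library robots.txt parser (the test URL is just a placeholder for any page on your dev site). It should report that rogerbot is allowed in while Googlebot is kept out:

from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /

User-agent: rogerbot
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

test_url = "http://dev.example.com/any-page"  # placeholder dev URL
print(parser.can_fetch("rogerbot", test_url))   # expected: True
print(parser.can_fetch("Googlebot", test_url))  # expected: False

Compliant crawlers follow the most specific matching user-agent group, so the order of the two groups doesn't matter. Keep in mind that robots.txt only keeps out bots that choose to obey it; it won't hide the dev site from anything that ignores the file.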
Related Questions
-
How do I redeem a tweet from Roger?
Last week, I reached over 100 mozPoints and I've been waiting by my virtual mailbox for days for a tweet from Roger. The benefits table says: Level: Aspirant | MozPoints: 100 - 199 | Benefit: A tweet from Roger (the week you reach 100 MozPoints). The past few days have been sad. I've been moping around the office feeling terribly alone, wondering why I've received no Twitter-based recognition for my efforts. The radio has been playing 'Careless Whisper' on repeat and it hasn't stopped raining outside. Can anyone help? (Attached an image of my sadness for reference)
Moz Pro | | seanginnaw0 -
Have a campaign, but it only states 1 page has been crawled by SEOmoz bots. What needs to be done to have all the pages crawled?
We have a campaign running for a client in SEOmoz and only 1 page has been crawled per SEOmoz' data. There are many pages in the site and a new blog with more and more articles posted each month, yet Moz is not crawling anything, aside from maybe the Home page. The odd thing is, Moz is reporting more data on all the other inner pages though for errors, duplicate content, etc... What should we do so all the pages get crawled by Moz? I don't want to delete and start over as we followed all the steps properly when setting up. Thank you for any tips here.
Moz Pro | | WhiteboardCreations0 -
Where is my Hug from Roger?
I just remembered that when I reached 50 mozPoints I was supposed to receive a hug from Roger. So, I need my hug!
Moz Pro | | ditoroin2 -
What Are Roger's Super Powers?
I ended up with a box of SEOMOZ swag. (Thanks! As to how this came to pass...I shall draw a veil, as they used to say in Victorian novels.) My upstairs neighbour Max, age 5, enjoys a rich fantasy life and is very much into superheroes and costumes. Naturally, he ended up with a lot of the Roger stickers. Alas, I was unable to answer all of Max's questions. When he asked: "What does Roger do?" I replied: "Roger makes your computer work." Pretty good, I thought. But then Max asked: "What does the antenna do?" I was kind of stumped. Then it got worse. Max asked what Roger's superpowers are and if he could beat Spiderman. I tried to change the subject. Max wasn't impressed. What are the answers? Enquiring five year old minds want to know!
Moz Pro | | DanielFreedman6 -
Roger keeps telling me my canonical pages are duplicates
I've got a site that's brand spanking new that I'm trying to get the error count down to zero on, and I'm basically there except for this odd problem. Roger got into the site like a naughty puppy a bit too early, before I'd put the canonical tags in, so there were a couple thousand 'duplicate content' errors. I put canonicals in (programmatically, so they appear on every page) and waited a week and sure enough 99% of them went away. However, there's about 50 that are still lingering, and I'm not sure why they're being detected as such. It's an ecommerce site, and the duplicates are being detected on the product page, but why these 50? (there's hundreds of other products that aren't being detected). The URLs that are 'duplicates' look like this according to the crawl report: http://www.site.com/Product-1.aspx http://www.site.com/product-1.aspx And so on. Canonicals are in place, and have been for weeks, and as I said there's hundreds of other pages just like this not having this problem, so I'm finding it odd that these ones won't go away. All I can think of is that Roger is somehow caching stuff from previous crawls? According to the crawl report these duplicates were discovered '1 day ago' but that simply doesn't make sense. It's not a matter of messing up one or two pages on my part either; we made this site to be dynamically generated, and all of the SEO stuff (canonical, etc.) is applied to every single page regardless of what's on it. If anyone can give some insight I'd appreciate it!
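Not an answer, just a thought on the pattern described above: if (and it's only a guess, since the actual code isn't shown here) the canonical tag is built from the requested URL as-is, then /Product-1.aspx and /product-1.aspx each declare themselves canonical and still look like two separate pages to a crawler. The usual fix is to normalise the casing when building the canonical. The site runs on ASP.NET, so this is only a rough Python sketch of the idea (site.com and the helper name are placeholders):

from urllib.parse import urlsplit, urlunsplit

def canonical_url(request_url):
    # Lower-case host and path so case variants collapse to one canonical form.
    # Query string and fragment are dropped here for simplicity.
    parts = urlsplit(request_url)
    return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path.lower(), "", ""))

print(canonical_url("http://www.site.com/Product-1.aspx"))
print(canonical_url("http://www.site.com/product-1.aspx"))
# Both print http://www.site.com/product-1.aspx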
Moz Pro | | icecarats0 -
Will SEOMoz offer URL data relating to Bot visits
Does SEOmoz plan, in the future, to report on bot visits for each URL, when they are spidered, and when they appear in, for example, Google's index?
Moz Pro | | NeilTompkins0 -
SEOmoz bot and "noindex"
As a recent newbie to SEOmoz, I've been implementing some suggestions and doing a general tidy-up. I removed URLs from our robots.txt, and rolled out instead the noindex meta tag on pages we don't want indexed. But I was surprised to see issues flagged in the last crawl by the Moz bot for pages that have this meta tag. Does the SEOmoz bot not respect this tag? Just want to make sure I've implemented it correctly, so the Google bot does ignore it. The meta tag syntax is standard and it is placed below the title tag. cheers Steve
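A quick way to double-check the implementation is to fetch a page and print whatever robots meta tag comes back, for example with this small standard-library Python sketch (the page URL is a placeholder). Also worth noting: noindex governs indexing, not crawling, so a crawler can still visit and report on those pages; only a robots.txt disallow actually stops a compliant bot from fetching them.

from html.parser import HTMLParser
from urllib.request import urlopen

class RobotsMetaFinder(HTMLParser):
    # Collects the content of any meta robots tag it sees.
    def __init__(self):
        super().__init__()
        self.robots_content = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots_content = attrs.get("content")

page_url = "http://www.example.com/some-page"  # placeholder
finder = RobotsMetaFinder()
finder.feed(urlopen(page_url).read().decode("utf-8", errors="ignore"))
print(finder.robots_content)  # expect "noindex" (or similar) if the tag is in place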
Moz Pro | | sjr4x40 -
Help with Roger finding phantom links
It's Monday and Roger has done another crawl, and now I have a couple of issues: I have two pages showing 404->302 or 500 because these links do not exist. I have to fix the 500, but the 404 is trapped correctly. http://www.oznappies.com/nappies.faq & http://www.oznappies.com/store/value-packs/\ The issue is that when I do a site scan there is no anchor text that contains these links. So, what I would like to find out is where Roger is finding them. I cannot see anywhere in the Crawl Report that tells me the origin of these links. I also created a blog on Tumblr, and now every tag and RSS feed entry is producing a duplicate content error in the crawl stats. I cannot see anywhere in Tumblr to fix this issue. Any ideas?
Moz Pro | | oznappies0