Will robots.txt override a Firewall for Rogerbot?
-
Hey everybody.
Our server guy, who is sort of difficult, has put these ridiculous security measures in place which lock people out of our website all the time. Basically, if I ping the website too many times I get locked out, and that's just me on my own, doing general research.
Regardless, all of our audits are coming back with 5xx errors, and I asked if we could add a rule for rogerbot to the robots.txt. He seems resistant to the idea and just wants to adjust the settings on his firewall...
Does anybody know if putting that in the robots.txt will override the firewall/ping defense he has put in place? I personally think what he has done is WAY overkill, but that is beside the point.
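For reference, the kind of robots.txt entry being discussed would be a short rogerbot-specific rule like the one below. The delay value is purely illustrative, not a recommendation:

```
User-agent: rogerbot
Crawl-Delay: 5
```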
Thanks everybody.
-
So I spoke with our host. Basically he has been adjusting the port flood settings because of a DDoS attack we had roughly 9 months ago.
We have roughly 1,000 domains on the same server, all running WordPress. I went through and changed the nameservers on around 800 of them to bring that number down. In the long run, I want to bring us to 1 website. There is no reason for us to have 200, or 5 for that matter. They are redundant websites that were built simply to bolster our main website through blackhat tactics.
Our host stated that the only way to keep things kosher would be to switch all 1,000 domains to a new server every 2 years because once the "hackers" find out that there is a cluster of 1,000 domains in the same place, they will blast it.
Anyway, I'm working on cutting the domains in the safest way possible, and switching servers as soon as possible!
-
Yes it does, thank you!
When I asked our dev (who also hosts our domains) to adjust the settings for rogerbot, he said:
"3 pages per second is basically me undoing the portflood setting completely, thus rendering the site very insecure to brute force attempts, which would inevitably drive the server load very high in anywhere from 3-24 hours."
I am glad that he is concerned about the security of our website. At the same time, I find it hard to believe we need anything near this intense. We do not have an online store, and we do not collect credit card data or anything like that.
It seems overkill...
-
Unfortunately, no. The security he has in place will block the crawler's access before it ever gets a chance to see the robots.txt file.
If you have a dev who's making business-limiting decisions, you have a major problem and need to address that first.
Hope that helps?
Paul
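To illustrate Paul's point: robots.txt is advisory and is parsed by the crawler itself, so a server-side firewall that drops the bot's connections wins no matter what the file says. Here is a minimal sketch, using Python's standard-library `urllib.robotparser`, of how a compliant crawler would read a Crawl-delay directive if it could fetch the file at all. The rogerbot rule and the delay value of 5 are hypothetical, not Moz's actual settings:

```python
from urllib import robotparser

# Hypothetical robots.txt content with a rogerbot-specific rule.
ROBOTS_TXT = """\
User-agent: rogerbot
Crawl-delay: 5
"""

rp = robotparser.RobotFileParser()
rp.modified()  # mark the rules as freshly loaded so queries are answered
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler reads the delay and throttles itself accordingly;
# nothing here is enforced by the server.
print(rp.crawl_delay("rogerbot"))         # -> 5
print(rp.can_fetch("rogerbot", "/page"))  # -> True (nothing is disallowed)
```

If the firewall blocks the request for /robots.txt itself, the crawler never reaches this parsing step, which is why the audits come back with 5xx errors instead of simply crawling more slowly.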