Robots.txt was set to disallow for 14 days
-
We updated our website and accidentally overwrote our robots.txt file with a version that blocked all crawling ("Disallow: /"). We realized the issue 14 days later, after our organic visits began to drop significantly, and quickly replaced the robots.txt file with the correct version so crawling could resume. Given the impact on our organic visits, we have a few questions, and any help would be greatly appreciated -
Will the site get back to its original status/rankings?
If so, how long would that take?
Is there anything we can do to speed up the process?
Thanks
-
Thank you for the response.
We have been watching over the past week, and there has been only a very small change in the number of indexed URLs in GSC and no change in the stats on the Moz dashboard.
Is that normal? How often does Moz update its stats?
-
This is commonly done intentionally when launching a site on a new domain. Once the disallow is removed, the general practice is to request reindexing of the root domain page in GSC (and possibly some key pages with paths not likely to be found through navigation), and also to submit (or re-submit) your sitemaps directly in GSC (even though they may/should also be referenced in your robots.txt file).
I'm not sure how long you can expect the search engines to take, since your situation is a bit unusual: the site was already indexed and then temporarily disallowed. Just guessing based on launching brand-new domains, getting re-indexed should be quick (perhaps a few days), but regaining previous ranking positions might be slower (I'm unsure of the timing on this).
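For reference, the difference between a fully blocking robots.txt and a typical permissive one is only a single character in the Disallow rule, which is exactly how this kind of accident happens during a site update. A sketch (paths and sitemap URL are hypothetical):

```
# The accidental version: blocks every crawler from the entire site
User-agent: *
Disallow: /

# A typical corrected version: an empty Disallow allows everything,
# and the sitemap reference helps crawlers rediscover pages quickly
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

Double-checking the live file at yourdomain.com/robots.txt after deploys is a cheap safeguard against a repeat.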
Related Questions
-
Lost homepage rankings for one day
We had a strange thing happen to our website. We have a website that has ranked for top keywords for some years. Last week we lost all of our rankings for one day. The strange thing was that the rankings were only lost on the homepage; the homepage didn't rank anymore. Even when I googled 'homepage.com', Google showed 'homepage.com/page' and not the homepage. After some reading I checked the following: no-index directives in the source code (no results), the HTTP response code (200 OK status), and downtime in Pingdom (no downtime). With no rankings on the homepage I was losing 90% of the traffic, so I went to GWT and did a Fetch as Google request; the website looked okay. After that I submitted a new indexation request, and the website was back in Google with all the old rankings. But what happened is still a question for me. Could this be a hack, a WordPress problem, or a real-time Penguin hit? I also read things about Google flux, but it hadn't happened before and it was just the homepage, with no index at all. What happened?
Intermediate & Advanced SEO | | remkoallertz0 -
Setting up the right geo-targeting/language-targeting settings without breaking the SEO
Hello the great Moz Community! Gev here from BetConstruct, a leading gaming and betting software provider. Our company website is performing great in the SERPs. We have 20+ dedicated pages for our 20+ software products, an events section, and different landing pages for different purposes. We also run a blog section, a press section, and more. Our website's default language is EN. Four months ago we opened the /ru and /es versions of the website. I set the correct hreflang tags, redirects, etc., and generated correct sitemaps, so the translated versions started to rank normally.

Now our marketing team is requesting different things to be done on the website, and I would love to discuss them with you before implementing. There are different cases. For example: they have created a landing page under the URL betconstruct.com/usa-home and want me to set that page as the default website page (i.e., the homepage) if the user visits our website from a US-based IP. This can be done in two different ways:

1. I can set the /usa-home page as the default in my CMS when the visitor is from the US, and the address will be just betconstruct.com (without /usa-home). In this case the same URL (betconstruct.com) will serve different content, but only for the homepage.
2. I can check the visitor's IP and, if he is from the US, redirect him to betconstruct.com/usa-home. In this case the user can click on the logo, go to the homepage betconstruct.com, and see the original homepage.

Both cases seem dangerous: in the first case I am not sure what Google will think when it sees a different homepage from different IPs, and in the second case I am not sure what that redirect should be (301, 303, 302, etc.), because Google may think I don't have a homepage and that my homepage redirects to a secondary page like /usa-home.

After digging a lot I realized that my team is requesting a strange setup: they want both language targeting (/es, /ru) and country targeting (which should ideally be something like /us), but instead of creating /us, they want it to replace /en (only for the USA). Please let me know what would be the best way to implement this. Should we create a separate version of our website for the USA under /us/* URLs? In that case, is it OK to have /en as a language version and /us for country targeting? What hreflangs should we use? I know this is a rare case and it may be difficult to follow, but any help will be much appreciated! Thank you! Best,
Intermediate & Advanced SEO | | betconstruct
Gev0 -
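For what it's worth, the /en-plus-/us combination described in that question can be expressed directly in hreflang, since hreflang values may be a language alone or a language plus a region. A hypothetical sketch (the example.com URLs and directory layout are illustrative, not the asker's actual structure):

```
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="es" href="https://example.com/es/" />
<link rel="alternate" hreflang="ru" href="https://example.com/ru/" />
<link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/en/" />
```

Here /en remains the generic English version (and the x-default fallback), while en-us targets US visitors with a distinct URL, which avoids serving different content on the same URL by IP.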
Question about Syntax in Robots.txt
So if I want to block any URL from being indexed that contains a particular parameter, what is the best way to put this in the robots.txt file? Currently I have:

Disallow: /attachment_id

where "attachment_id" is the parameter. The problem is I still see these URLs indexed, and this has been in the robots.txt for over a month now. I am wondering if I should just do "Disallow: attachment_id" or "Disallow: attachment_id=", but figured I would ask you guys first. Thanks!
Intermediate & Advanced SEO | | DRSearchEngOpt0 -
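On the syntax point: robots.txt rules match from the start of the URL path, so "Disallow: /attachment_id" only blocks URLs whose path literally begins with /attachment_id. To match a parameter that appears mid-URL, the major engines support a `*` wildcard (not part of the original robots.txt standard, but honored by Google and Bing). A sketch, assuming attachment_id appears as a query-string parameter:

```
User-agent: *
# Block any URL whose query string contains the attachment_id parameter,
# whether it is the first parameter (?) or a later one (&)
Disallow: /*?attachment_id=
Disallow: /*&attachment_id=
```

Also note that robots.txt blocks crawling, not indexing: URLs that were indexed before the rule was added can linger in the index for a while, which would explain still seeing them a month later.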
Using Meta Header vs Robots.txt
Hey Mozzers, I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes. For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt (I hope that makes sense). I am able to identify problematic pages and edit the meta head, so I can add a robots noindex tag on any page that is causing these duplicate issues. My question is: is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues. Thoughts?
Intermediate & Advanced SEO | | evan890 -
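If the per-page approach wins out, the usual mechanism is a robots meta element in the page head; the `noindex,follow` combination keeps the page out of the index while still letting crawlers follow its links into the rest of the facet structure. A minimal sketch:

```
<meta name="robots" content="noindex,follow">
```

One caveat worth stating plainly: for the tag to work, the page must remain crawlable. If the same URL is also disallowed in robots.txt, the bot never fetches the page and never sees the noindex directive.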
Can I use the same set of social media accounts on two sites?
I have a client whose company name / main site name is not his name. All his social accounts connected to his site are in his name. The site is verified with Google Places, etc. Now he has asked me to create a site for him under his own name, with some similar info and a lot of new info. The million-dollar question is: do I use the same social media accounts on the new site? Facebook, Twitter, YouTube, etc.? Will that hurt the organic rankings of the main site? I've seen a similar situation before: someone who works at a large corporation is mentioned on the main site but has their own personal profile site just about them, where they can go more in depth about the things they are doing. If the other set of social accounts were in the company name, it would be a no-brainer to create new social accounts in his name. The issue is that the main company's social accounts are already in his name.
Intermediate & Advanced SEO | | markpine3600 -
Issue with Robots.txt file blocking meta description
Hi, can you please tell me why the following error is showing up in the SERPs for a website that was re-launched just 7 days ago with new pages (301 redirects are built in)?

"A description for this result is not available because of this site's robots.txt – learn more."

Once we noticed it yesterday, we made some changes to the file and reduced the number of items in the disallow list. Here is the current robots.txt file:

```
# XML Sitemap & Google News Feeds version 4.2 - http://status301.net/wordpress-plugins/xml-sitemap-feed/
Sitemap: http://www.website.com/sitemap.xml
Sitemap: http://www.website.com/sitemap-news.xml
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
```

Other notes: the site was developed in WordPress and uses the following plugins: WooCommerce, All-in-One SEO Pack, Google Analytics for WordPress, and XML Sitemap & Google News Feeds. Currently, in the SERPs, it keeps jumping back and forth between showing the meta description for the www domain and showing the error message above. Originally, WP Super Cache was installed; it has since been deactivated, removed from wp-config.php, and deleted permanently. One other thing to note: we noticed yesterday that an old XML sitemap was still on file, which we have since removed, and we resubmitted a new one via WMT. Also, the old pages are still showing up in the SERPs. Could it just be that this will take time for Google to review the new sitemap and re-index the new site? If so, what kind of timeframes are you seeing these days for new pages to show up in the SERPs? Days, weeks?

Thanks, Erin
Intermediate & Advanced SEO | | HiddenPeak0 -
SEOMOZ crawler is still crawling a subdomain despite disallow
This is for our client with a subdomain. We only want to analyze their main website, as this is the one we want to SEO. The subdomain is not optimized, so we know it's bound to have lots of errors. We added the disallow code when we started and it was working fine: we only saw the errors for the main domain and were able to fix them. However, just a month ago the errors and warnings spiked, and the errors we saw were for the subdomain. As far as our web guys are concerned, the disallow code is still there and was not touched:

User-agent: rogerbot
Disallow: /

We would like to know if there's anything we might have unintentionally changed, or something we need to do so that the SEOMOZ crawler will stop going through the subdomain. Any help is greatly appreciated!
Intermediate & Advanced SEO | | TheNorthernOffice790 -
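One detail worth double-checking in situations like this: robots.txt is scoped per host. A rule in the main domain's file does not apply to the subdomain, because crawlers fetch a separate robots.txt from each hostname's root. So the rogerbot block has to live on the subdomain itself, along these lines (hostname is hypothetical):

```
# Must be served at http://subdomain.example.com/robots.txt,
# not at http://www.example.com/robots.txt
User-agent: rogerbot
Disallow: /
```

If the disallow was only ever added to the main domain's file, that would explain the crawler reaching the subdomain despite the rule appearing intact.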
How long will Google take to read my robots.txt after updating?
I updated www.egrecia.es/robots.txt two weeks ago and I still haven't solved the Duplicate Title and Content issues on the website. The Google SERPs don't show those URLs any more, but neither SEOMOZ Crawl Errors nor Google Webmaster Tools recognizes the change. How long will it take?
Intermediate & Advanced SEO | | Tintanus0
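While waiting for the tools to catch up, you can verify independently which URLs the updated file actually blocks; Python's standard library ships a robots.txt parser. A small sketch using a hypothetical rule set (in practice you would use set_url() and read() against the live file instead of a hardcoded list):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents, line by line; RobotFileParser.set_url()
# plus .read() would fetch and parse the live file instead.
rules = [
    "User-agent: *",
    "Disallow: /wp-admin/",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# Check whether specific URLs are crawlable under these rules
print(parser.can_fetch("*", "https://www.example.com/wp-admin/page"))  # False
print(parser.can_fetch("*", "https://www.example.com/blog/post"))      # True
```

This confirms what any compliant crawler should conclude from the file; the tools' dashboards can still lag behind by days or weeks even when the rules themselves are correct.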