Googlebot stopped crawling
-
Hi all, one of my websites has stopped showing in the SERPs. After analysing it in Webmaster Tools, I found that Googlebot is not able to crawl it, although it was working fine a few days ago. I investigated whether it had been penalised, but found no notification. I checked robots.txt for noindex/nofollow issues and so on, but all seems to be OK. I resubmitted the sitemap in Webmaster Tools; it crawled 250 pages out of 500, but the site is still not showing in Google's SERPs. In Bing it is fine.
Please suggest the best possible solutions to try.
Thx
-
This might be a shot in the dark, not knowing much about your site, but can you check in Google Webmaster Tools to see if you accidentally removed your website using the Remove URLs tool? I know of someone who did this when copying and pasting a URL: they accidentally copied only their main website address rather than the full URL (oops!), and their site dropped out of Google's SERPs rather quickly. Just a thought...
-
Very hard to say without more details. Does your site have unique, high quality content? If it's just duplicate content, Google may crawl it but won't necessarily show it in the SERPs.
Also, what does your backlink profile look like? Google allocates crawl budget based on your PageRank, so if Google isn't crawling all your pages, then you will want to acquire more external backlinks.
-
There may be many technical things going on with your robots.txt file, noindex tags, etc.
But where I would start first is with your website hosting company.
My guess, not having seen your site, is that you may be hosted with a low-cost hosting provider, and you are experiencing downtime at random times that is affecting Google's ability to crawl your site.
The other clue that points me to your web hosting service is that Google tried to crawl 500 pages, but it was only able to handle 250.
What I would do is first look and see if your site is timing out under heavy loads / lots of visitors.
That's most likely the culprit.
I'd subscribe to a free site monitoring service that will ping the site every 5 minutes or so, and email you if it goes down.
Hope this helps!
-- Jeff
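To make Jeff's suggestion concrete, here is a minimal sketch of the kind of availability probe such a monitoring service runs. Everything here is an assumption for illustration: the URL is a placeholder, and a real monitor would loop with a five-minute sleep and email you on failure rather than just printing.

```python
import urllib.request
import urllib.error

def check_site(url, timeout=10):
    """Single availability probe: returns (is_up, detail)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, "HTTP %d" % resp.status
    except urllib.error.HTTPError as e:
        return False, "HTTP %d" % e.code   # server answered, but with an error status
    except (urllib.error.URLError, TimeoutError) as e:
        return False, str(e)               # DNS failure, refused connection, or timeout

# Placeholder host that cannot resolve, so the probe reports the site as down.
up, detail = check_site("http://nonexistent.invalid/")
print(up, detail)
```

A cron job running this every 5 minutes, plus an `smtplib` call whenever `up` is False, reproduces what the free monitoring services do.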
Related Questions
-
Crawl anomaly issue on Search Console
Has anyone checked the crawl anomaly issue under the Index section of Search Console? We recently moved to a new site and I'm seeing a huge list of excluded URLs which are classified as crawl anomalies (they all lead to a 404 page). Does anyone know if we need to 301 redirect all the links? Is there a smarter / more efficient way to deal with them, like setting up canonical links (I thought that's what they're used for, isn't it)? Thanks!
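For what it's worth, if you decide the excluded URLs deserve 301s, the old-to-new mapping can often be built mechanically rather than by hand. A rough sketch, with entirely hypothetical URLs, that pairs old and new URLs by their trailing slug:

```python
from urllib.parse import urlparse

def build_redirect_map(old_urls, new_urls):
    """Match old URLs to new ones by their last path segment (slug).
    Old URLs with no match get no redirect entry."""
    new_by_slug = {urlparse(u).path.rstrip("/").rsplit("/", 1)[-1]: u for u in new_urls}
    mapping = {}
    for old in old_urls:
        slug = urlparse(old).path.rstrip("/").rsplit("/", 1)[-1]
        if slug in new_by_slug:
            mapping[old] = new_by_slug[slug]
    return mapping

old = ["https://old.example.com/blog/widget-guide", "https://old.example.com/about-us"]
new = ["https://www.example.com/articles/widget-guide"]
# Only widget-guide has an equivalent; about-us would stay a 404 (or a deliberate 410).
print(build_redirect_map(old, new))
```

Anything left unmatched is a candidate for a deliberate 404/410 rather than a redirect; canonical tags won't help here, since a 404 page has no duplicate content to canonicalize.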
White Hat / Black Hat SEO | greenshinenewenergy
-
Do Google and other search engines crawl meta tags if we set them using React.js?
We have a site which has only one URL; all the other pages are its components rather than separate pages. Whichever page we click on is rendered with React.js, and the meta title and meta description change accordingly. Will using React.js this way be good or bad for SEO? Website: http://www.mantistechnologies.com/
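One quick test of what a non-rendering crawler sees is to inspect the raw HTML before any JavaScript executes. A sketch with hypothetical markup: if React injects the title and description only client-side, they are simply absent from the raw source:

```python
import re

def extract_meta(html):
    """Pull <title> and meta description out of raw (unrendered) HTML,
    i.e. what a crawler that does not execute JavaScript would see."""
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    desc = re.search(r'<meta\s+name=["\']description["\']\s+content=["\'](.*?)["\']',
                     html, re.I)
    return (title.group(1).strip() if title else None,
            desc.group(1) if desc else None)

# Server-rendered page: the tags are present in the raw source.
ssr = '<html><head><title>Widgets</title><meta name="description" content="All about widgets"></head></html>'
# Client-rendered shell: an empty root div and no meta tags yet.
csr = '<html><head></head><body><div id="root"></div></body></html>'
print(extract_meta(ssr))  # ('Widgets', 'All about widgets')
print(extract_meta(csr))  # (None, None)
```

Googlebot does render JavaScript these days, but rendering is deferred, and many other crawlers never render at all, so server-side rendering or pre-rendering of the meta tags is the safer bet.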
White Hat / Black Hat SEO | RobinJA
-
Googlebot crawling AJAX website does not always use _escaped_fragment_
Hi, I started to investigate the Googlebot crawl log of our website, and it appears that there is no 1:1 correlation between URLs crawled with _escaped_fragment_ and without it. My expectation is that each time Google crawls a URL, a minute or so later it is supposed to crawl the same URL using _escaped_fragment_. For example, for https://my_web_site/some_slug: Googlebot crawled this URL 17 times in July (http://i.imgur.com/sA141O0.jpg), but made only 3 additional crawls using _escaped_fragment_ (http://i.imgur.com/sOQjyPU.jpg). Do you have any idea if this behavior is normal? Thanks, Yohay
White Hat / Black Hat SEO | yohayg
-
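Background that may help when reading such logs: under Google's old AJAX crawling scheme (deprecated in 2015), a #! URL maps to an _escaped_fragment_ URL roughly as in this sketch (the example URLs and the #!/profile fragment are hypothetical):

```python
from urllib.parse import quote

def to_escaped_fragment(url):
    """Convert a #! (hashbang) URL to the _escaped_fragment_ form that
    crawlers using the old AJAX crawling scheme request."""
    if "#!" not in url:
        # Pages opting in via <meta name="fragment" content="!"> are
        # fetched as url?_escaped_fragment_= with an empty value.
        sep = "&" if "?" in url else "?"
        return url + sep + "_escaped_fragment_="
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

print(to_escaped_fragment("https://my_web_site/some_slug#!/profile"))
```

Note that the scheme never guaranteed a 1:1 crawl ratio; Google fetched the _escaped_fragment_ snapshot on its own schedule, so fewer snapshot fetches than plain fetches was normal.
-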
Moz was unable to crawl your site? Redirect loop issue
I am getting this message for my site: "Moz was unable to crawl your site on Jul 25, 2017." It says Moz was unable to access the homepage due to a redirect loop: https://kuzyklaw.com/. The site is working fine and was last crawled on 22nd July, so I am not sure why this issue has come up. When I checked the website with a Chrome extension, it says: "The server has previously indicated this domain should always be accessed via HTTPS (HSTS protocol). Chrome has cached this internally, and did not connect to any server for this redirect. Chrome reports this redirect as a '307 Internal Redirect'; however, this probably would have been a '301 Permanent redirect' originally. You can verify this by clearing your browser cache and visiting the original URL again." Not sure if this is the actual issue. The site was migrated to HTTPS just 5 days ago, so maybe it will resolve automatically. Can anybody from the Moz team help me with this?
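In case it helps with diagnosis, a redirect loop like the one Chrome describes is just a cycle in the server's redirect rules. A minimal sketch, where the `rules` dict is a hypothetical stand-in for live responses (a real check would issue requests and follow each Location header):

```python
def find_redirect_loop(start_url, redirects, max_hops=10):
    """Walk a redirect map and return the chain up to the first repeated
    URL, or None if the chain terminates without a cycle."""
    seen, url = [], start_url
    while url in redirects and len(seen) < max_hops:
        if url in seen:
            return seen + [url]  # this URL repeats: a loop
        seen.append(url)
        url = redirects[url]
    return None

# Hypothetical misconfigured rules that bounce between HTTP and HTTPS:
rules = {
    "http://kuzyklaw.com/": "https://kuzyklaw.com/",
    "https://kuzyklaw.com/": "http://kuzyklaw.com/",
}
print(find_redirect_loop("http://kuzyklaw.com/", rules))
```

With HSTS in play, also make sure the server's HTTP-to-HTTPS rule doesn't coexist with a stale HTTPS-to-HTTP rule; that pairing produces exactly this kind of cycle.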
White Hat / Black Hat SEO | CustomCreatives
-
Excluding Googlebot From AB Test - Acceptable Sample Size To Negate Cloaking Risk?
My company uses a proprietary AB testing platform. We are testing out an entirely new experience on our product pages, but it is not optimized for SEO. The testing framework will not show the challenger recipe to search bots. With that being said, to avoid any risks of cloaking, what is an acceptable sample size (or percentage) of traffic to funnel into this test?
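Mechanically, the bot exclusion you describe usually amounts to bucketing like the sketch below (the signature list and hashing are assumptions for illustration, not your platform's actual code): known crawlers always receive the control recipe, while human visitors are split by a stable hash of their ID:

```python
import hashlib

BOT_SIGNATURES = ("googlebot", "bingbot", "baiduspider", "yandexbot")

def assign_bucket(user_agent, visitor_id, challenger_share=0.5):
    """Deterministically bucket a visitor; known bots always get control,
    so they never see the unoptimized challenger recipe."""
    if any(sig in user_agent.lower() for sig in BOT_SIGNATURES):
        return "control"
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    slot = int(digest, 16) % 100  # stable 0-99 slot per visitor
    return "challenger" if slot < challenger_share * 100 else "control"

print(assign_bucket("Mozilla/5.0 (compatible; Googlebot/2.1)", "abc123"))  # control
```

Since bots are excluded regardless of the split, the percentage you funnel in is mostly a statistical-power question; keeping the test short-lived reduces whatever residual cloaking risk remains.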
White Hat / Black Hat SEO | edmundsseo
-
Why have bots (including Googlebot) categorized my website as adult?
How do bots decide whether a website is adult? For example, I have a gifting portal, but strangely it is categorized here as 'adult'. Also, my Google AdSense application to run ads on my site was rejected; I have a feeling this is because Googlebot categorized my site as adult, and there is a good chance that other bots also consider it an adult website rather than a gifting website. Can anyone please go through the site and tell me why this is happening? Thanks in advance.
White Hat / Black Hat SEO | rahulkan
-
"Via this intermediate link" - how do I stop the madness?
Hi, -1- I have an old site which had a manual spam action placed against it several years ago. This is the corporate site, and unfortunately its name is printed on all the business cards etc., so I am unable to get rid of it entirely. -2- I created a brand new site on a new domain, for which only a little white-hat SEO marketing has been done. Everything was going well until last week, when I dropped from the bottom of page one to the top of page 11 for my keyword. -3- I changed the old site (the one with the manual spam action) to mimic the look of the first page of the new domain, and the main menu items on this first page link to the appropriate sections of the new site (About Us, etc.). On this page I am using the following: <link rel="canonical" href="http://www.mynewsite.com" /> and I am linking like this: <li><a href="http://www.mynewsite.com/about/" rel="nofollow">ABOUT US</a></li>. Using this approach I was hoping I was doing the right thing and not passing along any link juice, good or bad. However, when I view Webmaster Tools > Links to your site, I find 1000+ links from my old site, and when I click on it I see all the spammy links that my old site was banned for, accompanied by the header "Via this intermediate link: myoldsite.com". Can someone please shed some light on what I should be doing, and on whether these links are even affecting my new site? Something tells me they are, but how do I resolve this issue? Thanks in advance.
White Hat / Black Hat SEO | robdob12
-
How to resolve: Googlebot found an extremely high number of URLs
Hi, We got this message from Google Webmaster Tools: "Googlebot found an extremely high number of URLs on your site." The sample URLs provided by Google are all either noindex or have a canonical: http://www.myntra.com/nike-stylish-show-caps-sweaters http://www.myntra.com/backpacks/f-gear/f-gear-unisex-black-&-purple-calvin-backpack/162453/buy?src=tn&nav_id=541 http://www.myntra.com/kurtas/alma/alma-women-blue-floral-printed-kurta/85178/buy?nav_id=625 We have also specified the parameters on these URLs as a representative URL in Google Webmaster Tools > URL Parameters. Your comments on how to resolve this issue would be appreciated. Thank You, Kaushal Thakkar
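As a sketch of what the URL-parameters setting expresses (the parameter names `src` and `nav_id` are taken from the sample URLs; treat the code itself as illustrative): stripping the tracking parameters collapses each variant to one representative URL:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

TRACKING_PARAMS = {"src", "nav_id"}  # parameter names seen in the sample URLs

def canonicalize(url):
    """Drop known tracking parameters so parameter variants collapse
    to one representative URL, mirroring the URL-parameters setting."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

url = "http://www.myntra.com/kurtas/alma/alma-women-blue-floral-printed-kurta/85178/buy?nav_id=625"
print(canonicalize(url))
```

Serving a canonical tag that points at this stripped URL, as the sample pages apparently already do, tells Google the variants are duplicates; the parameter setting just saves it the crawling.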
White Hat / Black Hat SEO | Myntra