What is a good crawl budget?
-
Hi Community!
I am in the process of updating sitemaps and am trying to find a standard for what is considered a "strong" crawl budget. All of the documentation I've found covers how to improve it or what to watch out for; however, I'm looking for a target to aim for (e.g., 60% of the sitemap crawled, 100%, etc.).
-
@blueprintmarketing I have a large website with WordPress image folders going back to 2009.
I am currently redesigning my website, and I am trying to determine if there is any benefit to shrinking down or deleting the images and image folders I am no longer using.
I really do not have time to go through all of those image folders and check which ones I am still using and which I am not. I am hoping this does not matter.
Does anyone here know if this matters when it comes to Google's crawl budget?
All of the images are completely optimized and crunched. However, my question is whether it would be worth the time investment to go through every single folder and thousands of images and delete the ones that are not referenced on any of my pages.
Does anyone have a definitive answer regarding crawl budget?
-
Can you give some input on the site https://indiapincodes.net/? I have tried all the recommendations, but only 30% of the URLs have been indexed. I would appreciate your time.
-
@yaelslater
Unless you have a huge site, and I'm talking half a million to a million pages, I would not worry about true Google crawl budget anymore. However, if only 60% of the URLs in your XML sitemap are being indexed, make sure they are actually indexable URLs. You should be able to click into the Coverage section of Search Console; it will give you a reason why a URL submitted via your XML sitemap was not indexed (for example, a noindex tag).
A recent study showed about 20% of URLs across all the websites in the study were not indexed for one reason or another. But make sure the sitemap contains only URLs that return a 200 status: no 301 or 302 redirects, no 404s, and no noindex/nofollow URLs, because Google will obviously not put those into the index. If Search Console does not tell you the issue and you would like to share your domain with me, I'm sure I could figure it out.
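As a rough sketch of the cleanup described above, here is how you might classify sitemap entries once you have already recorded each URL's status code and noindex state (the function name and sample URLs are illustrative, not from any real site):

```python
# Classify sitemap entries: only 200-status, indexable URLs belong in
# an XML sitemap; redirects, 404s, and noindexed pages should be removed.
def audit_sitemap(entries):
    """entries: list of (url, status_code, has_noindex) tuples."""
    keep, drop = [], []
    for url, status, noindex in entries:
        if status == 200 and not noindex:
            keep.append(url)
        else:
            reason = "noindex" if noindex else f"HTTP {status}"
            drop.append((url, reason))
    return keep, drop

# Hypothetical sample data, as you might collect from a crawler export.
entries = [
    ("https://example.com/", 200, False),
    ("https://example.com/old", 301, False),
    ("https://example.com/private", 200, True),
    ("https://example.com/gone", 404, False),
]
keep, drop = audit_sitemap(entries)
```

Anything in `drop` is a candidate for removal from the sitemap before resubmitting it in Search Console.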
I don't know if you're using a CDN; if you could share a little more with me, especially the domain, I can be a lot more helpful.
You could also use a tool like Screaming Frog to generate a new sitemap and make sure the sitemap itself is not the issue. If you're using Yoast, you can turn its sitemap off and back on to regenerate it.
You can create a sitemap of up to 500 pages for free with the Screaming Frog SEO Spider; it is paid after that: https://www.screamingfrog.co.uk/xml-sitemap-generator/
Or, if you want to generate over 1,000 URLs for free online, I would recommend https://www.sureoak.com/seo-tools/google-xml-sitemap-generator
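If you'd rather not rely on a third-party tool, the sitemap format itself is simple enough to generate from a URL list. A minimal sketch using Python's standard library (the URLs are placeholders):

```python
import xml.etree.ElementTree as ET

# Build a minimal XML sitemap from a list of URLs -- the same kind of
# output the generator tools above produce.
def build_sitemap(urls):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for u in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = u
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap(["https://example.com/", "https://example.com/about"])
```

The resulting string can be written to `sitemap.xml` at the site root and submitted in Search Console.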
However, please keep in mind the SureOak site also offers things like a "keyword density checker," which makes me wary, because keyword density is not something Google actually considers unless you repeat the same word for practically every word in the document. Keyword density is one of those metrics that is not real.
But the XML sitemap generator works just fine. I hope this was of help,
Tom
Related Questions
-
Should I disallow crawl of my Job board?
The Moz crawler is telling me we have loads of duplicate content issues. We use a job board plugin on our WordPress site, and we have a lot of duplicate or very similar jobs (usually just a different location), but the plugin doesn't allow us to add rel=canonical tags to the individual jobs. Should I disallow the /jobs/ URL in the robots.txt file? This would solve the duplicate content issue, but then Google won't be able to crawl any of the individual job listings. Has anyone had experience with a job board plugin on WordPress and a similar issue, or can anyone advise on how best to solve our duplicate content? Thanks 🙂
Technical SEO | O2C0
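If you do decide to block the job listings, the robots.txt change asked about above is a one-line rule; a minimal sketch, with /jobs/ taken from the question:

```text
User-agent: *
Disallow: /jobs/
```

Note the trade-off the question already identifies: blocking crawling stops the individual listings from being crawled at all. For a pure duplicate-content problem, a noindex tag or rel=canonical on the duplicate jobs (where the plugin or theme allows it) is generally the gentler option.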
404s affecting crawl rate?
We made a change to our site, and all of a sudden we are creating a large number of 404 pages. Is this affecting the crawl/indexing rate? Currently we've submitted 3.4 million pages, have over 834K indexed, but have over 330K pages not found. Since the large increase in 404s, we've noticed a decrease in pages crawled per day. I found this Q&A on the Webmaster Central blog (http://googlewebmastercentral.blogspot.com/2011/05/do-404s-hurt-my-site.html), but it suggests the 404s should not have an effect. Is this article out of date? What do you think, fellow Moz-ers? Is this a problem?
Technical SEO | JoshKimber0
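One way to quantify the situation described above is to measure what share of crawler hits return 404 in your server access logs. A minimal sketch over made-up combined-log-format lines:

```python
# Count the share of log lines whose response status matches a given
# code. The sample lines below are invented for illustration.
log_lines = [
    '66.249.66.1 - - [01/May/2011] "GET /a HTTP/1.1" 200 512',
    '66.249.66.1 - - [01/May/2011] "GET /b HTTP/1.1" 404 0',
    '66.249.66.1 - - [01/May/2011] "GET /c HTTP/1.1" 404 0',
]

def status_share(lines, status="404"):
    # In combined log format, the status code follows the closing quote
    # of the request line.
    hits = [line for line in lines if f'" {status} ' in line]
    return len(hits) / len(lines)

share = status_share(log_lines)
```

Tracking this share over time would show whether the 404 spike lines up with the drop in pages crawled per day.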
During my last crawl, suddenly no errors or warnings were found except one: a 403 error on my homepage.
There were no changes made, and all my old errors disappeared; I think something went wrong. Is it possible to start another crawl earlier than scheduled?
Technical SEO | KnowHowww0
Nofollow links appear to still be included in SEOMOZ crawl and Google
I have added the nofollow attribute to links throughout my site to hide duplicate content from Google, but these pages are still being shown in my SEOMOZ crawl. I also fetched an example page with Googlebot in Webmaster Tools, and it showed all the nofollow links. An example is http://www.adventurepeaks.com/news. All news tags have nofollow, but each tag page is appearing in my SEOMOZ crawl report as duplicate content. Any suggestions on whether this is a problem, or whether I have applied the tag incorrectly? Many thanks in advance.
Technical SEO | adventure340
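To verify where the attribute actually landed, you can parse a page and list which anchors carry rel="nofollow". A small sketch using the standard library; the HTML string here stands in for a fetched page:

```python
from html.parser import HTMLParser

# Collect the href of every anchor whose rel attribute contains
# "nofollow", so you can check the tag was applied where intended.
class NofollowFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.nofollow = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "nofollow" in a.get("rel", ""):
            self.nofollow.append(a.get("href"))

html = '<a href="/news/tag/alps" rel="nofollow">Alps</a><a href="/news">News</a>'
finder = NofollowFinder()
finder.feed(html)
```

It is worth noting that nofollow only affects how link equity flows through a link; it does not stop a crawler from visiting or reporting the target page, so duplicate-content warnings on tag pages generally need a noindex tag or rel=canonical on those pages instead.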
Crawl reveals hundreds of URLs with multiple URLs in the URL string
The latest crawl of my site revealed hundreds of duplicate page content and duplicate page title errors. When I looked, they came from a large number of URLs with other URLs appended to the end. For example: http://www.test-site.com/page1.html/page14.html or http://www.test-site.com/page4.html/page12.html/page16.html; some of them go on for a hundred characters. I am totally stymied, as are the people at my ISP and the person who talked to me on the phone from SEOMoz. Does anyone know what's going on? Thanks so much for any help you can offer! Jean
Technical SEO | JeanYates0
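A common cause of stacked URLs like the ones above is a relative href (one missing its leading slash) that crawlers resolve against the current page's path. A hypothetical sketch for flagging the pattern in a crawl export, assuming you have the URLs as a plain list:

```python
import re

# Flag URLs where an .html (or .htm) segment is followed by more path,
# the signature of a relative link resolved against the wrong base.
def stacked_urls(urls):
    return [u for u in urls if re.search(r"\.html?/", u)]

urls = [
    "http://www.test-site.com/page1.html",
    "http://www.test-site.com/page1.html/page14.html",
]
bad = stacked_urls(urls)
```

If the flagged URLs cluster under particular pages, inspecting the links on those pages for hrefs like `page14.html` instead of `/page14.html` would confirm the diagnosis.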
When was the last time Google crawled my site?
How do I tell the last time Google crawled my site? I found out it is not the "Cache" date, which I had thought it was.
Technical SEO | digitalops0
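Your own server access logs are the reliable record here: every Googlebot request is logged with a timestamp. A minimal sketch over made-up combined-log lines, assuming the log is in chronological order:

```python
# Find the timestamp of the most recent Googlebot request in an access
# log. The sample lines below are invented for illustration.
log_lines = [
    '66.249.66.1 - - [03/May/2012:10:00:01 +0000] "GET / HTTP/1.1" 200 1 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '10.0.0.5 - - [04/May/2012:09:00:00 +0000] "GET / HTTP/1.1" 200 1 "-" "Mozilla/5.0"',
    '66.249.66.1 - - [04/May/2012:11:30:00 +0000] "GET /a HTTP/1.1" 200 1 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
]

def last_googlebot_hit(lines):
    hits = [line for line in lines if "Googlebot" in line]
    if not hits:
        return None
    # Extract the bracketed timestamp from the last matching line.
    return hits[-1].split("[", 1)[1].split("]", 1)[0]

when = last_googlebot_hit(log_lines)
```

For a production log you would read the file line by line instead of a list, and ideally verify the client really is Googlebot (the user-agent string alone can be spoofed).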
Domain Crawl Question
We have our domain hosted by two providers: web.com for the root and GoDaddy for the subdomain. Why is SEOMOZ not picking up the total page count for the entire domain?
Technical SEO | AppleCapitalGroup0
E-Commerce Site Crawling Problem
Our website displays all of our products if you visit a category or page that doesn't exist but conforms to our site's URL structure. Somehow Google crawled and indexed these pages, and they have tons of duplicate content that hurt us. How do I deal with this problem?
Technical SEO | 13375auc30
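The usual fix for the situation above is to return a real 404 status for category URLs that don't exist, instead of rendering the all-products page, so the phantom URLs drop out of the index. A hedged sketch; `KNOWN_CATEGORIES` stands in for whatever catalog lookup your platform actually provides:

```python
# Return a real 404 for unknown category slugs rather than falling
# back to an all-products page (a "soft 404" that Google will index).
KNOWN_CATEGORIES = {"laptops", "phones"}  # placeholder for a real lookup

def category_response(slug):
    if slug in KNOWN_CATEGORIES:
        return 200, f"category page: {slug}"
    return 404, "not found"

status, body = category_response("no-such-category")
```

Once the server answers with a genuine 404 (or 410), the already-indexed phantom URLs will be dropped as Google recrawls them.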