Crawl depth seems off?
-
I'm reviewing my site crawl data and seeing some strange things, such as:
- The homepage URL has a listed crawl depth of 2.
- Pages featured in the main site navigation (which is present on all pages, including the homepage) are showing a crawl depth of 3.
What am I missing here? Shouldn't my homepage have a crawl depth of 0 or 1? And why would pages linked directly from my homepage have a crawl depth other than 1 (a single click from the homepage to that page)?
Thank you!
-
Hi Samantha,
I set up a new campaign using the https:// version of the site and ran a new crawl, but I'm running into the same issue as before. Perhaps this is a bigger question of how site redirects work? I was under the impression that large-scale redirects (such as non-www to www or http to https across all pages) can affect crawl time/load time. Rereading your comment, it sounds like you're saying those redirects also count as layers of crawl depth. By the same token, I'm assuming any redirects (301s in particular) also add a layer of crawl depth.
So, my larger question is: how can I minimize crawl depth if my site has been redirected from http to https? Will that "extra layer" of crawling always be there as long as the redirect is in place, or is there a way to compress/expedite how the crawl happens?
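To illustrate what I mean by "compressing" the chain, here's a rough sketch comparing a chained redirect (each hop adds a crawl depth layer) with a collapsed one that sends every legacy variant straight to the final URL in a single hop. The example.com URLs and redirect maps are placeholders, not my actual server config:

```python
def hops_to_final(redirects, start_url):
    """Count redirect hops from start_url until no further redirect applies."""
    hops, url = 0, start_url
    while url in redirects:
        url, hops = redirects[url], hops + 1
    return hops

# Chained: non-www -> www -> https, two hops before the crawler sees content.
chained = {
    "http://example.com/": "http://www.example.com/",
    "http://www.example.com/": "https://www.example.com/",
}

# Collapsed: every variant redirects straight to the final https://www URL.
collapsed = {
    "http://example.com/": "https://www.example.com/",
    "http://www.example.com/": "https://www.example.com/",
}

print(hops_to_final(chained, "http://example.com/"))    # 2 extra layers
print(hops_to_final(collapsed, "http://example.com/"))  # 1 extra layer
```

In other words, is collapsing the chain server-side like this the right fix, or does the tool handle it some other way?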
Thanks for your input on this!
-
Hi Samantha,
That makes sense, thank you. I'll set up a new campaign tracking with "https://" instead!
-
Hey there,
Sam from Moz's Help Team here!
So the thing to keep in mind when you set up a campaign at the root domain level is that we'll start the crawl from the http protocol (non-www) version. In this case, http://logic2020.com/. If you filter by crawl depth in your Site Crawl, you'll see that URL with a crawl depth of 0.
It redirects to http://www.logic2020.com/, which has a crawl depth of 1. That URL then redirects again to https://www.logic2020.com/, which is listed with a crawl depth of 2, which is why links we found on that page have a crawl depth of 3.
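A minimal sketch of how each redirect hop adds a level of crawl depth. The redirect map mirrors the chain described above; the counting logic is illustrative, not Moz's actual crawler implementation:

```python
# Each entry maps a URL to the URL it redirects to (the chain from above).
REDIRECTS = {
    "http://logic2020.com/": "http://www.logic2020.com/",
    "http://www.logic2020.com/": "https://www.logic2020.com/",
}

def crawl_depth_chain(start_url):
    """Follow redirects from start_url, returning (url, depth) pairs."""
    chain = [(start_url, 0)]
    url, depth = start_url, 0
    while url in REDIRECTS:
        url, depth = REDIRECTS[url], depth + 1
        chain.append((url, depth))
    return chain

for url, depth in crawl_depth_chain("http://logic2020.com/"):
    print(f"depth {depth}: {url}")
# Links found on the final URL then start at depth 3.
```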
I hope this helps to clarify but let me know if you have any other questions!