Possible Crawling Problem with Screaming Frog and Moz Crawlers
-
So I'm not sure if what I'm seeing is a problem or not.
As of about two weeks ago the Moz crawler has only been able to see www.mysite.com, and none of the links, content, titles, etc. associated with the page. Essentially the report has one line, what should be the homepage, but it's not able to pull any information from the page, though it does show a 200 HTTP status code. The report shows nothing blocked by robots.txt and no errors.
When I use Screaming Frog to crawl the site, about 75% of the time it just reports one line, www.mysite.com, with a 200 status code, but again the crawler is not able to actually see the HTML. The other 25% of the time it works perfectly fine, crawls all pages, and sees all meta info and content.
There are no errors in Google WMT and everything looks OK there. We have seen a traffic drop over the last two weeks, but I don't know if this is the reason for it.
I can't publicly post the page but if someone has an idea of what might be going on I'd be happy to PM them.
Thanks
-
Thank you for the response.
I've run two Moz crawl reports today, one with mysite.com and one with www.mysite.com. Both returned one result, for mysite.com and www.mysite.com respectively, with a 200 status code but no metadata. I know that I successfully crawled www.mysite.com about a month ago with no problems. I have made small changes here and there, but nothing is jumping out at me as wrong.
Screaming Frog is currently crawling my site successfully about 1 in 10 tries. On the successful tries it sees 163 total URLs encountered (it's a small site); the other 9 out of 10 times it shows exactly one URL (the one I entered) and no metadata. There doesn't seem to be any pattern to when it successfully crawls and when it doesn't make it past the first page.
Google WMT is currently showing "No Data Available" for both internal links and links to your site, which is a little concerning. Everything else in WMT looks OK.
-
Two possible simple input items to consider: make sure the URL is entered as the full URL (not just mysite.com), and/or make sure to select any options for including root or sub-domains so it's not just looking at a single page.
-
If you PM me the domain I can take a look myself.
Does the robots.txt have anything funny in there?
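As a quick way to act on that suggestion, the robots.txt rules can be sanity-checked offline with Python's standard library. This is a minimal sketch: the robots.txt content below is a hypothetical example, not taken from the thread, and it tests against rogerbot, Moz's crawler user-agent.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; paste in the site's real file to check it.
robots_txt = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# rogerbot is the user-agent Moz's crawler identifies as.
print(rp.can_fetch("rogerbot", "http://www.mysite.com/"))        # True
print(rp.can_fetch("rogerbot", "http://www.mysite.com/admin/"))  # False
```

If the homepage itself comes back as not fetchable here, that would explain a one-line crawl report; if it is fetchable, the problem is more likely the server (or a firewall/CDN rule) intermittently serving crawler user-agents an empty body.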
Related Questions
-
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. What happened was that somehow Moz found the development/staging site and decided to crawl that. I have no idea how it was able to do this: the robots.txt is set to disallow all and there is password protection on the site. It looks like Moz ignored the robots.txt, but I still don't have any idea how it was able to do a crawl; it should have received a 401 Unauthorized and not gone any further. How do I a) clean this up without going through and manually ignoring each issue, and b) stop this from happening again? Thanks!
Moz Pro | MultiTimeMachine
Why does Moz show 302s that I previously resolved?
So Moz was flagging thousands of 302 errors in the Redirect Issues section. It was due to the URL extensions containing /directory/currency/switch/currency/. I added this to my robots.txt file as I don't want these indexed. I marked the 302 errors as fixed, and after the next crawl they came back; I now get the message: "One or more previously fixed issues continue to persist. We have found one or more issues that were marked as fixed previously in the last crawl." Do I just ignore the errors, or is there something wrong that I may be doing? Any help would be much appreciated. Thank you.
Moz Pro | lbagley
Moz Local
Hey, does anyone know when Moz Local will be available in the UK? Please share your experience; I'd like to know how it's actually helped your business and client base. Thanks so much, Gary
Moz Pro | GaryVictory
Question for Moz developers - Highcharts?
So, I see that Moz is using Highcharts as its charting display engine. What made you decide to use it instead of some of the other solutions out there, like FusionCharts or Google Charts, or even building your own?
Our company is starting over from scratch with reports/charts and is looking at solutions other than what we currently use (FusionCharts/Fusion Widgets), and I wanted to get feedback on why you chose this route over any other. Thanks!
Moz Pro | MrSchadow
Crawl diagnostics up to date after Magento ecommerce site crawl?
Howdy Mozzers, I have a Magento ecommerce website and I was wondering whether the data (errors/warnings) in the crawl diagnostics is up to date. My Magento website has 2,439 errors, mainly 1,325 duplicate page content and 1,111 duplicate page title issues. I already implemented the Yoast meta data plugin that should fix these issues; however, I still see these errors appearing in the crawl diagnostics. When I go to a URL mentioned in the crawl diagnostics, e.g. http://domain.com/babyroom/productname.html?dir=desc&targetaudience=64&order=name, and search the source code for 'canonical', I do see: http://domain.com/babyroom/productname.html" />. I even checked the Google SERP for the URL http://domain.com/babyroom/productname.html?dir=desc&targetaudience=64&order=name and couldn't find it indexed in Google, so it basically means the Yoast meta plugin actually worked. What I was wondering is why I still see the error counted in the crawl diagnostics? My goal is to remove all the errors and bring the crawl diagnostics down to zero. I am also still struggling with the "overly-dynamic URL" (1,025) and "too many on-page links" (9,000+) warnings. I want to measure whether I can bring the warnings down after implementing an AJAX-based layered navigation, but if it's not updating in the crawl diagnostics I have no idea how to measure the success of eliminating the warnings. Thanks for reading and hopefully you all can give me some feedback.
Moz Pro | videomarketingboys
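The manual check described in the question (viewing source and searching for 'canonical') can be scripted so every flagged URL is verified the same way. A minimal sketch, assuming the page HTML has already been fetched; the snippet below is a hypothetical fragment modelled on the URL in the question:

```python
import re

# Hypothetical fragment of a parameterised product page's source.
html = '''<head>
<link rel="canonical" href="http://domain.com/babyroom/productname.html" />
</head>'''

# Extract the canonical target, if the tag is present.
match = re.search(r'<link\s+rel="canonical"\s+href="([^"]+)"', html)
canonical = match.group(1) if match else None
print(canonical)
# http://domain.com/babyroom/productname.html
```

If every parameterised variant reports the same clean canonical, the on-page fix is in place and the remaining question is only how quickly the crawl tool re-crawls and clears the old counts.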
Crawlers crawl weird long urls
I started a crawl for the first time and I get many errors, but the weird fact is that the crawler tracks duplicate, long, non-existent URLs. For example (to be clear): there is a page www.website.com/dogs/dog.html, but then it continues crawling:
www.website.com/dogs/dog.html
www.website.com/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website.
Moz Pro | r.nijkamp
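For what it's worth, this repeating /dogs/dogs/... pattern is the classic signature of a relative link missing its leading slash: a page at /dogs/dog.html that links to href="dogs/dog.html" resolves one level deeper on every hop, so any crawler will chase it forever. A sketch of the resolution; the href value here is an assumption about the likely cause, not something confirmed from the site:

```python
from urllib.parse import urljoin

page = "http://www.website.com/dogs/dog.html"
relative_href = "dogs/dog.html"  # note: no leading "/"

# Each resolution step nests one directory deeper, exactly as in the report.
for _ in range(3):
    page = urljoin(page, relative_href)
    print(page)
# http://www.website.com/dogs/dogs/dog.html
# http://www.website.com/dogs/dogs/dogs/dog.html
# http://www.website.com/dogs/dogs/dogs/dogs/dog.html
```

The fix would be to make the link root-relative (href="/dogs/dog.html") or absolute in the template that emits it.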
Links & page authority crawl
I see the links and page authority have not been updated in over a month... does anyone know how often they get updated?
Moz Pro | nazmiyal
Crawl Diagnostic Errors
Hi there, seeing a large number of errors in the SEOmoz Pro crawl results. The 404 errors are for pages that look like this: http://www.example.com/2010/07/blogpost/http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F. I know that %2F is the URL-encoded form of a slash, but I'm not sure why these addresses are being crawled. The site is a WordPress site. Anyone seen anything like this?
Moz Pro | rosstaylor
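Decoding the %2F sequences makes the pattern clearer: the tail of each 404 URL is a percent-encoded copy of the post's own absolute URL appended to its path, which is often (an assumption here, not confirmed) the result of a theme or plugin emitting a full URL where a relative href was expected. A quick decode using the example URL from the question:

```python
from urllib.parse import unquote

# The tail of the 404 URL from the question.
encoded_tail = "http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F"
print(unquote(encoded_tail))
# http://www.example.com/2010/07/blogpost/
```

Searching the rendered page source for the encoded string should point to the template or plugin producing the bad link.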