Exclude status codes in Screaming Frog
-
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is that the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration.
The site has approximately 190,000 pages according to the results of a Google site: command.
- The site architecture is almost completely flat. Limiting the crawl by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root.
- There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters.
- There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs.
Does anyone know how to exclude files using status codes? I know that would help.
If it helps, the site is kodylighting.com.
Thanks in advance for any guidance you can provide.
-
Thanks for your help. It was simply that the exclusion had to be configured before the crawl began and could not be changed mid-crawl. Hopefully this changes in a future version, because during a crawl you sometimes find things you want to exclude that you didn't know existed beforehand.
-
Are you sure it's just on Mac? Have you tried on a PC? Do you have any other rules in Include, or perhaps a conflicting rule in Exclude? Try running a single exclude rule, and also test it on another small site.
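For reference, the Exclude field (under Configuration > Exclude) takes one regular expression per line, matched against the full URL. As a quick single-rule test, something like the following (the domain and paths here are made-up examples, not from the site in question) should drop an entire folder from the crawl:

```text
http://www.example.com/duplicate-folder/.*
.*\?sessionid=.*
```

If a rule like that works on a small test site but not on the target site, the problem is more likely a conflicting Include/Exclude rule than the Mac build itself.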
Also from support if failing on all fronts:
- On the Mac version, please make sure you have the most up-to-date version of the OS, which will update Java.
- Please uninstall, then reinstall the spider, ensuring you are using the latest version, and try again.
To be sure - http://www.youtube.com/watch?v=eOQ1DC0CBNs
-
Does the exclude function work on Mac? I have tried every possible way to exclude folders and have not been successful while running an analysis.
-
That's exactly the problem: the redirects are dispersed randomly throughout the site. Although the job's still running, it now appears there's almost a 1-to-1 correlation between pages and redirects on the site.
I also heard from Dan Sharp via Twitter. He said "You can't, as we'd have to crawl a URL to see the status code. You can right click and remove after though!"
Thanks again Michael. Your thoroughness and follow through is appreciated.
-
Took another look, and also checked the documentation and online resources; I don't see any way to exclude URLs from a crawl based on response codes. As I see it, you would only want to exclude by name or directory anyway, as response codes are likely to be scattered randomly throughout a site, and excluding on them would impede a thorough crawl.
-
Thank you Michael.
You're right. I was on a 64-bit machine running a 32-bit version of Java. I updated it, and the scan has now been running for more than 24 hours without hanging. So thank you.
If anyone else knows of a way to exclude files using status codes, I'd still like to learn about it. So far the scan is showing 20,000 redirected files, which I'd just as soon not inventory.
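In the meantime, one workaround is to filter the redirects out after the fact: export the crawl (for example, the Internal tab as CSV) and keep only the 200s. A rough sketch in Python, using made-up sample data whose column names may not match a real export exactly:

```python
import csv
import io

# Hypothetical sample mimicking a Screaming Frog "Internal" CSV export;
# real exports have many more columns (the names below are assumptions).
sample_export = """Address,Content,Status Code,Status
http://www.example.com/,text/html,200,OK
http://www.example.com/old-page,,301,Moved Permanently
http://www.example.com/products,text/html,200,OK
http://www.example.com/legacy,,301,Moved Permanently
"""

def keep_ok_rows(csv_text):
    """Return only the rows whose Status Code is 200, dropping redirects."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["Status Code"] == "200"]

rows = keep_ok_rows(sample_export)
for row in rows:
    print(row["Address"])
```

With a real export you would read the file from disk instead of a string, but the filtering step is the same.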
-
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
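For anyone else hitting this: per the user guide, on Windows the allocation is set in the ScreamingFrogSEOSpider.l4j.ini file alongside the executable (the exact file, location, and default value may differ by version and OS), for example:

```ini
; ScreamingFrogSEOSpider.l4j.ini - raise the Java heap available to the spider
; A 64-bit JVM is required to allocate much beyond ~2GB
-Xmx4096M
```

Restart the spider after saving for the new allocation to take effect.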