Disallowed "Search" results with robots.txt and Sessions dropped
-
Hi
I've started working on our website and I've found millions of "Search" URLs which I don't think should be getting crawled and indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON).
I tried to disallow them in the robots.txt file, but our sessions dropped about 10% and our average position in Search Console dropped 4-5 positions over a week. It looks like over 50 million URLs have been blocked; they all look like the example above and aren't getting any traffic to the site.
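For reference, the disallow rule was roughly along these lines (simplified, so the exact patterns may have differed):

  # Block all internal search result pages, including the faceted brand-filter variants
  User-agent: *
  Disallow: /search/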
I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (sitemaps weren't added correctly, products were blocked from spiders on category pages, canonical pages were blocked from crawlers in robots.txt), and I'm thinking Google was doing us a favour and using these search pages to crawl the product pages, as it was the best (or only) way of reaching them.
Should I be blocking these "Search" URLs, or is there a better way of going about it? I can't see any value in these pages except Google using them to crawl the site.
-
If you have a site with at least 30k URLs, looking at only 300 keywords won't reflect the general status of the whole site. If you're trying to track down a 10% loss in traffic, I'd start by chasing the pages that lost the most traffic, then analyzing whether they lost rankings or whether there are other issues.
Another way to find where the traffic loss is coming from is in Search Console, looking at keywords that aren't in the top 300. There might be a lot to analyze.
It's not a big deal to have a lot of pages blocked in robots.txt when what's blocked should be blocked. Keep in mind that GSC will flag those pages with warnings, since they were previously indexed and are now blocked; that's just how the flags are set up.
Hope it helps.
Best luck.
Gaston
-
If you have a general site which happens to have a search facility, blocking search results is quite usual. If your site is all 'about' searching (e.g. Compare The Market and the like), then the value-add of your site is how it helps people find things. In THAT type of situation, you absolutely do NOT want to block all your search URLs.
Also, don't rule out seasonality. Traffic naturally goes up and down, especially at this time of year when everyone is on holiday. How many people spend their holidays buying stuff or doing business online? They're all at the beach, mate!
-
Hi Gaston
"Search/" pages were getting a small amount of traffic, and a tiny bit of revenue, but I definitely don't think they need to be indexed or are important to users. We're down in mainly "Sale" & "Brand" pages, and I've heard the Sale in general across the store isn't going well, but don't think I can go back management with that excuse
I think my sitemaps are sorted now: I've broken them down into 6 x 5,000-URL files, and all the canonical tags seem to be fine and pointing to these URLs. I am a bit concerned that URLs "blocked by robots.txt" shot up from 12M to 73M, although all the URLs Search Console is showing me look like they need to be blocked!
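For what it's worth, the canonical tags on those category/filter pages look roughly like this (the URL here is just illustrative):

  <!-- Filtered and paginated variants point back to the clean category URL -->
  <link rel="canonical" href="https://www.example.com/brands/example-brand/" />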
We're also tracking nearly 300 keywords, and they've actually had good improvements over the same period. Finding it hard to explain!
-
Hi Frankie,
My guess is that the traffic you were losing was traffic driven by the /search pages themselves.
The questions you should be asking are:
- Are those /search pages getting traffic?
- Are they important to users?
- After being disallowed, which pages were losing traffic?
As a general rule, Google doesn't want to crawl or index internal search pages unless they have some value to users.
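If the aim is to keep internal search results out of the index (rather than just uncrawled), a common alternative to a robots.txt disallow is a meta robots tag on the search results template; the pages have to stay crawlable for Google to see it. A minimal sketch:

  <!-- On the internal search results template: allow crawling, but don't index the page -->
  <meta name="robots" content="noindex, follow">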
On another matter, the crawlability of your product pages can easily be solved with sitemap files. If you're worried about size, remember that a single sitemap can contain up to 50k URLs, and you can create several sitemaps and list them in a sitemap index.
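A minimal sketch of what that looks like, assuming the child sitemaps sit at the site root (file names are illustrative):

  <?xml version="1.0" encoding="UTF-8"?>
  <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- Each entry points at one child sitemap of up to 50,000 URLs -->
    <sitemap>
      <loc>https://www.example.com/sitemap-products-1.xml</loc>
    </sitemap>
    <sitemap>
      <loc>https://www.example.com/sitemap-products-2.xml</loc>
    </sitemap>
  </sitemapindex>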
More info about that here: Split up your large sitemaps - Google Search Console Help
Hope it helps.
Best luck,
Gaston