Skip indexing the search pages
-
Hi,
I want all my search pages excluded from Google's index.
So I have this line in my robots.txt: Disallow: /search/
Now any posts whose URLs start with the word "search" are also being blocked, and in Google I see this message:
A description for this result is not available because of this site's robots.txt – learn more.
How can I handle this, and how can I find all the URLs that Google is blocking from showing?
Thanks
-
Sure - you have URLs that are being blocked by robots.txt - you have this line in it:
Disallow: /questions/search
That line prevents URLs within the /questions/ folder that start with the word "search" from being crawled. What are you trying to accomplish with this block? If you mean the search folder within /questions/, it should be Disallow: /questions/search/ (with a trailing slash).
The other warning is telling you these pages take a long time to load - check your server or those individual pages and see why they are so slow.
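You can check this prefix-matching behavior yourself. As a sketch (using example paths, not your site's actual URLs), Python's standard-library robots.txt parser follows the same rule, so it shows exactly which URLs a given Disallow line blocks:

```python
from urllib import robotparser

# A minimal robots.txt with only the directive in question
robots_txt = [
    "User-agent: *",
    "Disallow: /search/",
]

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

# Blocked: the path starts with the /search/ prefix
print(rp.can_fetch("*", "https://example.com/search/widgets"))  # False

# Not blocked: /questions/search-the-internet does not start with /search/
print(rp.can_fetch("*", "https://example.com/questions/search-the-internet"))  # True
```

If the second call returns True but Google still refuses to show the page, the live robots.txt probably contains a broader rule than the one you think is in effect.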
-
As Saijo said above, the meta robots noindex tag is the way to go. When you block a folder via robots.txt, you prevent Google from visiting and crawling that folder and any content within it. But if Google has already crawled the content, a robots.txt block alone won't remove it from their index. The old version of the page stays saved in their index; Google just can't show an updated snippet of it, because the robots.txt block stops them from recrawling.
To remove the pages from the index completely, you can do one of two things:
- in Webmaster Tools, go to the URL removal section and remove that folder from the index - this only works while the folder is blocked via robots.txt
- add a meta robots noindex tag to the pages/page template and remove the robots.txt block - you need to remove the block so the search engines can recrawl the pages, see the meta robots directive, and follow the noindex instruction to drop the pages
In general, I would recommend the meta robots noindex directive over robots.txt, because it works across all search engines, you won't have to go into each engine's webmaster tools separately, and you won't accidentally block other URLs.
From your example above, if you only blocked the folder /search/, a page that merely includes the word "search" in its URL but doesn't sit inside that folder shouldn't be blocked by that line. I would check the robots.txt section in Webmaster Tools, because based on your robots.txt file it doesn't look to me like every URL containing "search" should be blocked.
Good luck,
Mark
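To illustrate the trailing-slash distinction discussed above, here is a sketch of how each directive behaves (the paths are examples, not this site's actual URLs):

```
User-agent: *
# Blocks only URLs inside the folder, e.g. /questions/search/internet:
Disallow: /questions/search/

# Without the trailing slash, this would block ANY path starting with
# that prefix, including /questions/search-the-internet:
# Disallow: /questions/search
```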
-
I guess I was not clear with my question.
I have this in my robots.txt: Disallow: /search/
My intention in placing /search/ there was to stop Google from indexing any of my search pages.
What's happened now is that posts like the one below are also being blocked:
www.somesite.com/questions/search-the-internet
-
To block the search pages from the index, you can try adding a meta robots noindex tag in the head section of those pages.
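For reference, the tag goes in each search page's head section and looks like this (a generic sketch, not your site's actual template):

```html
<head>
  <!-- Tells compliant crawlers not to index this page,
       while still allowing them to follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```

Remember that crawlers can only see this tag if the page is NOT blocked in robots.txt, so remove the Disallow line once the tag is in place.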