Robots.txt Question
-
In the past, I had blocked a section of my site (i.e. domain.com/store/) by placing the following in my robots.txt file:

Disallow: /store/

Now I would like the store to be indexed and included in the search results. I have removed the "Disallow: /store/" line from the robots.txt file, but approximately one week later a Google search for the URL still produces the following meta description in the search results: "A description for this result is not available because of this site's robots.txt"
Is there anything else I need to do to speed up the process of getting this section of the site indexed?
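As a quick sanity check before waiting on the crawlers, the rules can be tested locally with Python's standard-library robots.txt parser. This is a minimal sketch - the domain and the exact rule sets are hypothetical stand-ins for the situation described above:

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, agent: str, url: str) -> bool:
    """Parse a robots.txt body and ask whether `agent` may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# Hypothetical before/after rule sets for domain.com
old_rules = "User-agent: *\nDisallow: /store/\n"
new_rules = "User-agent: *\nDisallow:\n"  # an empty Disallow permits everything

print(is_allowed(old_rules, "Googlebot", "https://domain.com/store/"))  # False
print(is_allowed(new_rules, "Googlebot", "https://domain.com/store/"))  # True
```

If the check returns True against the live file but search results still show the blocked-description message, the issue is stale index data rather than the robots.txt itself.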
-
Thanks for the "Good Answer" flag, David! I reformatted & added a little extra info to make the process a little clearer.
Paul
-
To help speed up the process of getting re-included, use the "Fetch as Googlebot" and "Fetch as Bingbot" tools in Webmaster Tools on a page in the previously blocked section - this significantly helps jumpstart indexing. Once you see a successful Fetch status, click "Submit to Index" and choose to submit the URL and all linked pages.
In addition:
- make certain your new pages are listed in your sitemap.xml file, then resubmit the sitemap to the search engines using Google and Bing Webmaster Tools
- make sure your own internal pages (especially a few strong ones) link to the newly unblocked content
- see if you can get a couple of good new incoming links pointing to pages in the new section - even if they're nofollow, they can help guide the crawlers to the newly available pages
Essentially, you're trying to give the search engines as many hints as possible that there are new pages to crawl and, hopefully, index.
Paul
[edited for additional clarity]
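The sitemap step above can be sanity-checked programmatically before resubmitting. Here's a minimal sketch (the sitemap contents and the list of unblocked URLs are hypothetical examples) that extracts every <loc> entry and reports any newly unblocked URL that is still missing:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml: str) -> set[str]:
    """Extract all <loc> entries from a sitemap.xml document."""
    root = ET.fromstring(sitemap_xml)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

# Hypothetical sitemap for illustration.
sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://domain.com/</loc></url>
  <url><loc>https://domain.com/store/</loc></url>
</urlset>"""

unblocked = {"https://domain.com/store/", "https://domain.com/store/widgets/"}
missing = unblocked - sitemap_urls(sitemap)
print(missing)  # any URLs that still need to be added before resubmitting
```

Anything reported as missing should be added to the sitemap before pinging the search engines, so the crawl hints all point the same way.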
-
Thanks. I figured this was the case, but was not sure if I was missing any "best practices" about getting the previously blocked URL included faster.
-
David, if I am correct, this is an old message sitting in the index. Give it another week or so and I am sure this message will vanish. I had the same thing with one of my sites that I took live but forgot to allow in the robots.txt file.
shivun