Get a list of robots.txt-blocked URLs and tell Google to crawl and index them
-
Some of my key pages got blocked by my robots.txt file. I have since made the required changes to the robots.txt file, but how can I get the list of blocked URLs?
My Webmaster Tools page (Health > Blocked URLs) shows only a count, not the blocked URLs themselves. My first question is: where can I fetch these blocked URLs, and how can I get them back into search results?
One other interesting point: the blocked pages are still showing up in searches. The title appears fine, but the description shows "blocked by robots.txt".
I need urgent recommendations, as I do not want to see any further drop in my traffic.
-
"changing the lastmod of those pages to today".
How can I make these changes?
Right now the news is that Resubmitted the Sitemap and no warnings this time.
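For what it's worth, the lastmod/priority change is made by editing the sitemap file itself. A minimal sketch of one entry, with a placeholder URL and an example date standing in for "today":

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One entry per previously blocked page; the URL and date are placeholders. -->
  <url>
    <loc>https://www.example.com/previously-blocked-page/</loc>
    <lastmod>2013-01-15</lastmod>
    <priority>1.0</priority>
  </url>
</urlset>
```

After editing, resubmit the file in Webmaster Tools so Google re-reads it.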
-
I imagine that since you've got a robots.txt error, you probably ended up closing a whole directory to bots that you wanted to be indexed. You can easily spot the directory and resubmit a sitemap to Google, changing the lastmod of those pages to today and the priority to 1, but only for those pages.
If you still receive warnings, it may be due to errors in your sitemap; you're probably including some directory you don't want. You can test it in GWT by putting the URL you want to keep in the index into the box at the bottom and seeing whether any URLs are being blocked by your robots.txt.
If you want, you can post your robots.txt and the URIs you want indexed here without the domain, so they won't be public. Hope this helps.
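The same kind of check can also be scripted outside GWT with Python's standard-library robots.txt parser. The rules and URLs below are made-up placeholders, not anyone's real site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; in practice, point set_url() at the
# live /robots.txt and call read() instead of parsing an inline string.
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Report which URLs Googlebot may fetch under these rules.
for url in ("https://example.com/blog/post-1",
            "https://example.com/private/draft"):
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "blocked"
    print(url, "->", verdict)
```

Running this against your real rules makes it easy to confirm the fix before waiting on a recrawl.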
-
OK, resubmitted it. But even with the updated file it gives a lot of errors: 20,016 warnings. I think it takes some time.
I have not added a noindex attribute in my header region; it was all messy stuff with the robots.txt file. Does that mean that, with the site still showing up in the SERPs, the rank will probably stay the same, or has it been deranked?
-
Go into GWMT and resubmit your sitemap.xml files (with the URLs you want indexed) for recrawling, and Google will digest the sitemaps again. Instead of waiting for Googlebot to come around on its own, you are requesting that it come around. Also reference those new sitemap files in your robots.txt file.
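A robots.txt that unblocks the pages and points crawlers at the resubmitted sitemaps might look like the following sketch; the directory and filenames are placeholders:

```text
User-agent: *
# The over-broad Disallow has been narrowed to the one directory
# that should genuinely stay out of the index.
Disallow: /admin/

# Sitemap directives take absolute URLs; list each sitemap file.
Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/sitemap-blog.xml
```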
-
In Google Webmaster Tools, go to Health -> Fetch As Google. Then add a previously blocked URL and click Fetch. Once you've done that, refresh the page and click "Submit to index". That should get Google indexing those pages again.
Getting external links to your pages also helps get them crawled and indexed, so it may be worth submitting your pages to social bookmarking sites, or getting other types of backlinks to your previously blocked pages if possible.
-
Since you fixed your robots.txt file, you should be good to go. It will probably take a few days for Google to recrawl your site and update the index with the URLs it is now allowed to crawl.
Blocked URLs can still show up in SERPs if you haven't defined the noindex attribute in your <head> section.
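For reference, the noindex directive is a meta tag in the page's <head>; a minimal sketch:

```html
<head>
  <title>Page kept out of the index</title>
  <!-- Tells crawlers that fetch the page not to list it in results. -->
  <meta name="robots" content="noindex">
</head>
```

Note that Google can only honor this tag if robots.txt allows the page to be crawled; on a page that is blocked from crawling, the tag is never seen, which is why blocked URLs can still appear in the SERPs.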