Can I use a "no index, follow" command in a robot.txt file for a certain parameter on a domain?
-
I have a site that produces thousands of pages via file uploads. These pages are then linked to by users for others to download what they have uploaded.
Naturally, the client has blocked the URL parameter that these pages are served under, in an attempt to keep them from being indexed. What they did not consider was that these pages are attracting hundreds of thousands of links that are not passing any authority to the main domain, because they are blocked in robots.txt.
Can I allow Google to follow, but NOT index, these pages via the robots.txt file, or would this have to be done on a page-by-page basis?
-
Since you have those pages blocked via robots.txt, the bots will never even crawl them, which means a noindex,follow directive would not help.
Also, if you run a report on the domain in Open Site Explorer and dig in, you should be able to find tons of those links already showing up. So if my site links to a page on that site, that page may not be cached or indexed because of the robots.txt exclusion, but as long as the link from my site is followed, your domain is still getting credit for the link.
Does that make sense?
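For anyone finding this later, here is a minimal sketch of the two mechanisms being discussed; the parameter name "download" is made up for illustration. robots.txt only controls crawling:

    # robots.txt - blocks crawling entirely, so any links on (or meta tags in)
    # the blocked pages are never seen ("download" is a hypothetical parameter)
    User-agent: *
    Disallow: /*?download=

To get the follow-but-don't-index behaviour, the usual approach is to remove that Disallow so the pages can be crawled, and instead add a meta robots tag to each upload page (or send the equivalent X-Robots-Tag HTTP header from the server):

    <meta name="robots" content="noindex, follow">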
-
Answered my own question.
Related Questions
-
Using same copy on different domain
I have a client that currently has a .com domain (not using hreflang). They have a new partner in the UK and they want to replicate the website on a .co.uk domain. It will be a different brand name. Will this cause any SEO issues?
Intermediate & Advanced SEO | bedynamic0 -
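If the two sites do end up sharing copy, hreflang annotations are the usual way to tell Google the pages are regional alternates rather than duplicates. Purely as a hedged sketch (placeholder URLs, and it only applies where the pages really are equivalents of each other), each page would reference both versions:

    <!-- in the <head> of the .com page AND the matching .co.uk page -->
    <link rel="alternate" hreflang="en-us" href="https://www.example.com/page/" />
    <link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/page/" />
-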
SSL and robots.txt question - confused by Google guidelines
I noticed "Don’t block your HTTPS site from crawling using robots.txt" here: http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html Does this mean you can't use robots.txt anywhere on the site - even parts of a site you want to noindex, for example?
Intermediate & Advanced SEO | McTaggart0 -
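My reading of that line is that it warns against blocking the entire HTTPS site (a blanket Disallow: /), not against using robots.txt at all. A hedged example with invented paths:

    # robots.txt on the HTTPS site (hypothetical paths)
    User-agent: *
    Disallow: /checkout/      # keeping bots out of one section is still fine
    Disallow: /tmp/

Bear in mind, though, that Disallow only stops crawling. If the goal is to keep a section out of the index, those pages have to remain crawlable and carry a meta robots noindex tag (or an X-Robots-Tag header) instead.
-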
Pages are being dropped from index after a few days - AngularJS site serving "_escaped_fragment_"
My URL is: https://plentific.com/
Hi guys! About us: we are running an AngularJS SPA for property search. Being an SPA, i.e. an entirely JavaScript application, it has proven to be an SEO nightmare, as you can imagine. We are currently implementing the AJAX crawling approach and serving an "_escaped_fragment_" version of each page using PhantomJS. Unfortunately, pre-rendering the pages takes some time and, even worse, on separate occasions the pre-rendering fails and the page comes out empty.
The problem: when I manually submit pages to Google using the Fetch as Google tool, they get indexed and actually rank quite well for a few days, and after that they just get dropped from the index. Not lower in the rankings, but totally dropped. Even the Google cache returns a 404.
The questions:
1.) Could this be because we serve an "_escaped_fragment_" version to the bots (bear in mind it is identical to the user-visible one)?
2.) Could it be that using an API to get our results leads to the pages being treated as "duplicate content"? And shouldn't that just lower the SERP position instead of causing a drop?
3.) Could this be a technical problem with how we serve the content, or does Google simply not trust sites served this way?
Thank you very much! Pavel Velinov, SEO at Plentific.com
Intermediate & Advanced SEO | emre.kazan -
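For readers who have not used the (long since deprecated) AJAX crawling scheme the question describes, the moving parts look roughly like this; the listing URL is illustrative:

    <!-- page <head> opts a non-hashbang URL into the scheme -->
    <meta name="fragment" content="!">

    # Googlebot then fetches the pre-rendered HTML snapshot from e.g.
    #   https://plentific.com/some-listing?_escaped_fragment_=
    # and indexes whatever HTML that request returns.

That last point suggests one thing worth checking: if a failed PhantomJS pre-render returns an empty page with a 200 status, Google is effectively told the page has no content, which would fit the pattern of pages ranking and then being dropped. Returning a 5xx on pre-render failure, so Googlebot retries later instead of indexing a blank snapshot, is one hedge against that.
-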
Weird indexing problem - Can it be solved?
Hi. I've been building and optimising sites for 15 years and this is one of the hardest problems I have ever come across, so any help would be very much appreciated. Here we go: for some mysterious reason, the URL http://weekend.visitsweden.com/no/ has been indexed as http://weekend.visitsweden.com, even though we have tried everything we can to correct it. The problem is that since the latter points to the first URL with a 301, it refuses to get any PageRank, and it does not show up in Google at all. A recap of what we have tried so far:
Added the site to Webmaster Tools
Added a proper sitemap.xml
Added a 301 redirect to the correct URL
An easy way to see the problem is to search for the main content of the site: it returns the wrong URL, and the correct URL does not even get listed. Again, any help is very much appreciated. Kind regards, Fredrik
Intermediate & Advanced SEO | Resultify0 -
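Hard to diagnose without seeing the server, but two things are worth double-checking, sketched here in Apache syntax purely as an assumption about the stack. The root redirect should match only the bare root (a blanket "Redirect 301 /" would catch every path), and the preferred URL can be reinforced with a self-referencing canonical:

    # .htaccess (hypothetical; adjust to the actual server)
    RewriteEngine On
    RewriteRule ^$ http://weekend.visitsweden.com/no/ [R=301,L]

    <!-- in the <head> of http://weekend.visitsweden.com/no/ -->
    <link rel="canonical" href="http://weekend.visitsweden.com/no/" />
-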
Blocking out specific URLs with robots.txt
I've been trying to block out a few URLs using robots.txt, but I can't seem to target the specific one I'm trying to block. Here is an example: I'm trying to block something.com/cats but not block something.com/cats-and-dogs. It seems that if I set up my robots.txt like so:
Disallow: /cats
it blocks both URLs. When I crawl the site with Screaming Frog, that Disallow causes both URLs to be blocked. How can I set up my robots.txt to specifically block /cats? I thought the way I was doing it would work, but that doesn't seem to solve it. Any help is much appreciated, thanks in advance.
Intermediate & Advanced SEO | Whebb0 -
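A hedged sketch of the usual fix: Googlebot (and Bingbot) honour a $ end-of-URL marker, even though it is not part of the original robots.txt standard, so it is worth verifying the result in the robots.txt tester before relying on it:

    User-agent: *
    # matches /cats exactly; /cats-and-dogs no longer matches
    Disallow: /cats$
    # optionally also block anything underneath /cats/
    Disallow: /cats/

Another option is to keep the broad Disallow: /cats and add Allow: /cats-and-dogs; Google resolves Allow/Disallow conflicts in favour of the more specific (longer) rule, so the cats-and-dogs pages stay crawlable.
-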
Same Branding, Same Followers, New Domain After Penalty... Your Opinion Please
I know I've asked a similar question in the past, but I'm still trying to figure out what to do with my website. I've got a website at thewebhostinghero.com that's been penalized by both Panda and Penguin. I cleaned up the link profile and submitted a reconsideration request, but it was denied. I finally found a handful of additional bad links and submitted a new disavow + reconsideration request a few days ago, and I am still waiting. That said, after submitting the initial disavow request, the traffic has completely gone, and while I expected a drop in traffic, I also expected my penalty to be lifted, which was not the case. Even if the penalty is lifted this time, I think making the website profitable again could be harder than creating a new website. So here's my question: the website's domain is thewebhostinghero.com, but I also happen to own webhostinghero.com, which I bought later for $5000 (yes, you read that right). The domain webhostinghero.com is completely clean, as it only redirects to thewebhostinghero.com. I would like to use webhostinghero.com as a completely new website and not redirect any traffic from thewebhostinghero.com, so as not to pass any bad link juice.
Pros:
Keeping the same branding image (which cost me $$$)
Keeping the 17,000+ Facebook followers
Keeping the same Google+ and Twitter accounts
Keeping and monetizing a domain that cost me $5000
webhostinghero.com is a better domain than thewebhostinghero.com
Cons:
Will create confusion between the 2 websites
Any danger of being flagged as duplicate content or something?
Do you see any other potential issues with this? What's your opinion/advice? P.S. Sorry for my English...
Intermediate & Advanced SEO | sbrault740 -
How important is a good "follow" / "no-follow" link ratio for SEO?
Is it very important to make sure most of the links pointing at your site are "follow" links? Is it problematic to post legitimate comments on blogs that include a link back to relevant content or posts on your site?
Intermediate & Advanced SEO | BlueLinkERP0 -
New server update + wrong robots.txt = lost SERP rankings
Over the weekend, we moved our store to a new server. Before the switch, we had a robots.txt file on the new server that disallowed crawling of its contents (we didn't want duplicate pages from both the old and new servers being indexed). When we finally made the switch, we somehow forgot to remove that robots.txt file, so the new pages weren't indexed. We quickly put our good robots.txt in place and submitted a request for a re-crawl of the site. The problem is that many of our search rankings have changed. We were ranking #2 for some keywords, and now we're not showing up at all. Is there anything we can do? Google Webmaster Tools says that the next crawl could take weeks! Any suggestions will be much appreciated.
Intermediate & Advanced SEO | 9Studios0
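For anyone who hits the same thing: the culprit is usually a staging hold-out file like the first snippet below being left in place after go-live (a generic example, not the poster's actual file). Once the corrected file is up, there is no way to force things through faster than Googlebot re-fetching robots.txt and recrawling, although resubmitting the sitemap and using Fetch as Google can nudge it along; rankings typically come back as the pages are re-indexed, though it can take a while.

    # staging robots.txt accidentally left on the live server: blocks everything
    User-agent: *
    Disallow: /

    # corrected robots.txt: an empty Disallow value means "no restrictions"
    User-agent: *
    Disallow: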