Robots.txt help
-
Hi,
We have a blog that is killing our SEO.
We need to disallow the following:
Disallow: /Blog/?tag*
Disallow: /Blog/?page*
Disallow: /Blog/category/*
Disallow: /Blog/author/*
Disallow: /Blog/archive/*
Disallow: /Blog/Account/.
Disallow: /Blog/search*
Disallow: /Blog/search.aspx
Disallow: /Blog/error404.aspx
Disallow: /Blog/archive*
Disallow: /Blog/archive.aspx
Disallow: /Blog/sitemap.axd
Disallow: /Blog/post.aspx
But allow everything below /Blog/Post.
The disallow list seems to keep growing as we find issues. So rather than adding every problem area to our robots.txt, is there a way to simply say Allow /Blog/Post and ignore the rest? How do we do that in robots.txt?
Thanks
-
These: http://screencast.com/t/p120RbUhCT
They appear on every page I looked at, and they take up the entire area above the fold, pushing the content below the fold.
-Dan
-
Thanks Dan, but what grey areas? What URL are you looking at?
-
Ahh. I see. You just need to "noindex" the pages you don't want in the index. As far as how to do that with blogengine, I am not sure, as I have never used it before.
But I think a bigger issue is the giant box areas at the top of every page. They are pushing your content way down. That's definitely hurting UX and making the site a little confusing. I'd suggest improving that as well.
-Dan
-
Hi Dan, Yes sorry that's the one!
-
Hi There... that address does not seem to work for me. Should it be .net? http://www.dotnetblogengine.net/
-Dan
-
Hi
The blog is www.dotnetblogengine.com
The content is only on the blog once; it's just that it can be accessed lots of different ways.
-
Andrew
I doubt that one thing made your rankings drop so much. Also, what type of CMS are you on? Duplicate content like that should be controlled through indexation for the most part, but I don't recognize that URL structure as belonging to any particular CMS.
Are just the title tags duplicate or the entire page content? Essentially, I would either change the content of the pages so they are not duplicate, or if that doesn't make sense I would just "noindex" them.
-Dan
-
Hi Dan,
I am getting duplicate content errors in WMT like:
This is because tag=ABC and page=1 are both different ways to get to www.mysite.com/Blog/Post/My-Blog-Post.aspx
To fix this I have removed the URLs www.mysite.com/Blog/?tag=ABC and www.mysite.com/Blog/?Page=1 from GWMT, and by setting robots.txt up like:
User-agent: *
Disallow: /Blog/
Allow: /Blog/post
Allow: /Blog/Post
I hope to solve the duplicate content issue and stop it happening again.
Since doing this my SERPs have dropped massively. Is what I have done wrong or bad? How would I fix it?
Hope this makes sense. Thanks for your help on this, it's appreciated.
Andrew
-
Hi There
Where are they appearing in WMT? In crawl errors?
You can also control crawling of parameters within webmaster tools - but I am still not quite sure if you are trying to remove these from the index or just prevent crawling (and if preventing crawling, for what reason?) or both?
-Dan
-
Hi Dan,
The issue is my blog had tagging switched on, and it caused canonicalization mayhem.
I switched it off, but the tags still appear in Google Webmaster Tools (GWMT). I removed the URLs via GWMT but they are still appearing. This has also caused me to plummet down the SERPs! I am hoping this is why my SERPs dropped, anyway. I am now trying to get to a point where Google just sees my blog posts and not ?Tag or ?Author or any other parameter that is going to cause me canonicalization pain. In the meantime I am sat waiting for Google to bring me back up the SERPs when things settle down, but it has been 2 weeks now, so maybe something else is up?
-
I'm wondering why you want to block crawling of these URLs - I think what you're going for is to not index them, yes? If you block them from being crawled, they'll remain in the index. I would suggest considering robots meta noindex tags - unless you can describe in a little more detail what the issue is?
-Dan
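If the noindex route is taken, it helps to verify the tag is actually being emitted on the tag/author/archive pages before waiting on Google to recrawl. Below is a minimal sketch using Python's standard-library HTML parser; the markup and function names are illustrative, not taken from the real site.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""

    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots_directives.append((attrs.get("content") or "").lower())

def is_noindexed(html_text):
    """True if any robots meta tag on the page contains a noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html_text)
    return any("noindex" in directive for directive in parser.robots_directives)

# Illustrative markup, not from the real site:
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # True
```

Keep in mind the trade-off Dan describes: if a page is blocked in robots.txt, Googlebot never fetches it and so never sees the noindex tag.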
-
Ok, then you should be all set if your tests in GWMT did not indicate any errors.
-
Thanks it goes straight to www.mysite.com/Blog
-
Yup, I understand that you want to see your main site. This is why I recommended blocking only /Blog and not / (your root domain).
However, many blogs have a landing page. Does yours? In other words, when you click on your blog link, does it take you straight to Blog/posts or is there another page in between, eg /Blog/welcome?
If it does not go straight into Blog/posts you would want to also allow the landing page.
Does that make sense?
-
The structure is:
www.mysite.com - want to see everything at this level and below it
www.mysite.com/Blog - want to BLOCK everything at this level
www.mysite.com/Blog/posts - want to see everything at this level and below it
-
Well what Martijn (sorry, I spelled his name wrong before) and I were saying was not to forget to allow the landing page of your blog - otherwise this will not be indexed as you are disallowing the main blog directory.
Do you have a specific landing page for your blog or does it go straight into the /posts directory?
I'd say there's nothing wrong with allowing both Blog/Post and Blog/post just to be on the safe side...honestly not sure about case sensitivity in this instance.
-
"We're getting closer David, but after reading the question again I think we both miss an essential point ;-)" What was the essential point you missed. sorry I don't understand. I don;t want to make a mistake in my Robot.txt so would like to be 100% sure on what you are saying
-
Thanks guys so I have
User-agent: *
Disallow: /Blog/
Allow: /Blog/post
Allow: /Blog/Post
That works, and my home page also works. Is there anything wrong with including both uppercase "Post" and lowercase "post"? It is lowercase on the site but I want uppercase "P" just in case. Is there a way to make the entry case-insensitive?
Thanks
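On the case-sensitivity question: robots.txt path matching is case-sensitive, and there is no standard way to make a single entry case-insensitive, so listing both spellings is the practical fix. Here is a quick sketch using Python's standard-library robots.txt parser (www.mysite.com is a placeholder; note this parser applies the first rule that matches, which is why the Allow lines come before the Disallow — Google instead applies the most specific match, so order matters less there).

```python
import urllib.robotparser

# Only the lowercase directory is allowed here:
RULES_ONE_CASE = """
User-agent: *
Allow: /Blog/post
Disallow: /Blog/
"""

# Both spellings allowed, as in the config above:
RULES_BOTH_CASES = """
User-agent: *
Allow: /Blog/post
Allow: /Blog/Post
Disallow: /Blog/
"""

def can_fetch(rules, url):
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(rules.splitlines())
    return parser.can_fetch("Googlebot", url)

# With only the lowercase rule, the capitalised path stays blocked:
print(can_fetch(RULES_ONE_CASE, "http://www.mysite.com/Blog/Post/Hello.aspx"))    # False
print(can_fetch(RULES_ONE_CASE, "http://www.mysite.com/Blog/post/hello.aspx"))    # True
# Listing both spellings covers it:
print(can_fetch(RULES_BOTH_CASES, "http://www.mysite.com/Blog/Post/Hello.aspx"))  # True
```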
-
Correct, Martijn. Good catch!
-
There was a reason that I said he should test this!
We're getting closer David, but after reading the question again I think we both missed an essential point ;-). As written, we also exclude the robots from crawling the 'homepage' of the blog. If you have such a homepage, don't forget to also Allow it.
-
Well, no point in a blog that hurts your SEO.
I respectfully disagree with Martijn; I believe what you would want to do is disallow the Blog directory itself, not the whole site. If you Disallow: / and Allow: /Blog/Post, you are telling SEs not to index anything on your site except for /Blog/Post.
I'd recommend:
User-agent: *
Disallow: /Blog/
Allow: /Blog/Post
This should block off the entire Blog directory except for your post subdirectory. As Martijn stated, always test before you make real changes to your robots.txt.
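Besides testing in GWMT, the suggested rules can also be sanity-checked programmatically. Below is a sketch using Python's standard-library urllib.robotparser (www.mysite.com is a placeholder; the Allow line is listed first because this parser applies the first matching rule, whereas Google applies the longest, most specific match).

```python
import urllib.robotparser

# The rules suggested above; the Allow line comes first because this
# parser applies the first rule that matches the URL path.
ROBOTS_TXT = """
User-agent: *
Allow: /Blog/Post
Disallow: /Blog/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Placeholder URLs standing in for the real site:
for url in [
    "http://www.mysite.com/",                        # homepage: allowed
    "http://www.mysite.com/Blog/",                   # blog index: blocked
    "http://www.mysite.com/Blog/?tag=ABC",           # tag page: blocked
    "http://www.mysite.com/Blog/Post/My-Post.aspx",  # post: allowed
]:
    print(url, "->", parser.can_fetch("Googlebot", url))
```

Note that this also confirms Martijn's point: the blog's own landing page /Blog/ is blocked, so if it should stay indexed it needs its own Allow line.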
-
That would be something like this. Please check or test this within Google Webmaster Tools to make sure it works, because I don't want to screw up your whole site. What this does is disallow your complete site and allow just the /Blog/Post URLs.
User-agent: *
Disallow: /
Allow: /Blog/Post