Robots.txt
-
Google Webmaster Tools says our website has low-quality pages, so we have created a robots.txt file and listed all the URLs that we want removed from Google's index.
Is this enough to solve the problem?
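For reference, a minimal sketch of the approach described; the paths are illustrative, not the site's actual URLs:
User-agent: *
# One Disallow line per URL we want kept away from crawlers
Disallow: /detaylar/example-listing-1
Disallow: /detaylar/example-listing-2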
-
Ah, it's difficult to see anything on the page because I can't read Turkish.
The only thing you should know is that every single page on a website should have unique content. So if two pages are exactly, or almost exactly, the same, Google will treat it as duplicate content.
-
Yeah, that's definitely a duplicate content issue you're facing.
However, did you know that each of your pages has this little tag right at the top of it? <meta name="robots" content="noindex" />
...Seems like it's already done.
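For reference, that tag sits in the page's <head> and tells search engines not to index the page (a generic example, not the site's actual markup):
<head>
  <meta name="robots" content="noindex" />
</head>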
-
Thank you, Wesley.
Here are our pages, but the language is Turkish:
http://www.enakliyat.com.tr/detaylar/besiktas-basaksehir-ev-esyasi-tasinma-6495
http://www.enakliyat.com.tr/detaylar/ev-tasima-6503
http://www.enakliyat.com.tr/detaylar/evden-eve-nakliyat-6471
Our site is a home-to-home moving listing portal. Consumers who want to move their home fill out a form so that moving companies can quote prices. We were generating the listing page URLs from the titles submitted by customers. Unfortunately, we have since realized that many customers entered the same content.
-
Well, now I'm confused about the problem. If the issue is duplicate content, then the answer is definitely to block them with robots.txt and/or use a rel=canonical tag on each.
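For reference, a rel=canonical tag goes in the <head> of each duplicate page and points at the one version you want indexed (the URL below is one of the asker's listing pages, chosen purely for illustration):
<link rel="canonical" href="http://www.enakliyat.com.tr/detaylar/ev-tasima-6503" />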
However, to my knowledge, the Google notice you are referencing has nothing to do with duplicate content.
There is always a way to improve your content. Filling out a form auto-generates a page, per my understanding. Great. Have it auto-generate a better-looking page!
- My 2 cents. Hope it's helpful.
-
I agree with Jesse and Allen.
Of course, the problems in Google Webmaster Tools will disappear if you no-index those pages.
Low-quality pages aren't a good thing for visitors either. It's difficult to give you any advice other than the very broad advice: improve the quality of the pages.
If you could give us some links to let us know which website and which pages we're talking about, then we could give you better advice on how exactly to improve those pages.
-
Iskender,
Our experience has been YES: Google does follow your robots.txt file and will skip indexing those pages. If they have flagged a problem, the problem will disappear.
My concern is, what is causing the "Low-quality" error message? In the long run, wouldn't it be better to correct the page to improve the quality? I look at each page as a way to qualify for a greater number of keywords, hence attracting more attention for your website.
We have had several pages flagged as duplicate content when we never wanted the duplicate page indexed anyway. Once we included the page in the robots.txt file, the flagged error disappeared.
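For reference, a minimal sketch of that kind of robots.txt entry (the path is illustrative):
User-agent: *
# Keep the unwanted duplicate away from crawlers
Disallow: /duplicate-page.html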
-
Why not improve the pages instead?
If Google says they are low quality, what makes you think any viewer will stick around? Bet the bounce rate is exceptionally high on those pages, maybe even site-wide.
Always remember to design pages for readers and not Google. If Google tells you your pages suck, they are probably just trying to help you and give you a hint that it's time to improve your site.
Related Questions
-
Website URL, Robots.txt and Google Search Console (www. vs non www.)
Technical SEO | Badiuzz
Hi Moz Community,
I would like to request your kind assistance on domain URLs: www. vs non-www. Recently, my team moved to a new website, and a 301 redirection has been put in place.
Original URL: https://www.example.com.my/ (with www.)
New URL: https://example.com.my/ (without www.)
Our current robots.txt sitemap: https://www.example.com.my/sitemap.xml (with www.)
Our Google Search Console property: https://www.example.com.my/ (with www.)
Questions:
1. How should I standardize these so that the Google crawler can effectively crawl my website?
2. Do I have to change my website URLs back to the www. version, or do I just need to update my robots.txt?
3. How can I update my Google Search Console property to reflect the non-www. version? I cannot see the option in the dashboard.
4. Are there any other to-dos, such as canonicalization, or should I wait for Google to automatically detect and change it, especially in the GSC property?
Really appreciate your kind assistance. Thank you,
Badiuzz
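A minimal sketch of one way to keep everything aligned on the new non-www. version (the domain is the asker's example; treating non-www. as the canonical host is an assumption):
# Hypothetical robots.txt served at https://example.com.my/robots.txt
User-agent: *
Allow: /
# Reference the sitemap on the same (non-www.) host the site now uses
Sitemap: https://example.com.my/sitemap.xml
-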
Shopify robots blocking stylesheets causing inconsistent mobile-friendly test results?
Technical SEO | nhhernandez
One of our Shopify sites suffered an extreme rankings drop. Recent Google algorithm updates include mobile-first, so I tested the site, and our team got different mobile-friendly test results. However, Search Console is also flagging pages as not mobile-friendly. So, while we as end users see the site as OK on mobile, this may not be the case for Google? I researched inconsistent mobile test results and found answers saying it may be due to robots.txt blocking stylesheets. Do you recognise any blocked directory that might be affecting Google's rendering? We can't edit the Shopify robots.txt, unfortunately. Our dev said the only thing that stands out to him is Disallow: /design_theme_id, and the rest shouldn't be hindering Google's bots. Here are some of the files blocked:
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkout
Disallow: /9103034/checkouts
Disallow: /9103034/orders
Disallow: /carts
Disallow: /account
Disallow: /collections/+
Disallow: /collections/%2B
Disallow: /collections/%2b
Disallow: /blogs/+
Disallow: /blogs/%2B
Disallow: /blogs/%2b
Disallow: /design_theme_id
Disallow: /preview_theme_id
Disallow: /preview_script_id
Disallow: /discount/*
Disallow: /gift_cards/*
Disallow: /apple-app-site-association
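For what it's worth, on platforms where robots.txt can be edited (Shopify does not allow this, as noted above), a common fix for rendering issues is to explicitly allow the asset paths Google needs; a generic sketch:
User-agent: Googlebot
# Let Google fetch the stylesheets and scripts it needs to render pages
Allow: /*.css
Allow: /*.js
-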
Can I Block https URLs using Host directive in robots.txt?
Technical SEO | TJC.co.uk
Hello Moz Community,
Recently, I have found that Google's bots have started crawling the HTTPS URLs of my website, which is increasing the number of duplicate pages on our website. Instead of creating a separate robots.txt file for the HTTPS version of my website, can I use the Host directive in robots.txt to suggest to Google's bots which is the original version of the website?
Host: http://www.example.com
I was wondering if this method will work and suggest to Google's bots that the HTTPS URLs are a mirror of this website. Thanks for all of the great responses! Regards,
Ramendra
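For reference, a minimal sketch of where such a directive would sit; note that Host originated as a Yandex-specific directive, and whether Google honours it is exactly the open question here, so treat this as illustrative only:
# Hypothetical robots.txt for the HTTP version of the site
User-agent: *
Disallow:
# Yandex-originated directive naming the preferred host;
# Google is not documented to support it.
Host: http://www.example.com
-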
Guys & Gals, anyone know if urllist.txt is still used?
Technical SEO | danwebman
I'm using a tool which generates urllist.txt, and looking on the SEO forums it seems that Yahoo used to use this. What I'd like to know is: is it still used anywhere, and should we have it on the site?
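For context, urllist.txt (historically used by Yahoo) was simply a plain-text list of URLs, one per line; the URLs below are illustrative:
http://www.example.com/
http://www.example.com/about/
http://www.example.com/contact/
-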
Robots.txt anomaly
Technical SEO | Dan-Lawrence
Hi, I'm monitoring a site that's had a new design relaunch and a new robots.txt added. Over the period of a week since launch, Webmaster Tools has shown a steadily increasing number of blocked URLs (now at 14). In the robots.txt file, though, there are only 12 lines with the Disallow command. Could this be occurring because a line in the command could refer to more than one page/URL? They all look like single URLs, for example:
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
etc., etc. And is it normal for Webmaster Tools' reporting of robots.txt-blocked URLs to steadily increase in number over time, as opposed to being identified straight away? Thanks in advance for any help/advice/clarity on why this may be happening. Cheers, Dan
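To illustrate the "one line, many URLs" point: a Disallow rule is a path prefix, so a single line can match every URL beneath that path (the example URLs are hypothetical):
Disallow: /wp-content/plugins
# The one rule above blocks all of these, and anything else under it:
#   /wp-content/plugins/
#   /wp-content/plugins/some-plugin/style.css
#   /wp-content/plugins/some-plugin/js/script.js
-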
RegEx help needed for robots.txt potential conflict
Technical SEO | MSTJames
I've created a robots.txt file for a new Magento install and used an existing sitemap that was on the Magento help forums, but the trouble is I can't decipher something. It seems that I am allowing and disallowing access to the same expression for pagination. My robots.txt file (and a lot of other Magento sitemaps, it seems) includes both:
Allow: /*?p=
and
Disallow: /?p=&
I've searched for help on RegEx and I can't see what "&" does, but it seems to me that I'm allowing crawler access to all pagination URLs, but then possibly disallowing access to all pagination URLs that include anything other than just the page number? I've looked at several resources and there is practically no reference to what "&" does... Can anyone shed any light on this, to ensure I am allowing suitable access to a shop? Thanks in advance for any assistance.
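One clarifying sketch, since robots.txt patterns are not full RegEx: only * (any sequence of characters) and $ (end-of-URL anchor) are special, so & is just a literal ampersand (the interpretation below is an assumption about the intended Magento rules):
Allow: /*?p=
# Matches any URL containing "?p=", e.g. /shoes.html?p=2
Disallow: /?p=&
# "&" has no special meaning here; since rules match from the start of
# the path, this pattern as written only matches URLs beginning "/?p=&".
-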
How to allow one directory in robots.txt
Technical SEO | poolguy
Hello, is there a way to allow a certain child directory in robots.txt but keep all others blocked? For instance, we've got external links pointing to /user/password/, but we're blocking everything under /user/. And there are too many /user/somethings/ to just block every one BUT /user/password/. I hope that makes sense... Thanks!
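A minimal sketch of the usual pattern for this, assuming crawlers that support Allow (as Googlebot does); the longer, more specific rule wins:
User-agent: *
# Block everything under /user/ ...
Disallow: /user/
# ... except this one child directory
Allow: /user/password/
-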
Should search pages be disallowed in robots.txt?
Technical SEO | MichaelWeisbaum
The SEOmoz crawler picks up "search" pages on a site as having duplicate page titles, which of course they do. Does that mean I should put a "Disallow: /search" rule in my robots.txt? When I put the URLs into Google, they aren't coming up in any SERPs, so I would assume everything's OK. I try to abide by the SEOmoz crawl errors as much as possible; that's why I'm asking. Any thoughts would be helpful. Thanks!
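For reference, a minimal sketch of what that rule would look like (treating /search as the site's search path, per the question):
User-agent: *
# Blocks /search and everything beneath it, e.g. /search?q=widgets
Disallow: /search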