Robots.txt issue for international websites
-
In Google.co.uk, our US-based site (abcd.com) is showing:
A description for this result is not available because of this site's robots.txt – learn more
But the UK website (uk.abcd.com) is working properly. We would like the .com result to disappear entirely, if possible. How can we fix it?
Thanks in advance.
-
Can you share any information about your robots.txt?
-
My main problem is on the homepage. Both sites host similar types of products and brands.
You may check the screenshot. Sorry, I had to blank out the text.
Thanks in advance.
-
Is it showing that for every page, or only some pages? If so, which types of pages? What are the contents of your robots.txt file for the US site?
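For context on the message in the question above: it appears when a URL is in Google's index but robots.txt blocks Google from crawling it, so no description can be fetched. A minimal sketch of checking what a robots.txt blocks, using Python's standard urllib.robotparser; the rules and domain below are assumptions for illustration, not the site's actual file:

```python
import urllib.robotparser

# Hypothetical robots.txt rules for illustration; substitute the real
# lines served at https://abcd.com/robots.txt.
rules = [
    "User-agent: *",
    "Disallow: /",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# A blocked URL can still appear in results (with no description) if
# other sites link to it: Disallow stops crawling, not indexing.
print(rp.can_fetch("Googlebot", "https://abcd.com/"))  # False
```

Note that because Disallow stops crawling rather than indexing, making the .com result disappear entirely generally requires the page to be crawlable so Google can see a noindex directive (or hreflang annotations pointing UK users at uk.abcd.com).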
Related Questions
-
Should I have multiple websites for my different brands or one main website with different tabs/areas?
My client creates apps. As well as the apps they build for clients, they have made some of their own that cover various topics. Currently they have individual websites for each of these apps, and a website for their app-making business. They are asking whether they should just have one website: their app-building site, which also includes information about the two apps they've built themselves. My feeling is it's better to keep them separate. The app-building site is trying to appeal to a B2B audience and win business building new apps. AppA is trying to help care homes and carers streamline their business, and AppB is trying to help workplace and employee welfare. Combining them all will mean lots of mixed messaging/keywords even if we have dedicated areas on the site. I also think it will limit how much content we can create on each without being completely overwhelming for the user. If we keep them all separate then we can have a very clear user journey. I would of course recommend having blog posts or some sort of landing page to link to AppA and AppB's websites. Thoughts? Thank you!
-
Please have a look at my website. I am stuck here.
Here might be the reason: I had loads of unnecessary content, so I gave those pages the noindex tag. I also tried to change the robots.txt file, but that shouldn't be a problem for SEO. First my site had a country-specific domain, and then a year later I changed it to .com so as to target globally (mainly the US). My site is ranking in that specific country, on page 3 almost every time, but it has never been close to page 1. It's not ranking in other countries, despite the fact that I've not targeted any specific country since the domain was changed. A month ago, I deleted 404 pages and all the thin content which was indexed in the SERPs, and also deleted the duplicated and copied content. Meanwhile I've also tried changing the headings in some of the product articles, as they were causing a duplicate-heading issue. I've recently switched my hosting from a UK-based server to a US-based server because the last host had bad downtime. So far nothing seems to be working in my favor. I'm just tired of resolving issues and finding zero results in return. This is my devil of a site: 10stuffs.com. Please check it out and tell me why my site is not ranking at all and what I should do.
-
Robots.txt blocked internal resources Wordpress
Hi all, We've recently migrated a Wordpress website from staging to live, but the robots.txt was deleted. I've created the following new one:

User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Allow: /wp-admin/admin-ajax.php

However, in the site audit on SemRush, I now get the mention that a lot of pages have issues with blocked internal resources in the robots.txt file. These blocked internal resources are all cached and minified CSS elements: links, images and scripts. Does this mean that Google won't crawl some parts of these pages with blocked resources correctly, and thus won't be able to follow these links and index the images? In other words, is this any cause for concern regarding SEO? Of course I can change the robots.txt again, but will URLs like https://example.com/wp-content/cache/minify/df983.js end up in the index? Thanks for your thoughts!
-
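On the robots.txt in the question above: per Google's documented behavior, when Allow and Disallow rules both match a URL, the most specific (longest) matching path wins, and Allow wins ties. A rough sketch of that precedence applied to these rules; this is an illustration only, with wildcard (* and $) handling omitted, not a full robots.txt parser:

```python
# The rules from the question's robots.txt, in (kind, path) form.
RULES = [
    ("allow", "/"),
    ("disallow", "/wp-admin/"),
    ("disallow", "/wp-includes/"),
    ("disallow", "/wp-content/plugins/"),
    ("disallow", "/wp-content/cache/"),
    ("disallow", "/wp-content/themes/"),
    ("allow", "/wp-admin/admin-ajax.php"),
]

def is_allowed(path):
    """Apply Google's longest-match precedence: longest matching rule
    wins; on a tie, Allow (True) beats Disallow (False)."""
    matches = [(len(p), kind == "allow") for kind, p in RULES if path.startswith(p)]
    matches.sort()
    return matches[-1][1] if matches else True

print(is_allowed("/wp-content/cache/minify/df983.js"))  # False
print(is_allowed("/wp-admin/admin-ajax.php"))           # True
```

So the cached/minified assets flagged by SemRush are indeed blocked for Googlebot: "Disallow: /wp-content/cache/" (18 characters) outranks "Allow: /" (1 character), while admin-ajax.php stays crawlable because its Allow rule is the longest match.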
Website cache has been removed
Hi Team, I am facing an issue with caching of the website; despite various R&D, I couldn't find the solution, as the code seems OK to me. Can any one of you check and let me know why the home page and some of the product pages have been removed from the cache? See here: https://bit.ly/2Kna3PD Appreciate a quick response! Thanks
-
Duplicate content issue
Hello! We have a lot of duplicate content issues on our website. Most of the pages with these issues are dictionary pages (about 1,200 of them). They're not exactly duplicates, but each contains a different word with a translation, picture and audio pronunciation (example: http://anglu24.lt/zodynas/a-suitcase-lagaminas). What's the best way of solving this? We probably shouldn't disallow dictionary pages in robots.txt, right? Thanks!
-
Pages getting into Google Index, blocked by Robots.txt??
Hi all, So yesterday we set out to remove URLs that got into the Google index but were not supposed to be there, due to faceted navigation. We searched for the URLs by using this in Google Search:

site:www.sekretza.com inurl:price=
site:www.sekretza.com inurl:artists=

It brings up a list of "duplicate" pages, and they have the usual: "A description for this result is not available because of this site's robots.txt – learn more." So we removed them all, and Google removed them all, every single one. This morning I did a check, and I find that more are creeping in. If I take one of the suspected dupes to the robots.txt tester, Google tells me it's blocked, and yet it's appearing in their index. I'm confused as to why a path that is blocked is able to get into the index. I'm thinking of lifting the robots.txt block so that Google can see that these pages also have a meta NOINDEX,FOLLOW tag, but surely that will waste my crawl budget on unnecessary pages? Any ideas? Thanks.
-
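A sketch of the approach the faceted-navigation question above is considering (lift the robots.txt block and serve noindex, follow so Google can finally see the directive and drop the URLs). The parameter names come from the question's site: queries, but the helper itself is purely illustrative, not an existing API:

```python
from urllib.parse import urlparse, parse_qs

# Faceted-navigation parameters seen in the question's site: searches.
FACET_PARAMS = {"price", "artists"}

def robots_header(url):
    """Return an X-Robots-Tag value for faceted URLs, else None.

    A noindex directive (header or meta tag) only works if robots.txt
    allows the URL to be crawled: a Disallowed URL can stay in the
    index, because Google never fetches it and never sees noindex.
    """
    query = parse_qs(urlparse(url).query)
    if FACET_PARAMS & query.keys():
        return "noindex, follow"
    return None

print(robots_header("https://www.sekretza.com/shop?price=10-20"))  # noindex, follow
print(robots_header("https://www.sekretza.com/shop"))              # None
```

On the crawl-budget worry: once the faceted URLs have dropped out of the index, the Disallow rules can be restored so Googlebot stops fetching them again.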
My website is not being indexed
Hello Experts, When I search site:http://www.louisvuittonhandbagss.com or just enter http://www.louisvuittonhandbagss.com on Google, I am not getting my website. I have done the following steps: 1. I have submitted sitemaps, and all the sitemaps are indexed. 2. I have used the GWT Fetch as Google feature. 3. I have submitted my website to top social bookmarking websites and to some classified sites as well. Please help.
-
Will disallowing in robots.txt noindex a page?
Google has indexed a page I wish to remove. I would like to add a meta noindex, but the CMS isn't allowing me to right now. A suggestion to disallow the page in robots.txt would simply stop them crawling, I expect, or is it also an instruction to noindex? Thanks