No description on Google/Yahoo/Bing, updated robots.txt - what is the turnaround time or next step for visible results?
-
Hello,
New to the Moz community and thrilled to be learning alongside all of you! One of our clients' sites is currently showing a blocked meta description due to an old robots.txt file (e.g. "A description for this result is not available because of this site's robots.txt").
We have updated the site's robots.txt to allow all bots. The meta tag has also been updated in WordPress (via the Yoast SEO plugin).
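For reference, an allow-all robots.txt is just a wildcard user-agent with an empty Disallow rule. As a rough sanity check (a sketch, not part of the original thread), you can parse the file locally with Python's standard-library `urllib.robotparser` and confirm that bots are permitted:

```python
from urllib.robotparser import RobotFileParser

# An allow-all robots.txt: an empty Disallow value means nothing is blocked.
ROBOTS_TXT = """\
User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Any bot should now be allowed to fetch any path, including the homepage.
print(parser.can_fetch("Googlebot", "/"))        # True
print(parser.can_fetch("*", "/any/page.html"))   # True
```

This only checks the rules as written; search engines still have to re-fetch the file before their cached copy (and the SERP message) updates.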
See image here of Google listing and site URL: http://imgur.com/46wajJw
I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools.
When can we expect these results to update? Is there a step I may have overlooked?
Thank you,
Adam -
Great, the good news is following submission of a sitemap via Webmaster Tools, things appear to be remedied on Google! It does seem, however, that the issue still persists on Bing/Yahoo.
Some of the 404's are links from an old site that weren't carried over following my redesign; so that will be handled shortly as well.
I've submitted the sitemap via Bing Webmaster Tools, so I presume it's a similar matter of simply 'waiting on Bing'?
Many thanks for your valuable insight!
-
Hi There
It seems like there are some other issues tangled up in this.
- First off, it looks like some non-www URLs indexed in Google are 301 redirecting to www but then 404'ing. It's good that they redirect to www, but they should end up on active pages.
- The non-www homepage is the one showing the robots.txt message. This should resolve in a week or two, once Google re-crawls the non-www URL and sees the 301. The actual fix is getting the non-www URL out of the index and having Google rank the www homepage instead; the www homepage's description shows up just fine.
- You may want to register the non-www version of the domain in Webmaster Tools as well, and clean up any errors that pop up there.
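The redirect issue in the first bullet can be stated as a simple rule: every hop in a chain should be a 301/302, and the chain must end on a live 200 page. Here is a small illustrative sketch (the helper function and the hop data are hypothetical, standing in for what a crawler would report):

```python
# Sketch: classify a redirect chain as healthy or broken. Each hop is a
# (status_code, url) pair, as a crawler would observe them in order.

def chain_is_healthy(hops):
    """Healthy = zero or more 301/302 hops ending in a final 200.
    A 404 (or any non-redirect status) mid-chain or at the end is broken."""
    if not hops:
        return False
    *redirects, final = hops
    if any(status not in (301, 302) for status, _ in redirects):
        return False
    return final[0] == 200

# non-www redirects to www but the target 404s -> the pattern described above
bad = [(301, "http://example.com/old"), (404, "http://www.example.com/old")]
# non-www redirects to an active www page -> fine
good = [(301, "http://example.com/"), (200, "http://www.example.com/")]

print(chain_is_healthy(bad))   # False
print(chain_is_healthy(good))  # True
```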
-
I just got this figured out, let's try dropping this into Google!
-
The 404 error could be around a common error experienced with Yoast sitemaps: http://kb.yoast.com/article/77-my-sitemap-index-is-giving-a-404-error-what-should-i-do
The first step is to try resetting the permalink structure; that could resolve the 404 error you're seeing. You definitely want to fix the sitemap 404 so you can submit a crawlable sitemap to Google.
-
Thanks! It would seem that the Sitemap URL http://www.altaspartners.com/sitemap_index.xml brings up a 404 page, so I'm a bit confused with that step - but otherwise it appears to be very clear!
-
In WordPress, go to the Yoast plugin and locate the sitemap URL / settings. Plug the sitemap URL into your browser and make sure that it renders properly.
Once you have that exact URL, drop it into Google Webmaster Tools and let it process. Google will let you know if they found any errors that need correcting. Once submitted, you just need to wait for Google to update its index and reflect your site's meta description.
Yoast has a great blog that goes in depth about its sitemap features: https://yoast.com/xml-sitemap-in-the-wordpress-seo-plugin/
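A healthy Yoast sitemap index is just an XML file whose `<sitemap><loc>` entries point at the child sitemaps. As a quick structural check (the sample document below is hypothetical, shaped like Yoast's output), you can parse it with Python's standard-library `xml.etree` and list the child sitemap URLs:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap index in the shape Yoast generates: a <sitemapindex>
# whose <sitemap><loc> entries point at the child sitemaps.
SITEMAP_INDEX = """\
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>http://www.example.com/post-sitemap.xml</loc></sitemap>
  <sitemap><loc>http://www.example.com/page-sitemap.xml</loc></sitemap>
</sitemapindex>
"""

# The sitemaps.org namespace must be registered to find the elements.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP_INDEX)
locs = [loc.text for loc in root.findall("sm:sitemap/sm:loc", ns)]
print(locs)
```

If your live sitemap URL 404s, there is nothing to parse at all, which is exactly the Yoast permalink-reset situation described above.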
-
Sounds great Ray, how would I go about checking these URLs for the Yoast sitemap?
-
Yoast sets up a pretty efficient sitemap. Make sure the sitemap URL settings are correct, load it up in the browser to confirm, and submit your sitemap through GWT - that will help trigger a new crawl of the site and hopefully an update to their index so your meta descriptions begin to show in the SERPs.
-
Hi Ray,
With Fetch as Googlebot, I see a redirection for the non-www and a correct fetch for the www. Using Yoast SEO, it would seem the sitemap link leads to a 404?
-
Ha, that's exactly what I did.
I'm not showing any restrictions in your robots.txt file and the meta tag is assigned appropriately.
Have you tried to fetch the site with the Webmaster Tools 'fetch as googlebot' tool? If there is an issue, it should be apparent there. Doing this may also help get your page re-crawled more quickly and the index updated.
If everything is as it should be and you're only waiting on a re-index, that usually takes no longer than two weeks (for very infrequently indexed websites). Fetching with the Google bot may speed things up and getting an external link on a higher trafficked page could help as well.
Have you tried resubmitting a sitemap through GWT as well? That could be another trick to getting the page re-crawled more quickly.
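Fetch as Googlebot inside Webmaster Tools is the authoritative check, but you can roughly approximate it from your own machine by sending a request with a Googlebot-style User-Agent. The sketch below (an assumption, not part of the original replies; the UA string may differ from what Google currently sends) only builds the request with Python's `urllib.request` rather than sending it, to show the setup:

```python
from urllib.request import Request

# A Googlebot-style User-Agent string (assumed for illustration).
GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

# Construct (but do not send) a request carrying that User-Agent.
# urllib normalizes stored header names to "Capitalized" form.
req = Request(
    "http://www.example.com/",
    headers={"User-Agent": GOOGLEBOT_UA},
)
print(req.get_header("User-agent"))
```

To actually send it you would pass `req` to `urllib.request.urlopen` and compare the response against what a normal browser fetch returns.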
-
Hello Ray,
Specifically, the firm name, which is spelled a-l-t-a-s p-a-r-t-n-e-r-s (it is easy to confuse with "Atlas Partners", which is another company altogether).
-
What was the exact search term you used to bring up those SERPs?
When I search 'atlastpartners' and 'atlastpartners.com' it brings up your site with a meta description.