International Targeting - Google Search Console not recognizing the tags
-
Hi,
We are facing a problem with our international targeting tags not being recognized by Google Search Console.
This is the URL to which we added the following tags:
URL:
http://kilgray.com/memoq/2015-100/help-en/index.html
TAGS:
Flang tool Result: http://screencast.com/t/rrBgcr1X
Search Console result: http://screencast.com/t/fP45ZR2c
I am a bit lost here, as the tags were also validated by several members of the community. Is this because of the frames? (Yes, the site is built with frames.)
Thanks for your help!
-
Hi Aleyda. I just stumbled on this thread because I'm having the exact same problem as Kilgray Marketing – Google Search Console isn't recognizing the hreflang tags on my client's site:
https://cbisonline.com/eu
I realize this thread is closed since you've already answered the original question, but I was hoping you might be able to provide some insight on my situation. Any help would be greatly appreciated! Thanks in advance...
-
Hi there,
The problem is indeed the frames. If you take a look at Google's cached version of that page, http://kilgray.com/memoq/2015-100/help-en/index.html, you'll see that it doesn't show the hreflang annotations you're adding, or in fact any of the page's content: http://webcache.googleusercontent.com/search?q=cache:1jRFSsVC_MwJ:kilgray.com/memoq/2015-100/help-en/index.html%3Fproject_home&num=1&hl=en&gl=es&strip=0&vwsrc=0
The issue is that with frames it's not only the hreflang annotations that Google can't identify; all of the content you have in there is invisible too, so Google won't be able to rank the pages that use it... which is much more critical than the hreflang annotations.
My recommendation would be to replace the frames and make sure that all of the content is served in a single HTML document at a time... if you then include the hreflang annotations in the head area of that HTML, Google will be able to identify them... as well as your content.
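For example, once the content is served as a single HTML document, the head of each language version could carry annotations along these lines (a minimal sketch; the German URL here is just an illustrative placeholder, not necessarily the site's real one):

<head>
  <!-- self-referencing annotation for the English version -->
  <link rel="alternate" hreflang="en" href="http://kilgray.com/memoq/2015-100/help-en/index.html" />
  <!-- placeholder URL for the German version -->
  <link rel="alternate" hreflang="de" href="http://kilgray.com/memoq/2015-100/help-de/index.html" />
  <!-- fallback for users whose language doesn't match any version -->
  <link rel="alternate" hreflang="x-default" href="http://kilgray.com/memoq/2015-100/help-en/index.html" />
</head>

Each language version needs the full set of annotations, including a self-referencing one, so the tags confirm each other across versions.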
Thanks,
Aleyda
-
Hi,
Thanks for the reply.
I do know about the noindex, but that is actually something new. We set it up today because this specific version is old and we don't want it indexed anymore. The page you suggest is actually the new version, and we will certainly add hreflang tags there. But what is the point if Google can't even read them?
I added the hreflang tags more than two months ago (when there were no noindex tags) and even so Google couldn't crawl the page. I am not sure if it is worth doing.
-
1st - in case you don't know - you've got a meta robots noindex tag, so the page can't be indexed.
2nd - the page you are trying to add the hreflang tags to is a frameset.
There is no text on it, not a single word, just two frames. For Google, the frameset is not the page it would show in search results; it takes the frame instead: memoq/current/help-de/memoq_help_title_page.html
(maybe you could add the right hreflang tags there too - Google has indexed it)
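To picture it, the frameset source boils down to something like this (a simplified sketch, not the site's exact markup; the first frame URL is made up):

<html>
<head>
  <meta name="robots" content="noindex" />
  <!-- any hreflang tags placed here sit on a page with no indexable text -->
</head>
<frameset cols="25%,75%">
  <frame src="memoq_help_toc.html" /> <!-- hypothetical navigation frame -->
  <frame src="memoq/current/help-de/memoq_help_title_page.html" />
</frameset>
</html>

Google indexes the inner frame documents on their own, which is why the frameset page itself never shows your tags or your content.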
I have never used framesets, and certainly never in more than one language.
Maybe somebody else has an idea about the best practice for you - that's all I can say about this case.