How come some local 7-pack listings link to the site and some link to the G+ page?
-
Does anyone know how to fix this issue?
Even though the website has been added to the site's profile, Google continues to point the main "title tag" link to the G+ page and not the actual website domain. Thanks for any info in advance!
-
Hi Irving, the Google+ Local listing for Eyeglasses does not have a website listed, so that's why the link in Google search only goes to G+. Here is the Google+ Local page the search links to:
https://plus.google.com/112052218407614442300/about?hl=en
However, there may be an issue with that listing. If you view their Maps page, the link shows, but it does not show up on the Google+ Local page.
Looks like someone just made an update in Map Maker on May 23rd and added the website there. If you manage the listing, I'd make sure the website is also listed in the dashboard; I would not typically recommend editing in Map Maker.
-
Agree. "Mess" is an apt descriptor!
-
Thanks Tim. What a fantastically confusing mess Google has been making as they figure it out as they go along.
-
In this new example, the query was for the actual domain name since it is also the business name: eyeglasses.com. You are already being served the cluster of direct domain listings, so when the Google+ listing is finally shown, it shows the Google+ listing link (in a single pack).
Typically, the main reason this occurs in a larger local pack is a disconnect between Google+ and the business website. There is quite a bit of confusion going on with this now, since the Google+ Business page and the Google+ Local (Places) page are not fully merged in many cases. What has happened in these situations is that the business owner (or representative agency) set up a Google+ Business page but failed to claim/verify and optimize the Google+ Local (Places) page, which is still sitting there unlinked to the main website.
-
Sorry, maybe that was a bad example. This example for eyeglasses.com is verified and has a website, but it is still showing the G+ URL.
-
In the PetSitters Plus example, Google hasn't made the connection with a website. The most common reason you will see this issue is when there is confusion and Google hasn't made this connection, or the website doesn't exist, so it defaults to the Google+ page. This is fairly common in unclaimed/unverified listings where Google has only pulled the info from its data sources, which do not have the actual website info.
There are some other issues occurring due to the Google+ merge transition, but I don't think those issues are causing this example you have shown.
-
What is the site for PetSitters Plus? Because if it's that one, I don't see the website on the G+ page.
Related Questions
-
Google Index Issue - Indexing pages that don't exist
Hi All, I have noticed a weird issue when performing a search on Google to show me all the pages it is indexing for our site: site:www.one2create.co.uk It brings up most of our website pages, but then it also brings up a few HTTPS URLs (our site has not been converted to HTTPS yet) where the URL path, Title, and Meta Description are from one of our clients' websites (an Automotive Job site). When clicked, they take you to a generic 404 server error page, not our branded 404 page. The site that it has taken the URL, title, and meta description from is on a different server completely, so I don't see how it has even managed to get that information and link it to our site. Has anyone seen anything like this before? And what is the best way to fix it? We have asked Google to re-index the site but still no luck.
Search Behavior | Jvickery0
-
Is it better to find a page without the desired content, or not find the page?
Are there any studies that show which is best? If you find my page but not the specific thing you want on it, you may still find something of value. But if you don't, you may associate my site with poor results, which can be worse than finding what you want at a competitor's site. IOW, maybe it is best to have pages that ONLY and ALWAYS have the content desired. What do the studies suggest? I'm asking because I have content that maybe 1/3 of the time exists and 2/3 of the time doesn't... think 'out of stock' products. So I'm wondering if I should look into removing the page from being indexed during the 2/3, or should I keep it? If I remove it, then my concern is whether I lose the history/age factor that I've read Google finds important for credibility. Your thoughts?
Search Behavior | friendoffood0
-
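If the out-of-stock state is usually temporary, one middle ground for the question above is to noindex only products that are gone for good, so the URL keeps its history/age value while temporarily unavailable. A minimal sketch — the status values here are hypothetical, not from any particular e-commerce platform:

```python
def robots_meta(status: str) -> str:
    """Map a (hypothetical) product stock status to a robots meta tag."""
    if status == "discontinued":
        # Permanently removed products: drop from the index but let
        # crawlers still follow links on the page.
        return '<meta name="robots" content="noindex, follow">'
    # "in_stock" and "out_of_stock" both stay indexed: a temporarily
    # unavailable page keeps the history/age signals the poster worries about.
    return '<meta name="robots" content="index, follow">'

print(robots_meta("out_of_stock"))  # stays indexed
print(robots_meta("discontinued"))  # dropped from the index
```

The same decision could be made in the template layer of whatever CMS the store runs on; the point is to key the directive off permanence, not momentary availability.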
UK & Ireland Websites displaying same page differently in serps
Hi there, I have a Magento Enterprise store based in the UK, and we have an Irish version which we created after finding an increasing amount of traffic from Ireland. One thing that I don't understand is that if I search for one keyword on Google, the websites are displayed differently, when effectively they are identical websites. Here is how a result looks for my UK store: http://i.imgur.com/NKSt4Qq.png And here is how a result looks for my IE store: http://i.imgur.com/Cynv8Mz.png They both have the same Meta description, barring a few geographical words and terms. Both have on-page descriptions as well as products. Yet the IE results display the description and the UK results display products and a tiny snippet of the description. Any ideas on how I can make my UK pages display like the IE one? Or just why they are both displayed differently? Thanks
Search Behavior | tomhall900
-
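Not a fix for the snippet rendering itself, but with two effectively identical UK/IE stores it's worth making sure each page declares its regional counterpart via hreflang annotations, so Google treats them as variants rather than duplicates. A minimal sketch of generating the tag block — the URLs are hypothetical placeholders, not the poster's actual stores:

```python
# Hypothetical regional URLs for the same page on the UK and IE stores.
REGIONAL_URLS = {
    "en-gb": "http://www.example.co.uk/product",
    "en-ie": "http://www.example.ie/product",
}

def hreflang_tags(urls: dict) -> str:
    """Build the <link rel="alternate" hreflang="..."> block that each
    regional page should carry, listing every variant including itself."""
    lines = [
        f'<link rel="alternate" hreflang="{lang}" href="{href}" />'
        for lang, href in sorted(urls.items())
    ]
    return "\n".join(lines)

print(hreflang_tags(REGIONAL_URLS))
```

The identical block goes in the head of both the UK and IE versions of the page; each variant pointing at all the others is what makes the annotation valid.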
WMT reporting big drop in links today
Has anyone else noticed a huge drop in inbound links in Webmaster Tools today? I've seen a 50% slump overnight in 2 of my projects. Definitely something still stirring over at the Google data centres!
Search Behavior | FDFPres0
-
Using Google Analytics to See What Time of Day Visitors View My Site
Hi folks, My company has Google Analytics set up for all of our websites, but I am a bit stumped on something. Now, this may not be possible, but am I able to see what time of day visitors most frequently view my blog? I would like to time blog post publishing for when I know we have an influx of visitors, yet I cannot find this information in GA. Any input would be much appreciated. Regards,
Meghan
Search Behavior | Instabill
-
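GA can break traffic down by hour of day via a custom report's Hour dimension, but if you'd rather work from exported raw visit timestamps, you can bucket them yourself. A minimal sketch — the ISO timestamp format and sample data are assumptions, not an actual GA export:

```python
from collections import Counter
from datetime import datetime

def peak_hours(timestamps, top=3):
    """Bucket ISO-format visit timestamps by hour of day and
    return the busiest hours, best first."""
    hours = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    return [hour for hour, _ in hours.most_common(top)]

# Hypothetical visit timestamps standing in for a GA export.
visits = [
    "2013-05-20T09:15:00",
    "2013-05-20T09:42:00",
    "2013-05-20T14:05:00",
    "2013-05-21T09:30:00",
]
print(peak_hours(visits, top=1))  # the 9 o'clock hour dominates this sample
```

With a real export you'd feed in weeks of data and schedule posts shortly before the top hours, since the goal is to catch the influx, not trail it.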
Does Page Load Time Affect SEO Rankings?
I was curious about how much page load time affects rankings. Here's what I did: I put together a lot of interactive media on specific landing pages. Time-on-page from organic visitors went from 50 seconds to an average of 34 minutes. Bounce rate decreased by 20%. Page load time increased from 1 second to 6 seconds, and at peak times to 8 seconds (on a 56K test). In the meantime the page was re-indexed and re-cached. My questions: Would the time on page give higher rankings for the keyword? Would the decreased bounce rate enhance rankings? Would the page load time decrease rankings? Did anyone do a similar test? What were the results?
Search Behavior | HMCOE0
-
Bounce Rate and Time on Site
What would be the best way to decrease bounce rate and increase time on site for a Real Estate Website?
Search Behavior | bronxpad0
-
Blog posts not getting indexed and being outranked by scraper sites.
Our Google traffic has dropped significantly over the last year and now we're struggling to even get our blog posts indexed. It's been extremely discouraging and we're trying to do whatever we can to fix it. I've included a screenshot of our Google traffic as well as Pages Indexed according to Webmaster Tools: http://i.imgur.com/Wu1D8.jpg

The Problem
Our blog posts are frequently not getting indexed. Many times they are outranked by low-authority scraper sites, our Twitter/FB account, etc. Sometimes our homepage will rank instead of the blog post. Sometimes we'll break a news story, get tons of quality backlinks, and still be nowhere in Google. Pretty much the only Google traffic we see is from existing posts. Still 3,200 pages indexed when we have only 1,600 posts. I guess this isn't really a problem... just waiting for the meta noindex to take effect.

More details
We've seen no duplicate content or other warnings from Webmaster Tools. We've been constantly acquiring quality backlinks from credible sites. We deleted the useless content and fixed the canonical issues that were a result of switching servers.

History
Our site is a news/entertainment blog. The traffic usually spikes depending on what's going on in the news.
Nov 1, 2011 - Site kept maxing out at 30k+ visits so we switched servers.
Jan 30, 2012 - Hired a writer so we could focus on other aspects of the site.
Apr 19, 2012 - Noticed our posts weren't getting indexed like they used to. Suspected our writer was spinning articles but couldn't find any evidence. 90% of our blog posts were nowhere to be found in Google. Scraper sites would outrank us for our own stories... even our Twitter account was ranking ahead of us. If our story would show up in Google, it would usually be the home page instead of the blog post.
Sep 2012 - Finally got more serious about addressing the problem. Noticed a couple of potentially big problems and started making changes.

Canonical Issues
The non-www site didn't redirect to www. It showed 2 different link profiles according to OpenSiteExplorer and 0 backlinks according to Webmaster Tools. Wordpress shortlinks weren't redirecting to the actual permalink. For instance, http://www.domain.com/?p=123 and http://www.domain.com/post-example were both getting indexed. For every post there were 4 different versions that Google had to choose from: http://domain.com/?p=123, http://www.domain.com/?p=123, http://domain.com/post-example, and http://www.domain.com/post-example. I figured the canonical issues must have happened when we switched servers, which was the reason for the drop in Webmaster Tools indexed pages and increase in Not Selected pages. FIXED (Sep 15): Once we fixed the canonical issue, the Indexed Pages went back up; however, the Not Selected count is still the same.

Duplicate Content
When we first created our site we wanted to have tons of images for each musician/athlete/actor/etc., so we uploaded about 5-10 for each person. We created a blog post for each image with no writing and the exact same post titles. As a result there were TONS of low-quality, similar posts with virtually identical permalinks, e.g. http://www.domain.com/james-smith1, http://www.domain.com/james-smith2, http://www.domain.com/james-smith3, etc. A crawl on Sep 26 showed over 550 duplicate content warnings. FIXED (Oct 1): We deleted/301 redirected the useless pages (they weren't getting traffic anyway) and by the next crawl the number was almost at 0... which is where it's at now. We also had TONS of tags (since there're constantly new names in the media) that were getting indexed, so we had meta robots noindex them.

Questions:
Why aren't a majority of our posts getting indexed? Were we penalized or just stuck because of a filter?
How long should it take for meta robots to noindex the tag pages? (I did it on Sep 25 but they are still there.)
If a site is scraping our content (same title, image, excerpt) but linking to us, should we contact them and tell them to remove it?
Is there anything else we need to do to start getting our blog posts indexed like they used to be? Should we try contacting Google to re-evaluate our site?

Sorry, that was a LOT of writing. If anyone wants the URL, please let me know so I can PM it to you. Any help would be greatly appreciated!
Search Behavior | gfreeman230
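The four URL variants described in the canonical issue above (www/non-www crossed with shortlink/permalink) can be collapsed programmatically. A minimal sketch of the normalization, assuming a hypothetical shortlink map from WordPress post IDs to slugs; on a real site the 301 redirects themselves would do this, but the logic is the same:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical shortlink map: WordPress post ID -> permalink slug.
SHORTLINK_MAP = {"123": "post-example"}

def canonical_url(url: str, host: str = "www.domain.com") -> str:
    """Collapse www/non-www and ?p= shortlink variants of a post URL
    into a single canonical permalink."""
    parts = urlparse(url)
    qs = parse_qs(parts.query)
    if "p" in qs:  # WordPress shortlink: resolve the post ID to its slug
        path = "/" + SHORTLINK_MAP[qs["p"][0]]
    else:
        path = parts.path
    return f"http://{host}{path}"

variants = [
    "http://domain.com/?p=123",
    "http://www.domain.com/?p=123",
    "http://domain.com/post-example",
    "http://www.domain.com/post-example",
]
# All four variants reduce to one canonical URL.
print({canonical_url(u) for u in variants})
```

A check like this is handy when auditing a crawl export: any indexed URL whose canonical form differs from the URL itself is a candidate for a 301 or a rel=canonical tag.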