Strange recovery from Panda
-
I have two business sites. www.affordable-uncontested-divorce.com is a Homestead template site which is old and clunky but has given me steady traffic despite little maintenance. It was unaffected by the various Panda updates, and it does load very fast. www.uncontesteddivorce-nyc I put up about 18 months ago; it is a Thesis theme WordPress site with the usual bells and whistles. I put a lot of work into it, and around May its traffic finally surpassed my old site's.

In June, traffic to the new site started tanking, ultimately ending up about 30% off. A friendly SEO thought there was some duplication between the two sites and that Google might have seen the older site as the authority site and the newer one as the scraper. I tried the usual fixes and the decline finally bottomed out, but there was no recovery. I read someone who said that WordPress sites are problematic with Panda because of inherent duplicate-content issues unless you use them not as blogs but purely as a CMS. So I got rid of all the blog posts save one.

About three months ago my traffic started to go up again, and it has now once again surpassed the older site. The strange thing is that since the recovery, my Analytics numbers (bounce rate, number of page views, and time on site) have gone down and are much worse on the new site than they are on the old site. Does anyone have any idea what's up?
Thx
Paul
-
Actually my question (I should have made it clearer) is: why is my site ranking so much better over the past few months even though all the important analytics numbers have simultaneously been changing for the worse? It would appear that the various pundits were wrong about how Panda really works.
Paul
-
Hi Paul
Have you looked deeper into your analytics? How do the traffic sources for the two sites compare - are they different? Are you getting traffic on your new site from keywords that you aren't on your old site? If so, what is the bounce rate on those terms? My thought is that you might be ranking well for some keywords that don't provide visitors with the information they are looking for. Alternatively, your page could be so relevant for a search keyword that visitors don't need to dig any deeper into the new site. Time on page should give you a clue as to which of these it is: is the time on page too short for your average person to read the content, or not - how long does it take you to read? It might not be this, but I always find that looking deeper into a website's analytics tends to hold the key, especially when you are comparing it to another site that you own in the same niche, since you are in the fortunate position of having access to both sites' analytics. Hope this helps.
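The per-keyword comparison suggested above is easy to script against exported analytics rows. A minimal sketch, assuming each exported row is simply a (keyword, bounced) pair - the field layout is an assumption, not any particular analytics export format:

```python
from collections import defaultdict

def bounce_rate_by_keyword(visits):
    """Aggregate bounce rate per landing keyword.

    Each row in `visits` is a (keyword, bounced) pair, where bounced is
    True for a single-page session. Returns {keyword: rate} with rates
    between 0 and 1, so high-bounce keywords stand out.
    """
    sessions = defaultdict(int)
    bounces = defaultdict(int)
    for keyword, bounced in visits:
        sessions[keyword] += 1
        if bounced:
            bounces[keyword] += 1
    return {kw: bounces[kw] / sessions[kw] for kw in sessions}
```

Sorting the result by rate surfaces the keywords that may be attracting the wrong visitors.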
-
One thing I want to comment on with WordPress and Google is how WordPress makes blog post URLs: by default they include dates. Google is all about freshness, and this is one of the things I would change. Adjust how the URLs are structured in the Permalinks area of the admin, and make sure not to use dates in the URLs of any material you want to earn long-term traffic.
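If you do switch the permalink structure away from dates, the old dated URLs should 301 to the new postname-only ones so existing links keep working. A minimal sketch of that mapping, assuming the default date-based structure (/%year%/%monthnum%/%postname%/, with an optional day segment):

```python
import re

# Matches WordPress-style date-based permalinks, e.g. /2011/06/my-post/
DATED_PERMALINK = re.compile(r"^/\d{4}/\d{2}(?:/\d{2})?/(?P<slug>[^/]+)/?$")

def dateless_url(path):
    """Return the /%postname%/ form of a date-based permalink.

    Useful for building a 301 redirect map after changing the permalink
    structure; paths that don't match the dated pattern are returned
    unchanged.
    """
    m = DATED_PERMALINK.match(path)
    if not m:
        return path
    return "/%s/" % m.group("slug")
```

Feeding every old post path through this gives the old-to-new pairs for the redirect rules.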
Related Questions
-
Strange Cross Domain Canonical Issue...
We have 2 identical ecommerce sites. Using a 301 is not an option since both are major brands. We've been testing cross-domain canonicals for about two dozen products, which was pretty successful: our rankings generally increased. Then things got weird. For the most part, canonicaled pages appeared to have passed link juice, since rankings significantly improved on the other site. The clean URLs (www.domain.com/product-name/sku.cfm) disappeared from the rankings, as they are supposed to, but some were replaced by URLs with parameters that Google had indexed (apparently duplicate content), e.g. www.domain.com/product-name/sku.cfm?clicksource?3diaftv. The parametered URLs have the correct canonical tags. To try to remove these from Google's index, we: 1. Had the pages fetched in GWT, assuming that Google hadn't detected the canonical tag. 2. After we discovered a few hundred of these pages indexed on both sites, built sitemaps of the offending pages and had the sitemaps fetched. If anyone has any other ideas, please share.
Intermediate & Advanced SEO | | AMHC0 -
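One way to keep the parametered duplicates described in the question above from competing with the clean URLs is to compute the canonical target by stripping the known tracking parameters. A sketch of that logic - the parameter names here are assumptions, not the site's actual ones:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters assumed to be tracking/session noise (hypothetical list;
# adjust to whatever your platform actually appends).
STRIP_PARAMS = {"clicksource", "sessionid", "utm_source", "utm_medium"}

def canonical_url(url):
    """Drop known tracking parameters so the rel=canonical tag can point
    at the clean URL, e.g. .../sku.cfm?clicksource=x -> .../sku.cfm.
    Legitimate parameters (color, size, ...) are kept."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in STRIP_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))
```

The template would then emit `<link rel="canonical" href="...">` using this cleaned URL on every variant.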
Strange 404s in GWT - "Linked From" pages that never existed
I’m having an issue with Google Webmaster Tools saying there are 404 errors on my site. When I look into my “Not Found” errors I see URLs like this one: Real-Estate-1/Rentals-Wanted-228/Myrtle-Beach-202/subcatsubc/ When I click on that and go to the “Linked From” tab, GWT says the page is being linked from http://www.myrtlebeach.com/Real-Estate-1/Rentals-Wanted-228/Myrtle-Beach-202/subcatsubc/ The problem here is that page has never existed on myrtlebeach.com, making it impossible for anything to be “linked from” that page. Many more strange URLs like this one are also showing as 404 errors. All of these contain “subcatsubc” somewhere in the URL. My Question: If that page has never existed on myrtlebeach.com, how is it possible to be linking to itself and causing a 404?
Intermediate & Advanced SEO | | Fuel0 -
Penguin Recovery Problem - Weird
I had an old URL whose link profile wasn't good - I had been using article syndication, and Penguin threw me to the wolves. I decided to start over with a new URL and build a new, natural link profile. I specifically did NOT do a 301 redirect to the new URL and did not make any request to Google to transfer the domain, as I didn't want the old site being associated with the new one. To redirect our old users, I put a (nofollowed) link on the old URL's index page saying that we have moved. I was very surprised to find that in GWT all the links of the old URL have now been associated with the new URL. Why is that? I started over to have a clean, natural profile and follow Google's guidelines. Has anyone heard of this before? All I can guess is that Google itself "decided" to do its own pseudo-301, since the site was the same, page for page. This has major implications for anyone attempting a "clean start" to recover from Penguin.
Intermediate & Advanced SEO | | veezer0 -
Panda Recovery - What is the best way to shrink your index and make Google aware?
We have been hit significantly by Panda and assume the reason is our large index, with some pages holding thin/duplicate content. We have reduced our index size by 95% and have done significant content development on the remaining 5% of pages. For the old, removed pages, we have installed 410 responses (page no longer exists) and made sure that they are removed from the sitemap submitted to Google; however, after over a month we still see Google's spider returning to the same pages, and Webmaster Tools shows no indication that Google is shrinking our index. Are there more effective or automated ways to make Google aware of a smaller index size, in hopes of a Panda recovery? Potentially using the robots.txt file, the GWT URL removal tool, etc.? Thanks /sp80
Intermediate & Advanced SEO | | sp800 -
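The 410 approach in the question above hinges on the server answering consistently for every pruned path. A minimal sketch of that status-code decision, with hypothetical path sets - 410 ("Gone") signals a deliberate, permanent removal, which is a stronger hint to crawlers than a plain 404:

```python
def response_status(path, removed_paths, existing_paths):
    """Pick the HTTP status a pruned site should return for a path:
    410 for pages deliberately removed during the index cleanup,
    200 for pages that remain, 404 for anything else."""
    if path in removed_paths:
        return 410
    if path in existing_paths:
        return 200
    return 404
```

Running a crawl of the old sitemap through a check like this is a quick way to confirm no removed URL is still quietly returning 200.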
Strange situation - Started over with a new site. WMT showing the links that previously pointed to old site.
I have a client whose site was severely affected by Penguin. A former SEO company had built thousands of horrible anchor-texted links on bookmark pages, forums, cheap articles, etc. We decided to start over with a new site rather than try to recover this one. Here is what we did: -We noindexed the old site and blocked search engines via robots.txt. -Used the Google URL removal tool to tell it to remove the entire old site from the index. -Once the site was completely gone from the index, we launched the new site. The new site had the same content as the old other than the home page. We changed most of the info on the home page because it was duplicated in many directory listings. (It's a good site... the content is not overoptimized, but the links pointing to it were bad.) -Removed all of the pages from the old site and put up an index page saying essentially, "We've moved," with a nofollowed link to the new site. We've slowly been getting new, good links to the new site. According to Ahrefs and Majestic SEO we have a handful of new links; OSE has not picked up any as of yet. But if we go into WMT, there are thousands of links pointing to the new site. WMT has picked up the new links, and it looks like it has all of the old ones that used to point at the old site, despite the fact that there is no redirect. There are no redirects from any pages of the old site to the new at all. The new site has a similar name: if the old one was examplekeyword.com, the new one is examplekeywordcity.com. There are redirects to it from the other TLDs of the same name (i.e. examplekeywordcity.org, examplekeywordcity.info, etc.), but no other redirects exist. The chance that a site previously existed on any of these TLDs is almost none, as it is a unique brand name. Can anyone tell me why Google is seeing the links that previously pointed to the old site as now pointing to the new? ADDED: Before I hit the send button I found something interesting.
In this article from Dejan SEO, where someone stole Rand Fishkin's content and ranked for it, they have the following line: "When there are two identical documents on the web, Google will pick the one with higher PageRank and use it in results. It will also forward any links from any perceived 'duplicate' towards the selected 'main' document." This may be what is happening here. And just to complicate things further, it looks like when I set up the new site in GA, the site owner took the GA tracking code and put it on the old page (the noindexed one that is set up with a nofollowed link to the new one). I can't see how this could affect things, but we're removing it. Confused yet? I'd love to hear your thoughts.
Intermediate & Advanced SEO | | MarieHaynes0 -
How to compete with duplicate content in a post-Panda world?
I want to fix duplicate content issues on my eCommerce website. I have read a very valuable blog post on SEOmoz regarding duplicate content in the post-Panda world and applied every strategy to my website. To give one example: http://www.vistastores.com/outdoor-umbrellas Non-WWW version: http://vistastores.com/outdoor-umbrellas redirects to the home page. For HTTPS pages: https://www.vistastores.com/outdoor-umbrellas I have created a robots.txt file for all HTTPS pages as follows: https://www.vistastores.com/robots.txt and set rel=canonical to the HTTP page: http://www.vistastores.com/outdoor-umbrellas Narrow-by-search: my website has narrow-by-search and contains pages with the same meta info, such as: http://www.vistastores.com/outdoor-umbrellas?cat=7 http://www.vistastores.com/outdoor-umbrellas?manufacturer=Bond+MFG http://www.vistastores.com/outdoor-umbrellas?finish_search=Aluminum I have blocked all dynamic pages generated by narrow-by-search via robots.txt (http://www.vistastores.com/robots.txt) and set rel=canonical to the base URL on each dynamic page. Order-by pages: http://www.vistastores.com/outdoor-umbrellas?dir=asc&order=name I have blocked all such pages with robots.txt and set rel=canonical to the base URL. For pagination pages: http://www.vistastores.com/outdoor-umbrellas?dir=asc&order=name&p=2 I have blocked all such pages with robots.txt, set rel=next and rel=prev on all paginated pages, and also set rel=canonical to the base URL. I have applied all these SEO suggestions to my website, but Google is crawling and indexing 21K+ pages while my website has only 9K product pages. Google search result: https://www.google.com/search?num=100&hl=en&safe=off&pws=0&gl=US&q=site:www.vistastores.com&biw=1366&bih=520 Over the last 7 days, my website's impressions and CTR have dropped 75%. I want to recover and perform as well as before.
I have explained my question at length because I want to recover my traffic as soon as possible.
Intermediate & Advanced SEO | | CommercePundit0 -
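The tag set the question above describes for paginated listings (rel=canonical to the base URL plus rel=prev/next between pages) can be sketched as a small template helper. This is a sketch of that setup as described, not a recommendation of it; the ?p= parameter name is taken from the example URLs:

```python
def pagination_head_tags(base_url, page, last_page):
    """Build the <link> head tags for one page of a paginated listing:
    rel=canonical pointing at the clean category URL, plus rel=prev and
    rel=next pointing at the neighbouring pages."""
    def page_url(p):
        # Page 1 is the clean base URL; later pages carry the ?p= parameter.
        return base_url if p == 1 else "%s?p=%d" % (base_url, p)
    tags = ['<link rel="canonical" href="%s" />' % base_url]
    if page > 1:
        tags.append('<link rel="prev" href="%s" />' % page_url(page - 1))
    if page < last_page:
        tags.append('<link rel="next" href="%s" />' % page_url(page + 1))
    return tags
```

Note that blocking these URLs in robots.txt at the same time, as the question describes, prevents Google from ever seeing these tags, which may explain why the parametered pages stay indexed.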
What's the best practice for adding a blog to your site post-Panda? Subdomain or subdirectory?
Should I use a subdomain or a subdirectory? I was going to use a subdirectory; however, I have been reading a lot of articles on the use of subdomains post-Panda and the advantages of using them instead of subdirectories. Thanks, Ari
Intermediate & Advanced SEO | | dublinbet0 -
Panda Prevention Plan (PPP)
Hi SEOmozers, I'm planning to prepare for Panda's rollout by creating a checklist of things to do in SEO to prevent a mass traffic loss. I would like to share these ideas with the SEOmoz community and staff in order to build a help resource for other marketers. Here are some ideas for content websites: the main one is to block duplicate content (robots.txt, the noindex tag, or canonical tags, according to the case); the same goes for very low-quality content (questions/answers, forums), by inserting a canonical or noindex on threads with few answers.
Intermediate & Advanced SEO | | Palbertus1
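The thin-thread rule in the checklist above (noindex on threads with few answers) can be sketched as a template helper. The threshold value is an assumption for illustration:

```python
def thread_robots_meta(answer_count, min_answers=2):
    """Decide the robots meta tag for a forum/Q&A thread: threads with
    too few answers get noindex (but stay followable) until they
    accumulate enough content to be worth indexing. The min_answers
    threshold of 2 is a hypothetical default."""
    if answer_count < min_answers:
        return '<meta name="robots" content="noindex, follow" />'
    return '<meta name="robots" content="index, follow" />'
```

Because the decision is recomputed on every render, a thread automatically flips to indexable once enough answers arrive.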