Anybody else seeing Penguin corrections?
-
Hi,
Over the past few days, I have noticed a few of my pages that were hit by the Google Penguin update come back from the dead and return to the #1 spot for their main keywords. I still don't see any change for the secondary keywords I used to rank for, but hey, at least there is something. Has anybody else noticed this?
NOTE: I did not make any changes to my pages. I have never done anything black-hat (just greyish), so I took the advice of many and just waited.
-
No, I have not seen this happen to any pages as of yet, but I will keep an eye out. Congratulations on your semi-recovery; I am sure there are many out there who would like to be in your shoes right now!
Cheers
Related Questions
-
Is Google able to see child pages in our AJAX pagination?
We upgraded our site to a new platform the first week of August. The product listing pages have a canonical issue: page 2 of the paginated series has a canonical pointing to page 1 of the series. Google lists this as a "mistake," and we're planning on implementing best practice (https://webmasters.googleblog.com/2013/04/5-common-mistakes-with-relcanonical.html). We want to implement rel=next,prev.

The URLs are constructed using a hashtag and a string of query parameters. You'll notice that these parameters are &parameter:value vs. &parameter=value:

/products#facet:&productBeginIndex:0&orderBy:&pageView:grid&minPrice:&maxPrice:&pageSize:&

None of these URLs are included in the indexed URLs because the canonical is the page URL without the AJAX parameters, so those results are expected. Screaming Frog only finds the product links on page 1 and doesn't move to page 2; the link to page 2 is AJAX, and as far as I know Screaming Frog only crawls AJAX if it follows Google's (now deprecated) recommendations.

The "facet" parameter is noted in Search Console, but the example URLs are for an unrelated URL that uses the "?facet=" format. None of the other parameters have been added by Google to the console, although other, unrelated parameters from the new site are there. When using the Fetch as Google tool, Google ignores everything after the "#" and shows only the main URL. I tested to see if it was just pulling the canonical of the page for the test, but that was not the case. None of the "#facet" strings appear in the Moz crawl.

I don't think Google is reading "productBeginIndex" to identify the start of page 2 and so on. One thought is to add the parameter in Search Console, remove the canonical, and test one category to see how Google treats the pages. Making the URLs SEO-friendly (/page2.../page3) is a heavy lift. Any ideas how to diagnose/solve this issue?
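A side note while you diagnose: because those parameters live in the URL fragment, they are visible client-side but never sent to the server, which is consistent with Fetch as Google showing only the main URL. Here is a small sketch that parses the colon-delimited fragment format you described, just to make its structure explicit (`parse_hash_params` is a hypothetical helper for illustration, not part of your platform):

```python
from urllib.parse import urlparse

def parse_hash_params(url):
    """Parse an AJAX fragment whose parameters use the
    '&name:value' convention (not the usual '&name=value')."""
    fragment = urlparse(url).fragment  # everything after '#'
    params = {}
    for part in fragment.split("&"):
        if ":" in part:
            name, _, value = part.partition(":")
            if name:
                params[name] = value
    return params

url = "/products#facet:&productBeginIndex:0&orderBy:&pageView:grid&minPrice:&maxPrice:&pageSize:&"
print(parse_hash_params(url))
# 'productBeginIndex' is the only pagination signal here, and it
# only exists client-side -- a crawler fetching this URL requests
# just /products.
```

This is why a plain HTTP crawler (Googlebot without snapshot handling, Screaming Frog, Moz) sees every paginated view as the same `/products` document.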
Intermediate & Advanced SEO | Jason.Capshaw
-
Is this the correct way of using rel canonical, next and prev for paginated content?
Hello Moz fellows, a while ago (3-4 years ago) we set up our e-commerce website category pages to apply what Google suggested for correctly handling pagination. We added rel="canonical", rel="next" and rel="prev" as follows: On page 1: On page 2: On page 3: And so on, until the last page is reached. Do you think everything we have been doing is correct? I have doubts about the way we have handled the canonical tag, so any help to confirm that is very appreciated! Thank you in advance to everyone.
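For what it's worth, the pattern in Google's pagination guidance of that era was: each page in the series canonicals to itself (or to a view-all page), never to page 1, with rel="prev"/rel="next" pointing at its neighbors. A sketch that generates those tags for placeholder URLs (`pagination_link_tags` is an illustrative helper, not your platform's API):

```python
def pagination_link_tags(base_url, page, last_page):
    """Build the <link> tags for one page of a paginated series:
    a self-referencing canonical plus rel="prev"/rel="next".
    Page 1 gets no "prev"; the last page gets no "next"."""
    def page_url(n):
        return base_url if n == 1 else f"{base_url}?page={n}"

    tags = [f'<link rel="canonical" href="{page_url(page)}" />']
    if page > 1:
        tags.append(f'<link rel="prev" href="{page_url(page - 1)}" />')
    if page < last_page:
        tags.append(f'<link rel="next" href="{page_url(page + 1)}" />')
    return tags

for tag in pagination_link_tags("https://www.example.com/category", 2, 5):
    print(tag)
```

The key point for your doubt about the canonical: on page 2 it should reference page 2 itself, not page 1 (canonicalizing every page to page 1 is the "mistake" Google's blog post warned about).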
Intermediate & Advanced SEO | fablau
-
Starting over after a Penguin Penalty
Hi, has anyone tried starting a new domain after being hit with a Penguin penalty? I'm considering the approach outlined here: https://searchenginewatch.com/sew/how-to/2384644/can-you-safely-redirect-users-from-a-penguin-hit-site-to-a-new-domain. In a nutshell: de-index the OLD site completely via Google's Removal Tool, then relaunch the old content under a new domain. This seems to have merit, unless Google keeps a hidden cache of the content (or uses other sources like the Wayback Machine). My concern with this approach is that Google might still pass the old links to the new domain. We have great content, but too much spam (despite my removing a lot of the links + a disavow). Any feedback based on experience would be appreciated. Thanks.
Intermediate & Advanced SEO | mrodriguez1440
-
Is Google seeing "all" my homepage?
Hello All 🙂 Since launching my new website design - www.advanced-driving.co.uk - I am not convinced Google is seeing all the content on the page. I took a long extract of text and searched for it on Google, and nothing was found. Also, although in the search results for "advanced driving course" I can see the new title tag, the snippet isn't showing. Is there any way I can check this? As I scroll down, I can see the URL changes, i.e.:
www.advanced-driving.co.uk
then:
http://www.advanced-driving.co.uk/#da-page_in_widget-3
then:
http://www.advanced-driving.co.uk/#da-page_in_widget-4
then:
http://www.advanced-driving.co.uk/#da-page_in_widget-5
Is this right? Thanks in advance.
Intermediate & Advanced SEO | robert78
-
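One thing worth knowing while you investigate: everything after the "#" is a fragment, which the browser keeps to itself and never sends to the server. That means all of those widget URLs are the same document from a crawler's point of view, so any content that only appears after client-side scrolling/JS may be invisible to Google. A quick illustration with Python's standard library:

```python
from urllib.parse import urldefrag

urls = [
    "http://www.advanced-driving.co.uk/#da-page_in_widget-3",
    "http://www.advanced-driving.co.uk/#da-page_in_widget-4",
    "http://www.advanced-driving.co.uk/#da-page_in_widget-5",
]
# Strip the fragment from each URL: the part a crawler actually
# requests is identical for all three.
pages = {urldefrag(u).url for u in urls}
print(pages)  # a single URL
```

A simple manual check along the same lines: view the page source (not the rendered DOM) and search for a sentence from a lower widget; if it's not in the raw HTML, a plain fetch of the page won't see it either.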
Google isn't seeing the content but it is still indexing the webpage
When I fetch my website page using GWT, this is what I receive:

HTTP/1.1 301 Moved Permanently
X-Pantheon-Styx-Hostname: styx1560bba9.chios.panth.io
server: nginx
content-type: text/html
location: https://www.inscopix.com/
x-pantheon-endpoint: 4ac0249e-9a7a-4fd6-81fc-a7170812c4d6
Cache-Control: public, max-age=86400
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 14 Mar 2014 16:29:38 GMT
X-Varnish: 2640682369 2640432361
Age: 326
Via: 1.1 varnish
Connection: keep-alive

What I used to get is this:

HTTP/1.1 200 OK
Date: Thu, 11 Apr 2013 16:00:24 GMT
Server: Apache/2.2.23 (Amazon)
X-Powered-By: PHP/5.3.18
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 11 Apr 2013 16:00:24 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1365696024"
Content-Language: en
Link: ; rel="canonical",; rel="shortlink"
X-Generator: Drupal 7 (http://drupal.org)
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:dc="http://purl.org/dc/terms/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:og="http://ogp.me/ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:sioc="http://rdfs.org/sioc/ns#"
xmlns:sioct="http://rdfs.org/sioc/types#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"> <title>Inscopix | In vivo rodent brain imaging</title>
Intermediate & Advanced SEO | jacobfy
-
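The difference between the two fetches is that the URL now answers with a 301 and an empty body (Content-Length: 0), redirecting to https://www.inscopix.com/ - so there is no content at that URL for Google to "see", only a redirect to follow. If you want to check this outside GWT, fetch the page with any HTTP client that does not follow redirects and inspect the status line and Location header. A small sketch that parses a raw header block like the ones pasted above (the parsing is generic; the sample headers are abbreviated from the 301 response):

```python
def parse_status_and_location(raw_headers):
    """Pull the status code and Location header out of a raw
    HTTP response header block."""
    lines = raw_headers.strip().splitlines()
    # Status line looks like: "HTTP/1.1 301 Moved Permanently"
    status_code = int(lines[0].split()[1])
    location = None
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":
            location = value.strip()
    return status_code, location

raw = """HTTP/1.1 301 Moved Permanently
server: nginx
content-type: text/html
location: https://www.inscopix.com/
Content-Length: 0"""
print(parse_status_and_location(raw))  # (301, 'https://www.inscopix.com/')
```

If the fetched URL and the Location target differ only by scheme or www, this is most likely an intentional (and harmless) site-wide redirect; the content Google indexes lives at the destination URL.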
How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
We're using the Google snapshot method to index dynamic Ajax content. Some of this content comes from tables using pagination. The pagination is tracked with a var in the hash, something like: #!home/?view_3_page=1 We're seeing all sorts of calls from Google now with huge numbers for these URL variables that we are not generating with our snapshots, like this: #!home/?view_3_page=10099089 These aren't trivial, since each snapshot represents a server load, so we'd like these vars to only represent what's returned by the snapshots. Is Google generating random numbers and going fishing for content? If so, is this something we can control or minimize?
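Whatever the source of those values, one way to keep them from costing you a snapshot render is to validate the pagination var server-side before generating anything, and answer out-of-range requests with a 404 so they drop out of the crawl. A sketch, assuming you know the table's real page count (`resolve_snapshot_page` is a hypothetical helper name, not part of the snapshot spec):

```python
def resolve_snapshot_page(raw_value, total_pages):
    """Validate the view_3_page variable from an escaped-fragment
    request so junk values (e.g. 10099089) can't trigger a
    snapshot render for a page that doesn't exist."""
    try:
        page = int(raw_value)
    except (TypeError, ValueError):
        return 1          # unparseable -> serve page 1
    if page < 1:
        return 1
    if page > total_pages:
        return None       # caller should respond 404, not render
    return page

print(resolve_snapshot_page("10099089", total_pages=12))  # None -> 404
print(resolve_snapshot_page("3", total_pages=12))         # 3
```

Consistent 404s for nonexistent pages tend to teach crawlers to stop probing those values, and they cost you a trivial response instead of a headless render.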
Intermediate & Advanced SEO | sitestrux
-
Crawling error or something else that makes my page invisible (simple problem, not solved yet)
Hi, my problem isn't solved and nobody has been able to answer my question: why isn't my page poltronafraubrescia.zenucchi.it indexed for the keyword "poltrona frau Brescia"? The same page on another domain was fourth in the ranking results... and now it redirects to the new one... Can you explain how to proceed? I trust you... Help me...
Intermediate & Advanced SEO | guidoboem
-
Anybody know good SEO success stories in the field of small business directories?
We are helping a small business directory with their SEO. They address 20 service categories (300 subcategories) with 60,000 profiles. We are focusing on the following elements:
1. Cutting the flab (they have 3.4 million pages indexed, but only 30,000 visitors to the website). This will be done by removing the long lists of cities and by using "nofollow".
2. Improving internal navigation and using anchor texts
3. Focusing SEO (backlinks) on business category pages
4. Cleaning URLs and titles
5. Implementing rich snippets (Schema.org)
6. Cleaning data
If we cannot take traffic volume to 300,000 in a month, this project will be considered a failure. Has any directory achieved this recently? We are in the first 2 weeks of the project, and it will help us with our "to do" list 😉
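On point 5: for directory profile pages, a JSON-LD LocalBusiness block is the usual starting point for rich snippets. A minimal sketch with placeholder business details (built with Python here just to guarantee the JSON is valid; the schema.org property names are real, the data is invented):

```python
import json

# Placeholder profile data for one directory listing.
profile = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Springfield",
    },
    "telephone": "+1-555-0100",
}
# Embed as a JSON-LD script tag in the profile page's HTML.
jsonld = '<script type="application/ld+json">%s</script>' % json.dumps(profile)
print(jsonld)
```

Generating these from the cleaned profile data (point 6) means both efforts pay off together; validate the output with Google's structured data testing tools before rolling it out across 60,000 pages.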
Intermediate & Advanced SEO | UnyscapeInfocom