Is Google able to determine duplicate content every day/ month?
-
A while ago I talked to somebody who worked in MSN's engineering department a couple of years ago. We talked about a recent dip we had with one of our sites. We argued this could be caused by the large amount of duplicate content we have on this particular website (over 80% of the site).
Then he said, and I quote: "Google only seems to be able to determine every couple of months, rather than every day, whether content is actually duplicate content." I don't doubt that duplicate content is a ranking factor, but I'd like to hear your opinions on whether Google really can only determine this every couple of months instead of every day.
Have you seen or heard something similar?
-
Sorting out Google's timelines is tricky these days, because they aren't the same for every process and every site. In the early days, the "Google dance" happened about once a month, and that was the whole mess (index, algo updates, etc.). Over time, index updates have gotten a lot faster, and ranking and indexation are more real-time (especially since the "Caffeine" update), but that varies wildly across sites and pages.
I think you also have to separate two different impacts of duplicate content. When it comes to filtering (Google excluding a piece of duplicate content from rankings, but not necessarily penalizing the site), I don't see any evidence that this takes a couple of months. It can take Google days or weeks to re-cache any given page, and to detect a duplicate it would have to re-cache both copies, so that may realistically take a month in some cases. I strongly suspect, though, that the filter itself happens in real time. There's no good way to store a filter for every scenario, and some filters are query-specific. Computationally, some filters almost have to happen on the fly.
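Google's actual filtering pipeline isn't public, but the kind of on-the-fly duplicate check described above can be sketched with a standard generic technique: w-shingling plus Jaccard similarity. Nothing below is Google's real code; it's just an illustration of how cheap a pairwise near-duplicate comparison can be once both copies are cached.

```python
# Illustrative sketch: near-duplicate detection via w-shingling.
# This is a textbook technique, not Google's actual algorithm.

def shingles(text: str, w: int = 3) -> set:
    """Return the set of w-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A & B| / |A | B|, in [0, 1]."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox leaps over the lazy dog"
sim = jaccard(shingles(doc1), shingles(doc2))
print(f"{sim:.2f}")  # a score near 1.0 would flag the pair as near-duplicates
```

In practice a search engine would compare compact fingerprints of the shingle sets (e.g. MinHash) rather than the raw sets, but the point stands: once both pages are in the index, the comparison itself is fast enough to run at query time.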
On the other hand, you have updates like Panda, where duplicate content can cause something close to a penalty. Panda data was originally updated outside of the main algorithm, to the best of our knowledge, and probably about once a month. Over the year-plus since Panda 1.0 rolled out, though, it seems that this timeline has accelerated. I don't think it's real-time, but it may be closer to two weeks (that's speculation, I admit).
So, the short answer is "It's complicated." I don't have any evidence to suggest that filtering duplicates takes Google months (and, actually, I have anecdotal evidence that it can happen much faster). It is possible that it could take weeks or months to see the impact of duplicates on some sites and in some situations, though.
-
Hi Donnie,
Thanks for your reply, but I was already aware that Google had/has a sandbox; I should have mentioned that in my question. I'm looking more for an answer about how Google determines whether pages are duplicates.
I've seen dozens of cases where our content was indexed, both where we linked back to the 'original' source and where we didn't.
Also, to be clear: in all of these cases the duplicate content was published in agreement with the original sources.
-
In the past, Google had a sandbox period before any page (content) would rank. However, now everything is instant. (I just learned this today @seomoz.)
If you release something, Google will index it as fast as possible. If that info gets duplicated, Google will only count the first copy indexed. Everyone else loses brownie points unless they link back to the main article (the first one indexed).
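The "first one indexed wins" idea above can be sketched as a simple dedupe: key each page by a hash of its normalized body and keep only the copy with the earliest index date. The field names and normalization here are my assumptions for illustration, not Google's actual logic.

```python
# Hedged sketch of first-indexed-wins canonical selection.
# Field names and the normalization step are illustrative assumptions.
import hashlib

pages = [
    {"url": "http://a.example/post",  "indexed": "2012-03-01", "body": "Some Article Text"},
    {"url": "http://b.example/copy",  "indexed": "2012-03-05", "body": "some article text"},
    {"url": "http://c.example/other", "indexed": "2012-03-02", "body": "Different text"},
]

def content_key(body: str) -> str:
    # Normalize case and whitespace so trivial edits still collide.
    return hashlib.sha1(" ".join(body.lower().split()).encode()).hexdigest()

# Walk pages in index-date order; the first page seen for each
# content key becomes the canonical copy, later duplicates are dropped.
canonical = {}
for page in sorted(pages, key=lambda p: p["indexed"]):
    canonical.setdefault(content_key(page["body"]), page)

for page in canonical.values():
    print(page["url"])  # only the earliest-indexed copy of each body survives
```

A real system would use near-duplicate matching (not an exact hash) and weigh many more signals than index date, but this captures the mechanism the answer describes.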