Captcha wall to access content and cloaking sanctions
-
Hello, to protect our website against scraping, visitors are redirected to a recaptcha page after 2 pages visited.
But for SEO purposes, Googlebot is not included in that restriction, so it could be seen as cloaking.
What is the best practice in SEO to avoid a penalty for cloaking in that case?
I'm thinking about adding the paywall JSON schema for NewsArticle, but the content is accessible for free, so it's not a paywall, more of a captcha protection wall. What do you recommend?
Thanks,
-
In general, Google cares only about cloaking in the sense of treating their crawler differently from human visitors - it's not a problem to treat them differently from other crawlers.
So: if you are tracking the "2 pages visited" using cookies (which I assume you must be - there is no other reliable way to know the 2nd request is from the same user without cookies), then you can treat Googlebot exactly the same as human users. Googlebot doesn't send cookies, so every one of its requests is stateless and looks like a first visit, and it will be able to crawl. You can then treat non-Googlebot scrapers more strictly, and rate limit / throttle / deny them as you wish.
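To make that concrete, here's a rough sketch of the cookie-metering idea (Python / Flask purely for illustration - I don't know your stack, and the route and `pages_seen` cookie name are invented for the example):

```python
# Hypothetical sketch of cookie-based metering (Flask chosen only for
# illustration). Cookieless clients like Googlebot never accumulate a
# page count, so they never hit the captcha redirect and no
# Googlebot-specific branch is needed.
from urllib.parse import quote

from flask import Flask, make_response, redirect, request

app = Flask(__name__)
FREE_PAGES = 2  # pages a visitor may view before the captcha wall

@app.route("/decision/<slug>")
def decision_page(slug):
    pages_seen = int(request.cookies.get("pages_seen", "0"))
    if pages_seen >= FREE_PAGES:
        # Human visitor over the limit: send them to the captcha page.
        return redirect("/captcha?next=" + quote(request.path))
    resp = make_response(render_decision(slug))
    resp.set_cookie("pages_seen", str(pages_seen + 1))
    return resp

def render_decision(slug):
    # Placeholder for the real page rendering.
    return f"<h1>Decision {slug}</h1>"
```

Because Googlebot doesn't carry cookies between requests, every one of its hits takes the "no cookie yet" branch - which is exactly the stateless behaviour described above.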
I think that if real human users get at least one "free" visit, then you are probably OK - but you may want to consider not showing the recaptcha to real human users coming from Google (though you could find yourself in an arms race with scrapers pretending to be human visitors from Google).
In general, I would expect that if it's a recaptcha ("prove you are human") step rather than a paywall / registration wall, you will likely be OK in the situation where:
- Googlebot is never shown the recaptcha
- Other scrapers are aggressively blocked
- Human visitors get at least one page without a recaptcha wall
- Human visitors can visit more pages after completing a recaptcha (but without paying / registering)
Hope that all helps. Good luck!
-
Well, I'm not saying that there's no risk in what you are doing, just that I perceive it to be less risky than the alternatives. I think a fundamental change like paywalling could be moderately to highly likely to have a high impact on results (maybe a 65% likelihood of a 50% impact). Being incorrectly accused of cloaking would be a much lower chance (IMO) but with potentially higher impact (maybe a 5% or less chance of an 85% impact). Weighing these two things up, I subjectively conclude that I'd rather make the cloaking less 'cloaky' in any way I could, and leave everything outside of a paywall. That's how I'd personally weigh it up.
Personally I'd treat Google as a paid user. If you DID have a 'full' paywall, this would be really sketchy, but since it's only partial, and the data can indeed still be accessed for FREE via recaptcha entry, that's the option I'd go for.
Again, I'm not saying there is no risk, just that each set of dice you have at your disposal is ... not great? And this is the set of dice I'd personally choose to roll with.
The only thing to keep in mind is that the algorithms Googlebot returns data to are pretty smart. But they're not human-smart, and a quirk in an algo could cause a big problem. Really though, the chances of that (if all you have said is accurate) are IMO minimal. It's the lesser of two evils from my current perspective.
-
Yes, our DA is good and we have lots of gouv, edu and media backlinks.
Paid users do not go through the recaptcha, so indeed treating Google as a paid user could be a good solution.
So you would not recommend using a paywall?
Today the recaptcha is only used on decision pages.
But we need those pages to be indexed for our business, because all of our paid users find us while searching for a justice decision on Google. So we have 2 solutions:
- Change nothing and treat Google as a paid user
- Use a hard paywall and inform Google with the JSON schema markup (sketched just below), but we risk seeing lots of pages deindexed
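For reference, option 2 would mean Google's documented paywalled-content markup: `isAccessibleForFree` plus a `hasPart` element pointing at the gated section. A minimal sketch (the headline and CSS selector are just placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example justice decision page",
  "isAccessibleForFree": false,
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": false,
    "cssSelector": ".paywalled-decision-text"
  }
}
```

But as discussed, this markup is meant for content that genuinely sits behind a payment or registration wall, which is exactly my hesitation.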
In addition, we could go from 2 pages visited before the captcha to something less intrusive, like 6 pages before the captcha.
Also, on the captcha page there is a form to start a free trial, so visitors can either check the captcha and keep navigating, or create a free account and get unlimited access for 7 days. To conclude, if I understand your opinion correctly, we don't have to stress about being penalized for cloaking, because Googlebot is smart and understands why we use the captcha, and our DA helps make us trustworthy to Googlebot. So I think the best solution is number 1: change nothing and treat Google as a paid user.
Thanks a lot for your time and your help!
It's a complicated subject and it's hard to find people able to answer my question, but you did it!
-
Well, if you have a partnership with the Court of Justice, I'd assume your trust and authority metrics would be pretty high, with them linking to you on occasion. If that is true, then I think in this instance Google would give you the benefit of the doubt, as you're not just some random tech start-up (maybe a start-up, but one which matters and is trusted).
It makes sense that in your scenario your data protection has to be iron-clad. Do paid users have to go through the recaptcha? If they don't, would there be a way to treat Google as a paid user rather than a free user?
Yeah, putting down a hard paywall could have significant consequences for you. Some huge publishers manage to still get indexed (paywalled news sites), but not many, and their performance deteriorates over time IMO.
Here's a question for you. So you have some pages you really want indexed, and you have a load of data you don't want scraped or taken / stolen - right? Is it possible to ONLY apply the recaptcha to the pages which contain the data that you don't want stolen, and never trigger the recaptcha (at all) in other areas? Just trying to think whether there is some wiggle room in the middle, to make it obvious to Google that you are doing all you possibly can to keep Google's view and the user's view the same.
-
Hi effectdigital, thanks a lot for that answer. I agree with you that a captcha is not the best UX idea, but our content is sensitive: we are a legal tech indexing French justice decisions. We have a unique partnership with the Court of Justice because we have a unique technology to anonymize data in justice decisions, so we don't want our competitors to scrape our data (and trust me, they try, every day..). This is why we use the recaptcha protection. For Googlebot we use Google reverse DNS and the user agent, so even a great scraper can't bypass our security.
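A simplified sketch of that kind of check (not our exact production code). This double lookup is Google's documented way to verify Googlebot: reverse-resolve the IP, check the hostname belongs to googlebot.com / google.com, then forward-resolve to confirm it points back to the same IP:

```python
# Sketch of a Googlebot check using reverse DNS plus the user agent.
import socket

def is_verified_googlebot(ip: str, user_agent: str) -> bool:
    if "Googlebot" not in user_agent:
        return False
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse DNS lookup
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the IP.
        return ip in socket.gethostbyname_ex(host)[2]
    except (socket.herror, socket.gaierror):
        return False
```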
Then we have a paid option: people can create an account and pay a monthly subscription to access content without limits. This is why I'm thinking about a paywall. We could replace the captcha page with a paywall page (with a free trial of course), but I'm not sure Google will index millions of pages hiding behind a metered paywall.
As you said, I think there is no good answer..
And again, thanks a lot for taking the time to answer my question!
-
Unless you have previously experienced heavy scraping which you cannot solve any other way, this seems a little excessive. Most websites don't have such strong anti-spam measures and they cope just fine without them
I would say that it would be better to embed the recaptcha on the page and just block users from proceeding further (or accessing the content) until the recaptcha is filled in. Unfortunately this would be a bad solution, as scrapers would still be able to scrape the page, so I guess redirecting to the captcha is your only option. Remember that if you are letting Googlebot through (probably with a user-agent toggle), then as long as scraper-builders program their scripts to send the Googlebot UA, they can slip past your recaptcha redirects entirely. Even users can alter their browser's UA to avoid the redirects.
There are a number of situations where Google doesn't consider bypassing a redirect to be cloaking. One big one is regional redirects, as Google needs to crawl a whole multilingual site instead of being redirected. I would think that in this situation Google wouldn't take too much issue with what you are doing, but you can never be certain (algorithms work in weird and wonderful ways).
I don't think any schema can really help you. Google will want to know when you are using technology that could annoy users, so they can lower your UX score(s) accordingly; but unfortunately letting them see this will stop your site being properly crawled, so I don't know what the right answer is. Surely there must be some less nuclear, obstructive technology you could integrate instead? Or just keep on top of your block lists (IP ranges, user agents) and monitor your site (don't make users suffer).
If you are already letting Googlebot through your redirects, why not just have a user-agent based allow list instead of a block list, which is harder to manage? Find the UAs of the most common mobile / desktop browsers (Chrome, Safari, Firefox, Edge, Opera, whatever) and allow those UAs plus Googlebot. Anyone who does get through and scrapes, deal with them on a case-by-case basis.
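A rough sketch of that allow-list idea (the token list is illustrative, not exhaustive; and remember UA strings are trivially spoofed, so this only filters lazy scrapers and should sit alongside the reverse-DNS check for Googlebot):

```python
# Hypothetical user-agent allow list, as suggested above. "Edg" and
# "OPR" are the tokens Edge and Opera actually send. UA strings are
# easy to fake, so treat this as a first filter only.
ALLOWED_UA_TOKENS = ("Chrome", "Safari", "Firefox", "Edg", "OPR", "Googlebot")

def is_allowed_user_agent(user_agent: str) -> bool:
    return any(token in user_agent for token in ALLOWED_UA_TOKENS)
```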