Steps you can take to ensure your content is indexed and registered to your site before a scraper gets to it?
-
Hi,
A client's site has a significant amount of original content that has blatantly been copied and pasted onto various competitor and article sites.
I'm working with the client to rejig a lot of this content and to publish new content.
What steps would you recommend when the new, updated site launches to ensure Google clearly attributes the content to the client's site first?
One thing I will be doing is submitting new XML and HTML sitemaps.
Thank you
-
There are no established best practices for the tag's usage at this point. On the one hand, it could technically be used on every page; on the other, it arguably should only be used when the page is an article, blog post, or some other individual's writing.
-
Thanks Alan.
Guess there's no magic trick that will give you 100% attribution.
Regarding this tag, do you recommend I add it to EVERY page of the client's website, including the homepage? Even the usual About Us/Contact pages?
Cheers
Hash
-
Google continually tries to find new ways to encourage solutions for helping them understand intent, relevance, ownership and authority. It's why Schema.org finally hit this year. None of their previous attempts have been good enough, and each has served a specific individual purpose.
So with Schema, the theory is there's a new, unified framework that can grow and evolve, without having to come up with individual solutions.
The "original source" concept was supposed to address the scraper issue, and there's been some value in that, though it's far from perfect: a good scraper script can find it, strip it out, or replace its contents.
rel="author" is yet one more thing that can be used in the overall mix, though Schema.org takes authorship and publisher identity to a whole new, complex, and so far confused level :-).
Since Schema.org is most likely not going to be widely adopted until at least early next year, Google's encouraging use of the rel="author" tag as the primary method for assigning authorship at this point, and will continue to support it even as Schema rolls out.
So if you're looking at a best practices solution, yes, rel="author" is advisable. Until it's not.
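For anyone unfamiliar with the markup, a minimal sketch of how rel="author" is typically implemented; the profile URL and author name below are hypothetical placeholders:

```html
<!-- In the <head>, linking the page to the author's profile page
     (placeholder URL): -->
<link rel="author" href="https://plus.google.com/XXXXXXXXXXXXXXXXXXXXX/" />

<!-- Or inline in the article byline (hypothetical author): -->
<p>By <a rel="author" href="https://plus.google.com/XXXXXXXXXXXXXXXXXXXXX/">Jane Author</a></p>
```

Either form works; the inline byline version has the advantage of being visible to readers as well as crawlers.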
-
Thanks Alan... I am surprised to learn about this "original source" information. There must not have been much talk about it when it was released, or I would have seen it.
Google recently started encouraging people to use the rel="author" attribute. I am going to use that on my site... now I am wondering if I should be using "original source" too.
Are you recommending rel="author"?
Also, reading that full post, there is a section added at the end recommending rel="canonical".
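For reference, rel="canonical" simply declares which URL is the authoritative version of a page's content. A minimal sketch, with a placeholder URL:

```html
<!-- In the <head> of any duplicate or parameterized variant of a page,
     pointing at the preferred URL (placeholder): -->
<link rel="canonical" href="https://www.example.com/original-article/" />
```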
-
Always have a sitemap.xml file that includes all the URLs you want indexed. Right after publishing, submit the sitemap.xml file (or files, if there are tens of thousands of pages) through Google Webmaster Tools and Bing Webmaster Tools. Include the meta "original-source" tag in your page headers.
Include a copyright line at the bottom of each page with the site or company name, and have that link to the home page.
None of this guarantees with 100% certainty that you'll get proper attribution; however, these are the best steps you can take in that regard.
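Putting the on-page steps above into markup, a minimal sketch; the domain, URL, year, and company name are placeholders:

```html
<!-- In the <head> of each article page: the "original-source" meta tag,
     pointing at the page's own canonical URL (placeholder URL). -->
<meta name="original-source" content="https://www.example.com/articles/my-article/" />

<!-- At the bottom of each page: a copyright line linking to the home page. -->
<p>&copy; 2011 <a href="https://www.example.com/">Example Company</a></p>
```

The sitemap.xml itself is then submitted through the Google and Bing Webmaster Tools dashboards as soon as the site goes live.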