Syndicated Content Appearing Above Original
-
Hi.
I run a travel blog, and my content is often re-posted by related sites with a backlink to my article (and full credit, etc.), but the copies still rank above my article in Google.
Any ideas what I can do to stop this happening?
Thanks
-
I think it's partly because my content isn't 'blog-like': it's more like a travel guide plus an events guide and some products, so it might not look right to people. Either way, the drop was dramatic.
By the way, I was just looking at your site. Would you be open to a guest article? I run an Israel travel site and could write about the 'real Israel'! Of course, I can link in exchange, or whatever you like...
-
wow, that is amazing.
I've been encouraging our authors to get onto G+ to add authorship.
(I have 60 so far.)
The benefit is the author photo next to your results, but zero traffic improvement.
I wouldn't blame that on authorship either. There must be something wrong with my site that I haven't fixed yet, but I don't know what it could be, except that about two weeks ago I finished finding and removing the last of the duplicate descriptions and titles.
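For anyone setting this up, authorship at the time was typically wired up with a rel=author link from the article to the writer's Google+ profile. A minimal sketch, with a placeholder profile ID:

```html
<!-- On the article page: a rel=author link to the writer's Google+ profile.
     The profile ID below is a placeholder, not a real account. -->
<a href="https://plus.google.com/112345678901234567890" rel="author">
  About the author
</a>
```

The Google+ profile then needs a reciprocal "Contributor to" entry linking back to the blog, or the photo won't show next to the results.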
-
Maybe I need to look at that with some people. The problem is that many of my articles are only valid in the short term because they're about events (these are the ones people love to copy), so wouldn't it take too long for a DMCA takedown to kick in?
-
Though when I reversed the authorship, traffic instantly went back to the same level as before.
-
I would not blame that on authorship. I would blame it on an increased level of piracy. Eventually the copiers can strangle your appearance in the SERPs.
-
I understand. Those weasels do it with my content too.
When they copy, I often file DMCA complaints.
-
So annoying! I find that sometimes Google corrects itself after a week or two, sometimes not. Someone recommended I install authorship, which I did, and then found that this problem kind of solved itself, but overall search traffic fell by more than 25%...!
-
Welcome to my world, Ben.
There are thousands of sites that post our headline and a snippet of text, some with a link to us, some not.
It is very frustrating. Google buries our page and promotes those guys. Sometimes several of them show up across the results pages while our original page is nowhere to be found, filtered out as duplicate content; if you go to the end of the results and redisplay with the omitted results included, there we are on page 1.
I've been trying to overcome this for 18 months now, but I'm not getting anywhere.
-
The problem is that if they don't syndicate, they'll modify and copy instead.
-
It might. It might not.
Content syndication has both Panda and Penguin risks.
And you have the competitor problem.
-
Hmmm. If it isn't fully identical, then Google might display both, though, right?
-
They will still have a relevant title tag and relevant content.
(Add this to my original reply...)
The best way to get filtered from the search results is to have an article on another site linking to an identical article on your site.
-
The interesting thing is that sometimes it's tiny sites with much less authority that are ranking better.
Maybe if I got them to syndicate half the article with a "read more, click here" link, that would help?
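Something like this on their end, I mean (the headline and URL are just made up for illustration):

```html
<!-- A partial-syndication excerpt as it might appear on the partner's site;
     only the first half of the article is reproduced, with a link back. -->
<article>
  <h2>Top Ten Summer Events</h2>
  <p>Only the first half of the article text goes here...</p>
  <p><a href="https://www.my-travel-blog.example/summer-events/">Read the full article</a></p>
</article>
```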
-
This happens because the other sites that post your content have more authority and that gives them a higher ranking. What can you do to prevent this?
-- only syndicate to sites that have less authority than you
-- create different content for syndication than what appears on your website
-- stop syndicating
This is just one reason why I do not syndicate anything. It creates new competitors and feeds existing competitors.
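If a partner insists on running the full article, one thing worth asking for, assuming they're willing to edit their page template, is a cross-domain rel=canonical on their copy pointing back at your original, which tells Google which version to treat as the source. A minimal sketch with a placeholder URL:

```html
<!-- In the <head> of the syndicated copy on the partner's site;
     the href is a placeholder for the original article's URL. -->
<link rel="canonical" href="https://www.my-travel-blog.example/original-article/" />
```

There's no guarantee Google honors it in every case, but it is the standard signal for syndicated copies.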