Duplicate Content And SEO

How duplicate content affects a site's SEO performance is a very old question. As with most ranking factors, the interpretation depends on the latest search engine algorithms. Still, some rules have stayed more or less the same for a long time.

The most important thing you should know about duplicate content and SEO (a concern for most webmasters trying to grow their organic traffic) is that there is a big difference between copied and duplicate content. Semantics aside, Google evidently treats duplicate content differently from copied content, the difference being the INTENT and nature of the duplicated text.




This latest advice from Google is useful in that it clarifies Google’s position; we quickly paraphrase it below:


– There is no duplicate content penalty
– Google rewards UNIQUENESS and the signals associated with ADDED VALUE
– Google FILTERS duplicate content
– Duplicate content can slow Google down in finding new content
– XML sitemaps are just about the BEST technical method of helping Google discover your new content
– Duplicate content is probably not going to set your marketing on fire
– Google wants you to concentrate signals in canonical documents, and it wants you to focus on making these canonical pages BETTER for USERS.
– For SEO, it is not necessarily the abundance of duplicate content on a website that is the real issue. It’s the lack of positive signals: without unique content or added value, nothing will help you rank faster and better in Google.
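Since the points above single out XML sitemaps as about the best technical way to help Google discover new content, here is a minimal sketch of one; the domain and dates are hypothetical placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per canonical page; example.com is a placeholder -->
  <url>
    <loc>https://www.example.com/duplicate-content-and-seo/</loc>
    <lastmod>2017-06-01</lastmod>
  </url>
</urlset>
```

List only canonical URLs here: a sitemap full of duplicate variants works against the goal of concentrating signals.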

A sensible strategy for SEO would still appear to be to reduce Googlebot crawl expectations and consolidate ranking equity and potential in high-quality canonical pages, and you do that by minimizing duplicate or near-duplicate content.
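Consolidating signals in a canonical page is usually done with a rel="canonical" link element. A sketch, with example.com as a placeholder URL:

```html
<!-- Placed in the <head> of every duplicate or near-duplicate version -->
<link rel="canonical" href="https://www.example.com/duplicate-content-and-seo/">
```

Each variant page then points search engines at the one version you want indexed, so links and ranking signals to any copy accrue to the canonical URL.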




Duplicate content is any text repeated in more than one URL, whether internal or external. This is what happens when your site generates multiple copies of the same page, or when a spammer copies one of your articles.
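Internal duplicates often come from URL variants of the same page (http vs. https, with and without www, trailing slashes, tracking parameters). One common remedy is a 301 redirect to a single host; a sketch assuming an Apache server with mod_rewrite enabled, using example.com as a placeholder:

```apache
# Force one canonical host and protocol with 301 redirects
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

On other servers (nginx, IIS) the same idea applies with that server's own redirect syntax.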




Incorrect pages – Having different pages for the same content means leaving the choice of the correct page to the search engine. This is not a good idea, since it may choose a version other than the one you want.

Worse visibility – As a consequence of the above, the search engine may end up showing a lower-weight copy, and therefore rank it worse than the preferred version would rank.

Poor indexing – Indexing of your pages may suffer because the search engine spends its crawl time on duplicate pages. If duplicate content makes up a significant portion of the site, the search engine will visit the important pages less frequently.

Waste of links – Duplicate pages can receive links and dilute the strength of your content, since all those links could (and should) be joining forces on a single page.

Misattribution – The search engine can decide that your content originates from another domain and exclude your pages from its results. It’s unlikely, but it happens.




If you’ve signed up for Google webmaster tools, this is definitely the best starting point. Go to Search Appearance > HTML Improvements and look for duplicate title tags and meta descriptions. The report tells you how many duplicates exist and on which pages they were found, so you can correct them. There is also Copyscape, which offers an easy free service and a more advanced paid one. Ideally, you will use GWT first and then check the flagged pages with Copyscape.
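Beyond GWT and Copyscape, a rough near-duplicate check between two pieces of text can also be scripted. A minimal sketch using Python's standard difflib; the 0.9 threshold is an arbitrary assumption for illustration, not an official cutoff used by any search engine:

```python
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Return a 0..1 ratio of how similar two texts are."""
    return SequenceMatcher(None, text_a, text_b).ratio()

def is_near_duplicate(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Flag texts whose similarity ratio exceeds an (assumed) threshold."""
    return similarity(text_a, text_b) >= threshold

original = "Duplicate content is any text repeated in more than one URL."
copied   = "Duplicate content is any text repeated in more than one URL!"
print(is_near_duplicate(original, copied))  # near-identical strings are flagged
```

For whole sites you would fetch and compare page bodies in bulk, but the same ratio-based idea applies.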




In short, the duplicate content and SEO question has a simple answer. Google rejects duplicate content but does not penalize it. What it does is filter it so that it does not appear in the results, which is punishment enough. However, sites that systematically copy and/or rewrite the content of others are penalized. The famous Panda algorithm was designed for that mission.