Duplicate Content: Avoid Search Engine Penalties

In the online world, search engine rankings are everything. If your site doesn’t come up in the search engines for key phrases, all your hard work will never be seen – or not seen enough to make much of a difference. Because rankings are such a big deal, people have invented countless tricks – collectively known as search engine spam – to fool the engines into displaying a page higher in the search results.

For every new trick, the coders behind our favorite search spots come up with a new filter. These filters are designed to weed out “fraudulent” results – pages that intentionally try to trick the search engine. It’s bad business for places like Google to return results that aren’t what the user is looking for – that user will turn elsewhere. In turn, it’s bad business for anyone wanting to be listed at places like Google to try tricking them – the engines will simply drop those pages and look elsewhere for quality content to list.

One of the latest methods of trying to trick the search bots is through duplicate content. It’s also the latest addition to the list of filters – and one of the ways you might, even accidentally, find yourself dropped from the listings.

What Duplicate Content Is

The idea that it only takes one bad apple to spoil the bunch definitely applies here. There are many of us who don’t intend to do anything to trick the search engines, and who work our butts off trying to come up with keywords and phrases that might snag us a better ranking. We build quality links, earn quality inbound links, and make sure there’s plenty of content for a search engine “spider” to read.

All it took was a few bad apples abusing the idea of duplicate content to cause the rest of us a new hassle.

In its most obvious form, duplicate content is a page that is an exact copy of another page. In the past, many people thought that creating copies of their content would earn them higher rankings – in other words, that by repeating the same words in different spots on their website, they could convince the popular search engines their content was keyword rich. To an extent, it worked: having several pages with the same content earned several listings in a single search.

With new filters in place, duplicate content can take on more subtle meanings:

  • Identical Pages – The obvious form where one website offers the exact same pages and content as another website. Here’s where it gets tricky, though. Some websites have “sister” sites – associated websites that generally look similar, and often contain identical content. Many search engines will now filter these sites out completely as spam.
  • Scraped Content – Blogs run the highest risk of being filtered out under this category. Scraped content means someone has taken the wording from one website’s page and rearranged it to look different. They might change a word here and there to alter the voice of the writing, but on the whole, the content is exactly the same as the other site’s. Look at it this way: a search engine’s robot goes through your page looking at individual words. It counts the number of times certain words (keywords) are repeated. If the actual words are the same as those found on another web page, the robot won’t care what order they came in – it just does the math (a rough sketch of that comparison appears after this list).
  • E-Commerce – It’s blissfully easy to have your own store now. E-commerce scripts like osCommerce are free, easy to set up, and allow anyone with the slightest amount of computer knowledge to sell stuff. The problem with e-commerce sites is that many people running them copy the manufacturer’s description for the products they sell. Right away, there’s a problem – that content is already repeated on the manufacturer’s website. And here comes another: how many other sites selling the same products are also using the manufacturer’s description? Again, the filters will cancel these listings out.
  • Article Distribution – This might be the most important category of all, because it applies in so many ways. First, for authors who publish their content online, it means you must drastically alter any article you write to avoid the search engine filters. If you write an article on, for example, the benefits of jojoba for skin care and publish it on one site, you’ll want to take a whole new angle and use completely different wording before publishing it on another website. This category also affects the many websites, blogs, and e-zines that use content listed on free-reprint websites. Beyond the original reprint site, how many other sites are using that same free content?

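To make the word-counting idea from the Scraped Content category concrete, here is a minimal sketch in Python of a bag-of-words comparison. This is only an illustration of the general principle – no search engine publishes its actual algorithm, and the sample sentences here are made up for the example.

    from collections import Counter
    import math
    import re

    def word_counts(text):
        """Lowercase the text and count each word, ignoring word order."""
        return Counter(re.findall(r"[a-z']+", text.lower()))

    def cosine_similarity(a, b):
        """Compare two word-count vectors; 1.0 means the same words in the same proportions."""
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm_a = math.sqrt(sum(n * n for n in a.values()))
        norm_b = math.sqrt(sum(n * n for n in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    original = "Jojoba oil softens dry skin and soothes irritation."
    scraped = "Soothes irritation and softens dry skin: jojoba oil."

    score = cosine_similarity(word_counts(original), word_counts(scraped))
    print(round(score, 2))  # 1.0 -- same words, different order

Rearranging the sentence doesn’t change the counts at all – which is exactly why shuffling someone else’s wording won’t get past the filter.
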
As you’re building your website – and at least once a year while you run it – take a serious look at each of these categories and make sure they don’t apply to the pages you’re publishing.

One method I’ve heard recommended many times (and one that makes a lot of sense) is to set aside one day each month to concentrate on fixing a single category. If I ran an e-commerce website, I might decide that on the 15th of every month I’d sit down and rewrite product descriptions to make sure they were as unique as possible. In other words, one category probably applies to your type of site more than any other. Concentrate on that one and hit the others as you go along to avoid being penalized for duplicate content.
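
If you want to make that monthly session systematic, a small script can flag the descriptions that still lean on the manufacturer’s copy. Here is a rough sketch using Python’s standard difflib module; the example strings and the 0.6 cutoff are assumptions for illustration, not recommended values.

    from difflib import SequenceMatcher

    def too_close(yours, manufacturer, cutoff=0.6):
        """Flag a description that still tracks the manufacturer's wording too closely."""
        ratio = SequenceMatcher(None, yours.lower(), manufacturer.lower()).ratio()
        return ratio >= cutoff

    manufacturer_copy = "This lamp features a brushed-nickel finish and a 60W bulb."
    my_copy = "This lamp features a brushed-nickel finish and includes a 60W bulb."

    if too_close(my_copy, manufacturer_copy):
        print("Rewrite this one -- it still reads like the manufacturer's description.")

Anything the script flags goes on the rewrite list for that month’s session.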
