Tue. Mar 5th, 2024

  I’ve learned recently that one big topic at Pubcon was combating duplicate content, scrapers, and splogs (spam blogs). The thought is to battle these web polluters with a sitemap standard, or rather a standard meant to guarantee a claim of content ownership.

The concept is this: once an author creates new content and pings a sitemap service, this “first contact” establishes the claim of ownership over the content. This sounds like a good idea to me, but I can imagine an increased number of unresolved disputes between spammers and legit writers, as well as larger media houses trying to lay claim to copyrighted materials that aren’t theirs.
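A minimal sketch of that “first contact” idea, assuming a hypothetical central registry keyed by a hash of the content (the class and names here are illustrative, not part of any real proposal):

```python
import hashlib
import time

class ClaimRegistry:
    """Hypothetical first-contact ownership registry: the first party
    to ping with a given piece of content is recorded as the claimant."""

    def __init__(self):
        self._claims = {}  # content hash -> (claimant, timestamp)

    def ping(self, author, content):
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if digest in self._claims:
            # Content already claimed; the earlier claimant wins.
            return self._claims[digest][0]
        self._claims[digest] = (author, time.time())
        return author

registry = ClaimRegistry()
first = registry.ping("legit-writer", "my original article text")
second = registry.ping("scraper", "my original article text")
# first == "legit-writer"; the scraper's later ping does not override it
```

Of course, this only shows why disputes are inevitable: whoever pings first wins, whether or not they actually wrote the content.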

So I say to this: yes, let’s make another stupid protocol, another obscure thing to Stumble Upon, or another book to read at Barnes and Noble. I classify all this stuff under blahblah-ML.

Anyway, I was totally thrilled when Google came out with their own sitemap standard, which is pretty awesome. But I don’t want to use it to resolve content ownership disputes; that is a waste of time. We are going to have dupers, scrapers, and splogs regardless, so enforcing a sitemap standard to claim content ownership will only create new scams, which will prompt new standards, continuing a never-ending cycle of junk.
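For context, the sitemap standard Google introduced (now maintained at sitemaps.org) is just an XML file listing a site’s URLs, which is all it needs to be. A minimal example with a placeholder URL; only `<loc>` is required, the other tags are optional hints:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/blog/some-post</loc>
    <lastmod>2024-03-05</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Nothing in this format says anything about who owns the content, which is exactly the point: it describes URLs for crawling, not authorship.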

By admin

2 thoughts on “Universal Sitemaps Protocol…Just Another Stupid Standard”
  1. Sitebases, the next protocol after Sitemaps…
    It can save time and bandwidth for both the search engines and the websites.
    It could enable a new kind of search engine, call it Search Engine 2.0.
    Using the Sitebase protocol would save 95% of bandwidth or more. It is another example of the long tail theory.
    In this protoco…

  2. I understand about the viral stuff, but it looked like I got redirected from my banking site to something that looked like it. But once I took the “2” out of the www2.xxxxxxxx, your site came up in plain HTML.
    What’s up with that?
