Web 3.0 = Deja Vu All Over Again

David Churbuck had a great post about the NYT article on Web 3.0.

A direct link to John Markoff’s NYT piece here…

It seems like deja vu all over again to me.

I remember in 2000, as the bubble was starting to burst, it seemed everyone had a “semantic search” or “contextual keywording” solution in the works. Brilliant “artificial intelligence” applications that were going to change the way the web worked, that would do it better, faster and more accurately than any lowly subeditor. There were the truly amazing feats of PowerPoint, and the equally evocative product evangelists pitching truly smart apps and 3D semantic search modelers that now seem more like early tag clouds than anything else. Companies like Autonomy, Metatagger and a slew of others I’ve forgotten offered promises that they would surely transform the net with their utterly brilliant algorithms, written by a CEO who, they always mentioned in hushed tones, *might* be the smartest person they’d ever met.

They all seemed to gravitate toward discussions of how search and keywording had trouble contextually dealing with terms like “apple and washington” – does the user really want a computer or a fruit? Adding the word Macintosh to the fray doesn’t help much either. Great discussions that one could noodle on forever, and easily repeat in meetings later to impress the boss, as the concept of contextualization was, er, neat.
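
If you want to see just how thin the trick was, here’s a minimal sketch of that kind of “contextual keywording” in Python. The context vocabularies, the scoring and the tie-breaking are my own illustration, not anything Autonomy or the rest actually shipped:

```python
# A toy "contextual keywording" disambiguator: guess which sense of a
# term a document means by counting nearby context words for each sense.
# The vocabularies below are illustrative assumptions, not a real product's.

CONTEXTS = {
    "computer": {"macintosh", "hardware", "software", "os", "chips", "ibm"},
    "fruit": {"macintosh", "orchard", "cider", "pie", "harvest", "growers"},
}

def disambiguate(document):
    """Score each sense of 'apple' by overlap with its context vocabulary."""
    words = set(document.lower().split())
    scores = {sense: len(words & vocab) for sense, vocab in CONTEXTS.items()}
    best = max(scores, key=scores.get)
    # Ties are exactly where these systems fell over: "Macintosh" alone
    # votes for both senses at once.
    if list(scores.values()).count(scores[best]) > 1:
        return "ambiguous"
    return best

print(disambiguate("Washington orchard growers shipped apple cider"))     # fruit
print(disambiguate("Apple shipped new Macintosh hardware and software"))  # computer
print(disambiguate("I bought a Macintosh in Washington"))                 # ambiguous
```

Add a million documents and a PowerPoint deck and you’ve got yourself a 2000-vintage semantic search startup.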

The problem was the computers just weren’t that smart. The scheme always relied on developing a huge overarching taxonomy for the way you do business, then devoting resources (as in those subeditors they didn’t think so much of before) to “train the system.” The rub is they weren’t really teaching the system so much as catching its obvious errors, like when it mistook a fruit for a computer, every single stinking time.
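
In practice the “training” looked less like machine learning and more like the workflow below, a sketch from memory; the names and the override table are mine, for illustration, not any vendor’s actual system:

```python
# A sketch of the taxonomy-plus-"training" workflow: an automatic tagger
# backed by a growing table of manual corrections. Illustrative only.

taxonomy_rules = {
    "apple": "Technology/Computers",  # the algorithm's confident default
}

manual_overrides = {}  # doc_id -> category, filled in by the subeditors

def tag(doc_id, text):
    for term, category in taxonomy_rules.items():
        if term in text.lower():
            # The "trained" answer is really just a human's earlier fix.
            return manual_overrides.get(doc_id, category)
    return "Uncategorized"

# Day one: wrong, every single stinking time.
print(tag(101, "Washington apple harvest hits record"))  # Technology/Computers

# "Training the system" = a subeditor catching the error after the fact.
manual_overrides[101] = "Food/Produce"
print(tag(101, "Washington apple harvest hits record"))  # Food/Produce
```

Notice that nothing generalizes: document 102 gets mis-tagged all over again until somebody logs another override.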

So now we’re back to the point where our smart@ss computers are going to make things happen better, faster and more accurately than our subeditors can. The problem I see is this: while my computer today is much faster than the one I ran in 2000, I don’t see it as particularly smarter. In fact, it’s no better at discerning canned spiced ham product in my inbox than its predecessors were. So these are the machines that are going to fix things for us?

Nope, I see the Semantic Web, or Web 3.0, as yet another “nuclear” term for buzzword bingo. Whenever it gets mentioned, I’ll be pulling a Costanza – “That’s it for me, goodnight everybody…”

4 thoughts on “Web 3.0 = Deja Vu All Over Again”

  1. what venom! have you been playing with Verity? iPhrase?

    i worked in “information retrieval” back in the day. nothing to do with that movie “Brazil.” but pre-GOOG. thomas.loc.gov. we called her InQuery….

    first of all, i hear ya.

    but i tend to be optimistic. like Star Trek fans should. here’s why.

    first, it’s not Knowledge Management.

    second, what was lacking to me, and what makes the semantic web a wee bit different, is the standards. yeah, i know we had the roots of xml then and even xml itself when you’re talking autonomy. but for whatever reason we did stuff in pseudo-SGML. our own DTD.

    third, open source.

  2. I drank the Kool-aid once upon a time and found it had a little somethin’ extra in it…

    Let’s just say I’m a bit jaded. But I have seen decent implementations. There were some guys out of Montreal whose company name escapes me that had built a cool little tagging app that I was able to get working against my test CMS… and it actually did a fairly good job (although the plan called for the ubiquitous ‘system teaching’ period).

    You might be onto something with the standards & open source. We truly stunk at standards. Right on the money when you mention the atrocities we perpetrated in SGML and with our own DTDs. Kind of like writing the Ten Commandments and adding a codicil on great lines to pick up married women.
