TripleI: Web 2.0 meets Web 3.0
By bblfish on Sep 17, 2007
A week ago I was in Graz, Austria, for a conference called TripleI: iKnow, iMedia, iSemantics, bringing together researchers from the fields of cognitive science, media studies and semantic web technologies. There were some very interesting papers given there, which I shall speak of in due course. First a (very little) ego booster: I had my first paper presented here, written, it is true, together with Andreas Blumauer and Peter Reiser, who also turned up as a keynote speaker with some excellent slides (available from his blog).
The Long Tail
In “The Long Tail” Anderson pointed out that information technology is turning mass markets into a million niches. As the incremental costs of making goods available are lowered, companies can offer massive variety in their catalogues instead of one-size-fits-all blockbusters. This effect is even more pronounced for knowledge, which is by its essence diverse and which benefits most from the information revolution. As more and more people participate in the knowledge process, making more information than ever available, and as we move from an economy of scarcity to an economy of abundance, the problem shifts from finding information at all to being completely overwhelmed by it. The key to increased information productivity is therefore to improve the matchmaking process. Now the Long Tail in an organization is not so much the documents that encode the knowledge as the people who know. By making it easy to share information, you not only expose the hidden knowledge, you also expose the knowers themselves. The benefit comes from seeing who knows what, and from being able to engage them in more effective roles.
Just as companies try to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head, so must Knowledge Management systems strive for more decentralization. These must work with the decentralised nature of knowledge, linking individuals to individuals, picking up data wherever it exists, and linking it together. The universal linking nature of the semantic web infrastructure, with its ability to relate globally dispersed resources, clearly points to a very central role for this technology.
Data is the Next Intel Inside
Tim O’Reilly states that “Every significant internet application to date has been backed by a specialized database”. Indeed, enterprises are not only facing unforeseen growth in the complexity of their data, content and knowledge, they are also challenged by an ever increasing need to integrate data. Classification and semantic annotation (by a combination of user-driven, expert-driven and automatic measures) of all the information in an enterprise is the key to a successful implementation. While the use of RDF is not usually part of the Web 2.0 story, it clearly plays to its strengths here. Designed for worldwide data integration, using the well known URI to create a global information space, RDF is perfectly suited to be the new Intel Inside for the global distributed corporate database.
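To see why URIs make RDF such a good fit for integration, here is a minimal sketch in Python. The two data sources, the example.com URIs and the `worksOn` vocabulary term are all invented for illustration; triples are modelled as plain tuples, so merging two independently produced datasets is nothing more than set union.

```python
# Triples exported by a (hypothetical) HR database
hr_triples = {
    ("http://example.com/staff/alice", "http://xmlns.com/foaf/0.1/name", "Alice"),
    ("http://example.com/staff/alice", "http://example.com/vocab/worksOn",
     "http://example.com/projects/sunspace"),
}

# Triples exported by a (hypothetical) project wiki
wiki_triples = {
    ("http://example.com/projects/sunspace", "http://purl.org/dc/terms/title", "SunSpace"),
    ("http://example.com/projects/sunspace", "http://example.com/vocab/status", "active"),
}

# Integration step: because every node is a globally unique URI,
# merging two datasets is just set union -- no schema negotiation.
merged = hr_triples | wiki_triples

def describe(subject, graph):
    """Return every (predicate, object) pair known about a subject."""
    return {(p, o) for (s, p, o) in graph if s == subject}

# The URI shared between the two sources links the graphs together:
# Alice's project now connects to the wiki's description of it.
project = next(o for (s, p, o) in merged
               if s.endswith("/alice") and p.endswith("worksOn"))
print(describe(project, merged))
```

The point is not the five lines of code but the absence of any mapping table: the shared URI does the joining.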
Users add Value
“Involve your users both implicitly and explicitly in adding value to your application.” From a KM perspective this means that the more metadata users add to a knowledge object (by tagging, rating, commenting, and even clicking on things), the more precisely the important aspects of any asset can be calculated and evaluated. Amazon.com, for example, ranks products as "most popular" based not only on sales but also on what some call the "flow" around products. These actions and preferences take place on a WWW of information resources identified by URLs, so one needs to describe them as actions on, preferences for, and tags on resources identified by URLs. RDF, the Resource Description Framework, which uses Uniform Resource Identifiers as its cornerstone, clearly serves as the enabler for this Web 2.0 design pattern.
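A small sketch of what it means to describe user actions as statements about URL-identified resources, and how a "flow" measure falls out of that. The `ex:` vocabulary terms, the users and the documents below are all invented placeholders, not a published ontology:

```python
from collections import Counter
from itertools import count

_event_ids = count(1)

def tag(user, resource, label):
    """Describe one tagging action as RDF-style triples.
    The event itself gets a URI, so it too can be talked about later."""
    event = f"http://example.com/events/{next(_event_ids)}"
    return [
        (event, "ex:agent", user),
        (event, "ex:taggedResource", resource),
        (event, "ex:label", label),
    ]

graph = []
graph += tag("http://example.com/staff/alice", "http://example.com/docs/roadmap", "planning")
graph += tag("http://example.com/staff/bob",   "http://example.com/docs/roadmap", "2008")
graph += tag("http://example.com/staff/bob",   "http://example.com/docs/faq",     "support")

# A crude "flow" measure around a resource: how many distinct
# actions touch it, regardless of who performed them.
flow = Counter(o for (s, p, o) in graph if p == "ex:taggedResource")
print(flow.most_common(1))  # the resource with the most activity around it
```

Because each action is itself a resource with a URI, ratings, comments and clicks can be described the same way and aggregated by the same query.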
Network Effects by Default
The demand to “set inclusive defaults for aggregating user data as a side-effect of their use of the application” is one of the most obvious ways to transfer this design pattern straight into a Knowledge Management system: for example, each user tag improves the tag recommender of a KM system, which in turn helps produce better search results. Network effects can take many shapes and forms. Tags, for example, can also be disambiguated by linking them to a wiki, and allowing the wiki page owners and users to vote on their precise meaning. “The service automatically gets better the more people use it”, but the service need not be a single service. Interacting services (such as a wiki and a tagging engine) can use the distributed knowledge of the enterprise in completely unexpected ways.
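The tag-recommender example above can be made concrete with a toy co-occurrence recommender: every tagging action, recorded as a side effect of normal use, updates the statistics, so the suggestions automatically get better the more people use the system. The tags and the scoring scheme here are invented for illustration:

```python
from collections import defaultdict

# cooc[a][b] counts how often tag b was applied to the same
# resource as tag a, across all users.
cooc = defaultdict(lambda: defaultdict(int))

def record_tags(tags):
    """Side effect of one user tagging one resource:
    update every pairwise co-occurrence count."""
    for a in tags:
        for b in tags:
            if a != b:
                cooc[a][b] += 1

def recommend(tag, n=3):
    """Suggest the tags most often used alongside `tag`."""
    ranked = sorted(cooc[tag].items(), key=lambda kv: -kv[1])
    return [t for t, _ in ranked[:n]]

# Three users tag three resources in the course of their work...
record_tags(["semanticweb", "rdf", "sparql"])
record_tags(["semanticweb", "rdf"])
record_tags(["rdf", "owl"])

# ...and the recommender has already learned something.
print(recommend("rdf"))
```

Nothing here requires a single central service: a wiki could feed the same `record_tags` stream, which is the "interacting services" point above.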
Some Rights Reserved
For the most efficient data sharing this design pattern demands that we “follow existing standards, and use licenses with as few restrictions as possible”. Looking at today’s intranet solutions, content management systems and other systems where users generate or distribute content, it becomes obvious that a strong culture of ownership is making it a lot more difficult to merge information than it should be. Although more flexible ways to define intellectual property rights exist (like Creative Commons), this lesson has not been integrated into current KM systems. Data cannot flow if it is siphoned off behind legal barriers. One structural way to help align corporate interests with individual ones would be to make the cost of secrecy in an enterprise apparent.
The Perpetual Beta
The statement “One of the defining characteristics of internet era software is that it is delivered as a service, not as a product” reflects best what is meant by “perpetual beta”. When applying this design pattern, which is strongly linked to open source development practices, to a Knowledge Management system, we can support knowledge intensive processes in a much more flexible way. Instead of storing all knowledge in a centralised database, we should provide smart services which are constantly developed on top of insights gained from monitoring user behaviour, together with other users acting as co-developers.
Cooperate, Don't Control
This design pattern has been discussed for years throughout the KM community. Neither Web 2.0 nor Knowledge Management is a technological revolution: “The transformations the Web is subject to are not driven by new technologies but by a fundamental mind shift that encourages individuals to take part in developing new structures and content.” [Kolbitsch, 06] The question in the context of Knowledge Management is: How can we stimulate this mind shift? In our first use case we will consider if measuring the value of user contributions could be an answer.
Software Above the Level of a Single Device
The idea of software above the level of a single device (“What applications become possible when our phones and our cars are not consuming data but reporting it?”) makes technologies which support semantic interoperability on top of metadata standards even more necessary. From a KM perspective this means that knowledge generation and annotation must happen on top of standard formats like RDF. From a technical perspective Tim Berners-Lee’s proposal of an RDF bus deploying RDF mapping tools like D2R [Bizer, 03] seems to be an applicable solution which is already at hand.
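The core idea behind relational-to-RDF mappers like D2R can be sketched in a few lines: each database row becomes a resource URI, each column a predicate. This is only a toy illustration of the principle; the table, columns and URI patterns are invented, and the real D2R uses a declarative mapping language rather than code like this:

```python
def row_to_triples(table, primary_key, row):
    """Map one relational row to RDF-style triples:
    the primary key mints the subject URI, every other
    column becomes a predicate/value pair."""
    subject = f"http://example.com/{table}/{row[primary_key]}"
    return [
        (subject, f"http://example.com/vocab/{table}#{col}", value)
        for col, value in row.items()
        if col != primary_key
    ]

# A row as it might come back from a SQL query
employees = [
    {"id": "42", "name": "Alice", "dept": "research"},
]

triples = [t for row in employees
           for t in row_to_triples("employees", "id", row)]
for t in triples:
    print(t)
```

Once legacy tables are exposed this way, they join the same global information space as everything else on the RDF bus.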
Notice how Peter Reiser cleverly placed the "Web n+1" phrase in this paper, allowing him now to claim the wikipedia page for it :-) The paper then goes on to develop Peter's idea of Community Equity, a way of thinking about the link between participation and visibility inside a company, and of providing a framework for creating an architecture to help give people incentives to cooperate.