By user13366078 on Apr 21, 2008
Last week, I attended a meeting of the BITKOM Working Group for Knowledge Engineering & Management at the Sun Frankfurt office. The meeting was very nicely organized by Mr. Weber, Mr. Neuwirth and some colleagues from Sun in Germany (Hi Hansjörg, you should really blog!) and Peter Reiser from Sun in Switzerland. So I got to play host at the meeting without having to do too much work :).
Peter asked me to present his work on Community Equity (see also this interview with Shel Israel and this other one with Robert Scoble) and the CE 2.0 project to the group. The working group was very interested in how to encourage communities to participate and how Community Equity mechanisms can be used towards this goal. We had quite a few positive discussions during the breaks.
But some people seem to be concerned about tracking community contribution and participation automatically; for example, see Mike's post on the subject and Alec's reaction to Peter's interview. These are all very valid thoughts, and indeed nobody wants to see their work or life reduced to a couple of numbers.
As always, the threat is not in the technology, but in the way we use it:
- Measuring stuff is a good thing, if you know what you measure and how accurate that measurement is.
- Telling people how their work is being received is also a good thing. I always get a kick out of the HELDENFunk download statistics (We should probably start publishing them), or my own blog's metrics. This is a huge motivator.
- Telling people how other people's work has been received is also a good thing. Nobody would put that kind of trust in eBay if it weren't for its rating system. How many books have you bought on Amazon based on other people's recommendations, stars, etc. on the site?
- Web 2.0 style commenting, crosslinking, social networking, tagging and rating is also a good thing. Much of the web 2.0 world today would be untrustworthy, unnavigable and useless if it weren't for those mechanisms.
- The next step is to take these concepts, and apply them to an enterprise context. This is what Peter's Community Equity work is all about. The goal I see here is: If you do a good job, others should be able to notice (including, but not limited to, your manager). If you're looking for an expert on topic X, you should be able to find people that may be able to help you. If you are talking to person Y or if you run into that person as part of a team, you should be able to see what kind of work that person has contributed to the enterprise before and what others are saying about them. Think Amazon and eBay and LinkedIn ratings, recommendations, tags etc. as a tool to better navigate the social network and knowledge base of your enterprise.
Notice that the part where discussions become heated is not the technology part, it's the "what do we do with the numbers" part. That, of course, is where we need to be careful. We need to understand how the data is generated, how it has been processed (i.e. the exact rules and formula used to generate the Community Equity score) and what it does not tell us. You may trust your latest auction winner to transact with you on that particular sale, but you still don't know if she is actually a nice person or not :).
As long as the process is open, well-understood and transparent, using Web 2.0 mechanisms and Community Equity style metrics can be a very useful thing. You can generate a lot of useful information based on that kind of data: What are the hot topics? Which documents are the most used, best rated, most re-used ones? Who are the company-internal creators, connectors and consumers of knowledge? Which topics are having trouble getting picked up by the community? Sounds like fascinating stuff, if you're responsible for your company's knowledge...
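To make the "understand the exact rules and formula" point concrete, here is a minimal sketch of what an equity-style score could look like. The real Community Equity formula is not spelled out in this post, so the event types and weights below are purely hypothetical, chosen only to show why transparency about them matters: change a weight and you change who looks like an "expert".

```python
from collections import defaultdict

# Hypothetical event weights -- NOT the actual Community Equity formula,
# just an illustration of a transparent, inspectable scoring rule.
WEIGHTS = {
    "document_posted": 10,
    "comment_received": 2,
    "rating_received": 3,
    "download": 1,
}

def equity_scores(events):
    """Aggregate a per-person score from (person, event_type, count) tuples."""
    scores = defaultdict(int)
    for person, event_type, count in events:
        scores[person] += WEIGHTS.get(event_type, 0) * count
    return dict(scores)

events = [
    ("alice", "document_posted", 2),   # 2 * 10 = 20
    ("alice", "download", 40),         # 40 * 1 = 40
    ("bob", "comment_received", 5),    # 5 * 2 = 10
    ("bob", "rating_received", 4),     # 4 * 3 = 12
]
print(equity_scores(events))  # {'alice': 60, 'bob': 22}
```

Because the rule is just a published table of weights, anyone can check how their number came about, which is exactly the kind of openness argued for above.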
Of course, this was only a small part of the BITKOM meeting. We heard presentations by other companies on different applications of knowledge management technologies in a customer service context. Interestingly, all of them (including CE 2.0) mentioned the term Ontology in one way or another. In a knowledge management context, an Ontology is the part of the system that relates "words" or other abstract data to real-world concepts and objects, resolving ambiguities, consolidating synonyms and correcting user errors. It's the part of the system that tries to bring in semantic knowledge, as opposed to merely processing words.
Ontologies are very hard to do. That's why most of the time they are generated "by hand", which is very time- and resource-consuming. The holy grail of ontologies is a system that can automatically generate semantic meaning out of naked data by itself, without any help. Some of these systems are seeded with hand-made ontologies that can then expand somewhat automatically.
An interesting approach to generating ontologies might be to analyze web 2.0 style tagging data that has been created by users. An ontology system could then try to identify clusters of tags and assign them to a real-world concept, then try to identify relationships between those concepts. As an example, the tags "LDAP", "Directory Server" and "DS" all belong to the same concept, and they are related to (but not the same as) "Identity Management", "IdM", and "Databases". A search engine can then use this data to find better matches for a user who is looking for "Identity Management and LDAP interoperability".
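The clustering idea above can be sketched very simply: count how often two tags appear together on the same item, and merge tags into one concept when they co-occur often enough. This is a deliberately crude stand-in for real ontology induction (the tagging data and the threshold are made up for illustration), but it shows how "LDAP"-style synonym clusters could fall out of raw tagging behavior.

```python
from itertools import combinations
from collections import Counter

# Illustrative tagging data: each set is the tags one user attached to one
# item. Tag names follow the example in the post; the data itself is made up.
tagged_items = [
    {"LDAP", "Directory Server"},
    {"LDAP", "Directory Server"},
    {"LDAP", "DS"},
    {"DS", "LDAP"},
    {"Identity Management", "IdM"},
    {"IdM", "Identity Management"},
    {"IdM", "LDAP"},  # related concepts, but they co-occur only once
]

def cooccurrence_clusters(items, min_count=2):
    """Merge tags into clusters when they co-occur at least min_count times."""
    counts = Counter()
    for tags in items:
        for a, b in combinations(sorted(tags), 2):
            counts[(a, b)] += 1

    # Union-find over tags: frequently co-occurring tags end up in one set.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for (a, b), c in counts.items():
        if c >= min_count:
            union(a, b)

    clusters = {}
    for tags in items:
        for t in tags:
            clusters.setdefault(find(t), set()).add(t)
    return sorted(sorted(c) for c in clusters.values())

print(cooccurrence_clusters(tagged_items))
# [['DS', 'Directory Server', 'LDAP'], ['IdM', 'Identity Management']]
```

Note that the single "IdM"/"LDAP" co-occurrence stays below the threshold, so the two concepts remain separate clusters; a real system would record that weaker link as a "related to" relationship rather than discarding it.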
As you can see, even a seemingly dry and academic workshop on "Knowledge Engineering and Management", organized by an industry association, can be exciting, sometimes transcending the boundaries between technology, philosophy and everybody's daily web 2.0 style work.