Monday Nov 02, 2009

Ten Requirements for Achieving Collaboration #6: Data Accessibility for People and Computers

The data contained within information artifacts must be accessible to both people and machines. We will cover three main advantages that data accessibility for people and computers delivers:

1. High relevance leads to lean systems.
2. People want relevant information, not potentially relevant hits.
3. Context drives relevancy; delivery drives efficiency.

We are in the midst of a series investigating collaboration. We previously wrote about the two types of collaboration, intentional and accidental. INTENTIONAL: where we get together to achieve a goal. ACCIDENTAL: where you interact with something of mine and I am never aware of your interaction. While intentional collaboration is good, it is not where the bulk of untapped collaborative potential lies. Accidental collaboration is. The challenge, though, is to intentionally facilitate accidental collaboration. For the full list of ten requirements, see the original post.

Last time I wrote about requirement #5: why data must be referencable and portable. This time we will continue on that theme and discuss why the data we made portable and referencable must still be accessible to both people and computers.

First, remember that the data we're talking about is not nicely contained in a row or cell of a traditional relational database. The data we're interested in, and that we have been talking about, is the data that exists inside documents, web pages, images, and other information artifacts. So in one way at least, the information is already human accessible: it is in a document or other information artifact, after all, and those are typically created by people for people. Parsed and extracted data that is referencable is still accessible because we do not fundamentally alter the original container (i.e., the document). Any good enterprise information architecture must include a fully-fledged ECM (enterprise content management) system for this reason.
There needs to be a place to store the original source documents, images, videos, and web pages. Computers and systems should also have no problem accessing the data we derived from the artifacts in the previous posts. After the data is parsed, extracted, and marked up in the ways we've previously described, it is stored in a computer-referencable system such as a database, an RDF store, or a linked combination of similar stores and indexes. Computers and systems can access that data (assuming, of course, that network connections are established and maintained). Indeed, many SOA and service bus integration layers have been doing similar things for some time: they can access transaction, web service, and request data and attach it to the brokered request while bringing along original documents and other unstructured information files as payload.
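As a rough sketch of the idea (the URIs, predicate names, and schema here are my own illustrations, not from any particular product), extracted facts can be stored as subject-predicate-object triples whose subject is a stable reference back to the untouched source artifact:

```python
import sqlite3

# Minimal illustrative triple store: each fact extracted from a document is
# stored as (subject, predicate, object), where the subject is a stable
# reference to the original artifact. The artifact itself is never modified.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")

doc_uri = "ecm://repository/catalog/item-4711.pdf"  # hypothetical artifact reference
extracted = [
    (doc_uri, "dc:title", "Spring Catalog Entry 4711"),
    (doc_uri, "product:category", "shoes"),
    (doc_uri, "product:color", "blue"),
]
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", extracted)

# People read the original document; computers query the extracted data.
rows = conn.execute(
    "SELECT subject FROM triples WHERE predicate = ? AND object = ?",
    ("product:color", "blue"),
).fetchall()
print(rows)
```

Each query hit is a reference back to the source artifact, so the human-readable original and the machine-queryable data stay linked rather than duplicated.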
But did you notice what I just wrote there? The relevant data, as well as the containing or supporting unstructured data files, are attached to the request and passed around from system to transaction to data store to website. It is the equivalent of carrying around a file cabinet full of stock photos when all I really want is to sort catalog entries on blue shoes. "Blue" is important data that is accessible only to a human looking at a picture, or, best case, to a computer system that can parse attached metadata, assuming that "blue" was entered by a person somewhere further up the line (and not "teal", "aqua", or "navy"). But if a similar SOA request had access to the full complement of parsed and extracted data, it could carry only the data that was actually needed rather than the over-full payload it carries today.
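The contrast can be sketched like this (the field names are purely illustrative, not from any real service bus): instead of attaching every source file to the brokered request, the lean request carries only the extracted facts a downstream system needs, plus references back to the originals:

```python
# Illustrative only: contrast an over-full SOA-style payload with a lean one.

# Heavyweight request: drags the original artifacts along as opaque payload.
heavy_request = {
    "action": "sort_catalog",
    "attachments": ["item-4711.jpg", "item-4712.jpg", "stock-photos.zip"],
}

# Lean request: carries only the parsed and extracted data the downstream
# system needs, plus references back to the original artifacts.
lean_request = {
    "action": "sort_catalog",
    "filter": {"product:color": "blue"},
    "source_refs": ["ecm://repository/catalog/item-4711.pdf"],
}

def matches(request, entry):
    """Check a catalog entry against the request's extracted-data filter."""
    return all(entry.get(k) == v for k, v in request["filter"].items())

# The sorter never has to open an image to learn that the shoe is blue.
entry = {"product:color": "blue", "product:category": "shoes"}
print(matches(lean_request, entry))
```

The design point is that the color lives as queryable data alongside a reference to the picture, so the picture only travels when someone actually needs to look at it.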

Tuesday Sep 08, 2009

Sweet ORACLENERD Threads Winners

About

Enterprise 2.0 and Content Management
