Monday Mar 19, 2007

NetBeans Day, Paris

The Sun Tech Days conference has moved to Paris and is taking place under the Grande Arche de la Défense, the third triumphal arch, from which on a clear day you can see the second arch at the Étoile, and in the far distance the first, standing in front of the Louvre.

NetBeans is becoming more and more impressive. I just uploaded a number of photos on flickr. There you can see

  • Romain Guy (blog) elegantly bringing us up to date with Matisse and the ease of linking it to data sources.
  • Ludovic Champenois (blog) and Alexis Moussine-Pouchkine (blog) giving an overview of the latest J2EE and Web Services abilities of NetBeans.
  • Roman Strobl (blog) going into the NetBeans profiler, how to build on the NetBeans platform, and a lot more.
  • Petr Suchomel (blog) showing off some really impressive elements of the mobility pack: most memorable of all a graphical programming environment to write cell phone applications, and Java powered vector graphics cell phones.
  • John Treacy (blog) directing all of this from the front bench.
  • An attentive crowd of developers who had come from all over France. (The other developers were at the Solaris track in another room.)

Time to go to bed; the next installment comes tomorrow morning.

Monday Mar 12, 2007

Java leads in SemWeb Tools

According to Michael Bergman's excellent SemWeb Tools Survey Java is used in 49% of the 500 tools developed for the Semantic Web. A distant second is JavaScript with 13% coverage.

As a long-time Java developer (I went to the first JavaOne conference in 1996), I find this great confirmation of my initial choice. The energy I have invested in learning Java and its libraries has been amply rewarded. And now that Java is GPLed, I feel confident that no one will ever be excluded from joining this great community, that these investments are safe, and that great things will be built thereupon.

Part of the reason for Java's success here is that it has delivered on its "write once, run everywhere" promise whilst attaining absolutely stunning levels of efficiency. As Jonathan Rentzsch argued so well in Programmers Don't Like to Code, we just want to solve problems. The idea of having to debug something for every other platform feels like a huge waste of energy. Java is therefore a natural choice: a lot more can be done, and a much larger audience reached, with a lot less work.

The Semantic Web is language and platform neutral though, and as the tools survey shows, every platform in existence now has a hook into it.
One of my favorite tools is Python based; I use it every day to read and query rdf files, and it contains a powerful N3 based rules engine which points to the next step in the development of the Semantic Web.
Another very interesting tool I came across recently is AllegroGraph, the Lisp based 64-bit RDF data store by Franz. Lisp was perhaps my other favorite language, and I can never completely forget the amazing Lisp machines I saw in the early '80s, with their color A4 screens, on which I watched rotating three-dimensional texture-mapped spheres rendered in real time.

But to tell the truth, I think I only know a small fraction of all the tools listed by Michael. There is something for everyone there, and more than any one person can digest. (Unless your name is Michael Bergman :-)

Friday Mar 02, 2007

Baetle: Bug And Enhancement Tracking LanguagE ( version 0.00001 )

So here is an absolute first draft version of a Bug ontology, just starting from information I can glean from bugs in the NetBeans repository. It really is not meant to be anything other than a place to start from, to see what the field looks like, and to try out a few ideas. There is a lot that needs to be changed. If possible, other ontologies should be reused. But for the moment we just need to work out whether this can be of any use...

The idea is that this should end up being a solid Bug Ontology that could be used by bugzilla and other repositories, to enable people to query for bugs across repositories. So if people want to join in and help out: I am currently developing it on the mailing list, just because I have access to that repository and so don't need to create another space immediately.

Before sitting down and writing out the OWL it is just a lot easier to diagram these in UML. So that's what I have done here.

Here is a description of bug 18177 from the NetBeans bug tracker, using the above outline. It is in N3.

@prefix : <> .
@prefix foaf: <> .
@prefix sioc: <> .
@prefix awol: <> .
@prefix xsd: <> .

<>     a :Feature;
       :status "resolved";
       :resolution "fixed";
       :qa_contact [ foaf:mbox <> ];
       :priority "P1";
       :component [ :label "openide";
                    :version "3.6" ];
       :bugType "looks";
       :assigned_to <>;
       :reporter [ :nick "jtulach" ];
       :interested [ :mbox <> ];
       :interested [ :mbox <> ];
       :updated "2003-12-11T06:25:15"^^xsd:dateTime;
       :dependsOn <>;
       :blocks <>;
       :hasDuplicate <>;
       :description <>;
       :comment <>;
       :attachment <>.

<> a :Description;
     :content [ awol:body """We need a reasonable way how to organize looks so we can easily locate them for
given representation object in a way that makes it probable that correct look
will be found. We need a way for different modules to replace (or lower
priority) of some look and provide their own. We need use to be present with
available looks (which make some sence on that object) and being able to
pernamently change the order of looks available for given object.""";
                awol:type "text/plain" ];
     :author [ foaf:mbox <> ];
     :created "2001-11-29T08:17:00"^^xsd:dateTime .

I have hyperlinked a couple of the relations above, to show exactly how what we are doing is just describing the web. (RDF stands for Resource Description Framework, after all.) Click on the links and you will get a description of what that thing is about. There is already one big improvement here over the xml format provided by the service (see the xml for bug 18177 for example): we don't have to send the content of an attachment around. We just refer to it by a URL.

The bugs have the #it anchor, to distinguish the bug from the page which describes it.

The URLs for the code should clearly be the URLs of the http repository versions. Then we should be able to link the code to packages, which link up to libraries, which link up to products; and we can link any of those to bugs.

So what could one do with all that information available from a SPARQL endpoint?
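As a sketch of an answer, here is the sort of query one could then ask. The property names follow the draft above; the b: prefix URI is purely illustrative, since Baetle's namespace is not yet fixed:

```sparql
# Find unresolved P1 bugs that block other bugs, with their assignees.
PREFIX b: <http://example.org/baetle#>

SELECT ?bug ?assignee ?blocked
WHERE {
    ?bug b:priority "P1" ;
         b:status ?status ;
         b:blocks ?blocked ;
         b:assigned_to ?assignee .
    FILTER (?status != "resolved")
}
```

Run across the endpoints of several repositories, a query like this would let one track everything blocking a release, regardless of which bug tracker the issues live in.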

Thursday Mar 01, 2007

Web 3.0 at JavaOne and Jazoon

JavaOne will finally present a session on the Semantic Web. This is an opportunity to speak to some of the millions of Java programmers, who in my opinion will find it a lot easier to understand than xml people have. RDF is a graph of resources, and Java a graph of objects. Java developers are comfortable with UML diagrams and abstractions. The tools are there. It's just a matter of seeing what can be done.

Tim Boudreau helped get me enthusiastic about presenting something at JavaOne, so we will be presenting together a session currently entitled Web 3.0: this is the Semantic Web. I only have two weeks left to prepare the slides, which won't be enough to get all the demos I would like ready. Luckily the Birds of a Feather session this will be part of has, I am told, a much more flexible timetable, and some changes may be possible even quite late.

I will be giving the same talk, probably improved by then, at Jazoon a month later.

So there is a lot of work on the table, and I should probably seriously reduce my blogging contributions if I am going to get all this done!

OpenId and SAML

Paul Madsen illustrates the relation between OpenId and SAML

Having looked at OpenId I got to wonder a little how this links in with other technologies such as SAML.

One nice thing is that it looks like we can have one URL identifier and use both services. Pat Patterson recently showed in a nice video how one can use the same id to work with OpenId and SAML. His solution is simply to add a meta tag in the head of the html, like this:

<meta http-equiv="X-XRDS-Location" content="">
This brings one to a YADIS file which lists the various types of identification services one wishes to use with one's id. [0] The YADIS file links to a SAML file with identification information and the url of the authentication server. From there on the processes look quite similar to those of OpenID, except that the information passed to and fro is in more complex xml documents.

So we have two more indirections than the simplest OpenId example, or only one more than Sam Ruby's nice OpenId howto [1]. So what does one gain? Well, SAML is understood to be enterprise ready and proven to work with very large installations, which are the use cases it set out to solve. This of course comes at the cost of more complexity, which may or may not be covered by open source projects such as OpenSSO.

Some interesting links I came across doing this research:

[0] It also shows a horrible OASIS urn. Why does OASIS always use urns instead of urls?
[1] Notice how this could have been cut down to no indirection with the use of rdf vocabularies. The YADIS and SAML files could have been combined, and they could in turn have been combined with the information at the openid resource...

Monday Feb 26, 2007

High Tech Vienna

Museum of Modern Art, Vienna

Last week I traveled to Vienna for a few days meeting with Andreas Blumauer, Alois Reitbauer, and Max Wegmüller (Sun) to work on Semantic Web/tagging related ideas. These were a few very intense days and evenings with discussions going late into the night at the Heurigen.

We looked at Semantic Wikis - especially the Java based IkeWiki and the famous MediaWiki (see comparison) - for ideas on how one could link search, tagging and wikis. IkeWiki has some very nice features, including relation completion, which is somewhat akin to method completion in modern Java IDEs: if IkeWiki knows the type of the resource it is on, it can use ajax calls to list a number of possible relations. IkeWiki is probably not stable enough for immediate deployment though, as we had trouble after Andreas entered a contradictory statement into it [1]. I will have to play with this more to get a better understanding of where things are going in this area. Please send me any suggestions of other cool semantic wikis I should look at.

Another thing I am going to have to look into in more detail now is the question of the scalability of Semantic Web tools and the size of the applications that have been deployed. People don't yet have a good feeling for the size of the projects being developed currently, and it worries them. There are large databases out there now that can handle billions of triples, such as AllegroGraph, BigOWLIM, or Oracle's 10g database... Projects such as neuroweb, as described in "Semantic Web Meets e-Neuroscience: An RDF Use Case", are using the Semantic Web in big ways. I don't like to talk about things I don't know about, so I should probably find the biggest projects, interview some of the people there and blog about it.

After three days of hard work I had a short visit around Vienna. The picture I had of Vienna was one of an old town full of beautiful old monuments, and so I was nicely surprised to find something completely different. On Friday evening I walked into a Cafe near the Semantic Web School, on Lerchenfelder Gürtel, and found an excellent funk band called Groove Coalition playing (pix of bar, pic of band). The next day I walked around Vienna and ended up in the museum district, where I visited the museum of modern art (pictured above). As I had to be at the airport at 5am I decided to stay overnight in the Cafe Leopold which stayed open until 4am.

On the whole it was a very enjoyable stay, and I wish I could have stayed longer. A full set of pictures is available on my flickr account under the tag vienna.

[1] Andreas made a page be both a foaf:Document and a foaf:Person, which are defined as disjoint sets: foaf:Person owl:disjointWith foaf:Document. In fact this must be a tricky thing to do correctly in a semantic wiki, as the URL for the page has to be a Document and not the thing one wishes to describe. To do this right, each page should therefore have an #about anchor which would name the thing the page is about, such as Java or the Black Box. If a page needs one anchor, it may as well have a number of them, so that one could describe a number of concepts on the same page, which may well sometimes be handy...

Thursday Feb 22, 2007

sparql4j: a jdbc4 driver

Andreas Blumauer pointed me to sparql4j, an open source JDBC 4.0 driver for SPARQL. Compose SPARQL queries and get JDBC result sets back. Weird. But why not :-)

The esw wiki maintains a list of SPARQL endpoints to try it out on...

Thursday Feb 15, 2007

Java gained ground in 2006!

A very interesting article by Greg Luck, "And the Programming Language of 2006 is ...", has as its answer: Java!

Apparently a lot of C++ programs are now being ported to Java thanks to generics. Furthermore, it turns out that strong typing does have some important speed advantages, as well as making such nice refactoring tools possible.

via: Andrew Newman's More News.

Wednesday Feb 14, 2007

JSR-311: a Java API for RESTful Web Services?

JSR 311: Java (TM) API for RESTful Web Services has been put forward by Marc Hadley and Paul Sandoz from Sun. The initial expert group members come from Apache, BEA, Google, JBoss, Jerome Louvel of the very nice RESTlet framework, and TmaxSoft.

But it has also provoked some very strong negative pushback. Elliotte Rusty Harold does not like it at all:

Remember, these are the same jokers who gave us servlets and the URLConnection class as well as gems like JAX-RPC and JAX-WS. They still seem to believe that these are actually good specs, and they are proposing to tunnel REST services through JAX-WS (Java API for XML Web Services) endpoints.

They also seem to believe that "building RESTful Web services using the Java Platform is significantly more complex than building SOAP-based services". I don't know that this is false, but if it's true it's only because Sun's HTTP API were designed by architecture astronauts who didn't actually understand HTTP. This proposal does not seem to be addressing the need for a decent HTTP API on either the client or server side that actually follows RESTful principles instead of fighting against them.

To give you an idea of the background we're dealing with here, one of the two people who wrote the proposal "represents Sun on the W3C XML Protocol and W3C WS-Addressing working groups where he is co-editor of the SOAP 1.2 and WS-Addressing 1.0 specifications. Marc was co-specification lead for JAX-WS 2.0 (the Java API for Web Services) developed at the JCP and has also served as Sun's technical lead and alternate board member at the Web Services Interoperability Organization (WS-I)."

Heavy words indeed.

Roy Fielding is dead set against the name:
Marc, I already explained to Rajiv last November that I would not allow Sun to go forward with the REST name in the API. It doesn't make any sense to name one API as the RESTful API for Java, and I simply cannot allow Sun to claim ownership of the name (which is what the JSR process does by design). Change the API name to something neutral, like JAX-RS.

Jerome Louvel, whose RESTlet framework is very promising, has a fuller explanation of what is being attempted in this JSR.

I am still not so sure what they want to do exactly, but it seems to be based on JRA which has annotations of the form:

public List<Customer> getCustomers();

public void addCustomer(Customer customer);
which frankly does seem a little weird. But I am willing to give it some time to understand. Marc Hadley has more details about what he is thinking of.

If one were to standardize an api why not standardize the RESTlet API? That makes more immediate sense to me. It is working code, people are already participating in the process, and the feedback is very good. Now I don't know exactly how JSRs work and what the relationship is between the initial proposal and the final solution. Is the final proposal going to be close to the current one, with those types of annotations? Or could it be something much closer to RESTlets?

On the other hand I am pleased to see a JSR that proposes to standardize annotations. That makes me think the time may be ripe to think of standardizing the @rdf annotations mapping POJOs to the Semantic Web, the way so(m)mer and elmo are doing.
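To make that concrete, here is a minimal sketch of the @rdf idea using nothing but reflection. The annotation and class names are hypothetical stand-ins of my own, not so(m)mer's or elmo's actual API; only the foaf property URIs are real.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

class RdfAnnotationSketch {
    // Hypothetical @rdf annotation mapping a field to an RDF property URI.
    @Retention(RetentionPolicy.RUNTIME)
    @interface rdf { String value(); }

    // A POJO whose fields are declared to correspond to foaf properties.
    static class Person {
        @rdf("http://xmlns.com/foaf/0.1/name") String name;
        @rdf("http://xmlns.com/foaf/0.1/mbox") String mbox;
    }

    // A mapper framework would read the mapping back by reflection like this.
    static String propertyFor(Class<?> c, String fieldName) {
        try {
            rdf ann = c.getDeclaredField(fieldName).getAnnotation(rdf.class);
            return ann == null ? null : ann.value();
        } catch (NoSuchFieldException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(propertyFor(Person.class, "name"));
    }
}
```

A framework can then save or load such objects by walking the annotated fields and emitting or reading the corresponding triples; standardizing the annotation is what would let so(m)mer, elmo and others interoperate.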

More reading:

Some blogs on the subject:

So in summary: it is great that REST has attracted so much attention as to stimulate research into how to make it easy for Java developers to do the right thing by default. Let's hope that out of this something is born that does indeed succeed in reaching that goal.

Off topic: Elliotte Rusty Harold recently wrote up 10 predictions for xml in 2007, where he gives a very partial description of RDF, correctly pointing out the importance of GRDDL but completely missing the importance of SPARQL.

Tuesday Feb 13, 2007

mSpace: web 2.0 meets web 3.0 meets iTunes

Have you ever found the category browsing of iTunes to be a little limited? If so you have to try out mSpace, a Web 2.0 music browser, but also a whole new way of thinking about exploring relational data. Before reading any further just try it out!

So what does that mspace application do? If you have used iTunes and you view it in Browser mode by hitting ⌘B for example, you will have noticed that you are only confronted with three selection panes titled "Genre", "Artist" and "Album". You can't add any more, nor can you re-arrange them. Well the default is good as a default, but if you like to listen to classical music, then you may find that constraining your search by "Artist" is really not quite as interesting as constraining it by "Composer". So really you would like to have three columns "Genre","Composer","Album". This is what mSpace allows you to do. Not only that, but you can add any number of other columns and rearrange these columns any way you want by dragging and dropping them. You can then use this to search the information space the way that makes most sense to you.
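The column mechanics can be sketched in a few lines: each facet is just a projection of the data, and the visible columns are filters applied in whatever order the user has arranged them. Everything here (the Track record, the facet names) is invented for illustration, not mSpace's actual code.

```java
import java.util.*;
import java.util.function.Function;

class FacetBrowser {
    record Track(String genre, String composer, String album) {}

    // Facets the user can place in any column order, mSpace-style.
    static final Map<String, Function<Track, String>> FACETS = Map.of(
            "Genre", Track::genre,
            "Composer", Track::composer,
            "Album", Track::album);

    // Apply the selection made in each column, in the user's chosen order.
    static List<Track> select(List<Track> tracks, List<String> columns, Map<String, String> chosen) {
        List<Track> result = new ArrayList<>(tracks);
        for (String col : columns) {
            String want = chosen.get(col);
            if (want != null)
                result.removeIf(t -> !FACETS.get(col).apply(t).equals(want));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Track> lib = List.of(
                new Track("Classical", "Bach", "Goldberg Variations"),
                new Track("Classical", "Mozart", "Requiem"),
                new Track("Jazz", "Monk", "Monk's Dream"));
        // A classical listener drops the Artist column and narrows by Composer.
        System.out.println(select(lib, List.of("Genre", "Composer"),
                Map.of("Genre", "Classical", "Composer", "Bach")));
    }
}
```

Swapping "Artist" for "Composer" is then nothing more than changing the columns list; no new schema is needed, which is precisely the mSpace point.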

As interesting as the UI is the theory behind it. Based on some four year old Semantic Web research (see their papers), this recent implementation makes all the points in an instant. For a detailed description of the thinking behind it, it is worth reading "Applying mSpace Interfaces to the Semantic Web", which gives a Description Logic basis for their work (a Description Logic being, in short, an object oriented declarative logical formalism).

A Java version, called jSpace, is being implemented by Clark & Parsia. It looks like one would just need to resuscitate the work on jTunes and, presto, one could have something a lot more interesting than iTunes that works on all platforms. The theory behind this is certainly going to be really useful in helping me implement Beatnik.

Digg it.

Friday Feb 09, 2007

Beatnik: change your mind

Some people lie, sometimes people die, people make mistakes: one thing's for certain you gotta be prepared to change your mind.

Whatever the cause, when we drag information into the Beatnik Address Book (BAB) we may later want to remove it. In a normal Address Book this is straightforward. Every vcard you pick up on the Internet is a file. If you wish to remove it, you remove the file. Job done. Beatnik though does not just store information: Beatnik also reasons.

When you decide that information you thought was correct is wrong, you can't just forget it. You have to disassociate yourself from all the conclusions you drew from it initially. Sometimes this can require a lot of change. You thought she loved you, so you built a future for you two: a house and kids you hoped, and a party next week for sure. Now she's left with another man. You'd better forget that house. The party is booked, but you'd better rename in. She no longer lives with you, but there with him. In the morning there is no one scurrying around the house. This is what the process of mourning is all about. You've got the blues.

Making up one's mind

Beatnik won't initially reason very far. We want to start simple. We'll just give it some simple rules to follow. The most useful one perhaps is to simply work with inverse functional properties.

This is really simple. In the friend of a friend ontology the foaf:mbox relation is declared as being an InverseFunctionalProperty. That means that if I get the graph at I can add it to my database like this.

If I then get the graph at

I can then merge both graphs and get the following

Notice that I can merge the blank nodes in the two graphs because they each have the same foaf:mbox relation to the same resource. Since there can only be one thing related to that mbox in that way, we know they are the same node. As a result we can learn that :joe knows the person with that home page, and that the same person foaf:knows :jane; neither of those relations was known (directly) beforehand.

Nice. And this is really easy to do. A couple of pages of Java code can work through this logic, add the required relationships and merge the required blank nodes.
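A minimal sketch of that logic, with triples as plain string arrays and "_:" marking blank nodes. The helper names are mine, and a real store (Jena, Sesame) would of course do this against its own triple representation:

```java
import java.util.*;

class IfpSmusher {
    // foaf:mbox is declared an InverseFunctionalProperty in the foaf ontology.
    static final String MBOX = "foaf:mbox";

    // A triple is just {subject, property, object}.
    static List<String[]> smush(List<String[]> triples) {
        // Map each (ifp, object) pair to the first subject seen with it.
        Map<String, String> owner = new HashMap<>();
        Map<String, String> rename = new HashMap<>();
        for (String[] t : triples) {
            if (!t[1].equals(MBOX)) continue;
            String key = t[1] + " " + t[2];
            String first = owner.putIfAbsent(key, t[0]);
            if (first != null && !first.equals(t[0]))
                rename.put(t[0], first);   // same mbox => same node: merge
        }
        // Rewrite every triple with the merged node names.
        List<String[]> out = new ArrayList<>();
        for (String[] t : triples)
            out.add(new String[]{rename.getOrDefault(t[0], t[0]), t[1],
                                 rename.getOrDefault(t[2], t[2])});
        return out;
    }

    public static void main(String[] args) {
        List<String[]> g = List.of(
                new String[]{"_:a", MBOX, "mailto:m"},
                new String[]{"_:b", MBOX, "mailto:m"},
                new String[]{"_:b", "foaf:knows", ":jane"});
        // After smushing, _:b's statements are carried by _:a.
        for (String[] t : smush(g)) System.out.println(String.join(" ", t));
    }
}
```

Chained merges (a third blank node later found equal to _:b) and the removal of duplicate triples are left unhandled to keep the sketch short.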

Changing one's mind

The problem comes if I ever come to doubt what Joe's foaf file says. I could not just remove all the relations that spring from or reach the :joe node, since the relation that Henry knows Jane is not directly attached to :joe, and yet that relation came from Joe's foaf file.

Not trusting :joe's foaf file may be expressed by adding a new relation <> a :Falsehood . to the database. Since doing this forces changes to other statements in the database, we have what is known as non-monotonic reasoning.

To allow the removal of statements and the consequences those statements led to, an rdf database has to do one of two things:

  • If it adds the consequences of every statement to the default graph (the graph of things believed by the database), then it has to keep track of how those facts were derived. Removing a statement will then require searching the database for statements that relied on it and nothing else, in order to remove them too, provided the statement one is attempting to remove is not itself the consequence of other things one believes (tricky). This is the method that Sesame 1.0 employs, described in “Inferencing and Truth Maintenance in RDF Schema - a naive practical approach” [1]. The algorithm Jeen and Arjohn develop shows how this works with RDF Schema, but it is not very flexible. It requires the database to use hidden data structures that are not available to the programmer, so in our case, where we want to make inverse functional property deductions, we will not easily be able to adapt their procedure to our needs.
  • Not to add the consequences of statements to the database, but to do Prolog like backtracking when answering a query over the union of only those graphs that are trusted. So for example one could ask the engine to find all people. Depending on a number of things, the engine might first look for things related to the class foaf:Person. It would then look at things related to subclasses of foaf:Person, if any. Then it might look for things that appear in relations whose domain is foaf:Person, such as foaf:knows. Finally, with all the people gathered, it would check whether any of them are in fact the same.
    All this could be done by trying to apply a number of rules to the data in the database in attempting to answer the query, in a Prolog manner. Given that Beatnik has very simple views on the data it is probably simple enough to do this kind of work efficiently.
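The first option, tracking justifications, can be sketched naively: every derived statement remembers its premises, and retracting a fact sweeps away whatever can no longer be supported. This ignores the tricky case the first bullet mentions, a fact with several independent derivations, and all the names here are invented for illustration.

```java
import java.util.*;

class NaiveTms {
    // Facts asserted directly, and derived facts with the premises they came from.
    final Set<String> asserted = new HashSet<>();
    final Map<String, Set<String>> derivedFrom = new HashMap<>();

    void assertFact(String f) { asserted.add(f); }

    void derive(String f, Set<String> premises) { derivedFrom.put(f, new HashSet<>(premises)); }

    // Retract a fact and, transitively, everything that depended on it.
    void retract(String f) {
        asserted.remove(f);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Iterator<Map.Entry<String, Set<String>>> it = derivedFrom.entrySet().iterator(); it.hasNext();) {
                if (!supported(it.next().getValue())) { it.remove(); changed = true; }
            }
        }
    }

    // A derivation is supported only while all its premises are still believed.
    boolean supported(Set<String> premises) {
        for (String p : premises)
            if (!asserted.contains(p) && !derivedFrom.containsKey(p)) return false;
        return true;
    }

    boolean holds(String f) { return asserted.contains(f) || derivedFrom.containsKey(f); }

    public static void main(String[] args) {
        NaiveTms tms = new NaiveTms();
        tms.assertFact(":joe foaf:mbox m");
        tms.derive(":henry foaf:knows :jane", Set.of(":joe foaf:mbox m"));
        tms.retract(":joe foaf:mbox m");
        System.out.println(tms.holds(":henry foaf:knows :jane")); // false: the consequence went with its premise
    }
}
```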

So what is needed to do this well is the following:

  • notion of separate graphs/context
  • the ability to easily union over graphs of statements and query the union of those easily
  • defeasible inferencing or backtracking reasoning
  • flexible inferencing would be best. I like N3 rules, where one can make statements about rules belonging to certain types of graphs. For example it would be great to be able to write rules such as { ?g a :Trusted. ?g => { ?a ?r ?b } } => { ?a ?r ?b }, a rule to believe all statements that belong to trusted graphs.

From my incomplete studies (please let me know if I am wrong) none of the Java frameworks for doing this are ideal yet, but it looks like Jena is at present the closest. It has good reasoning support, but I am not sure it is very good yet at making it easy to reason over contexts. Sesame is building up support for contexts, but has no reasoning abilities right now in version 2.0. Mulgara has very foresightedly always had context support, but I am not sure if it has Prolog like backtracking reasoning support.

[1] “Inferencing and Truth Maintenance in RDF Schema - a naive practical approach” by Jeen Broekstra and Arjohn Kampman

Wednesday Jan 31, 2007

The F3 Object Model

Chris Oliver, who developed the very neat Swing user interface scripting language F3, describes the F3 object model in a recent blog entry of his. I mention it because every time he explains it to me, I find the parallels with the RDF data model so striking that I can only attribute them to the fact that we are homing in on something absolutely fundamental. I keep thinking F3 has a great chance of being a major contender for a Semantic Web user interface scripting language.

Here is what he says:

I wanted an easy way of creating Model/View GUI's. To that end I wanted a simple, object-oriented system. The informal conceptual basis for this I borrowed from Martin and Odell.

To summarize the important points (informally):
  • Classes correspond to the concepts we use to identify the common characteristics of the things around us and how they relate to each other.
  • Thus a class declares a set of potential relationships (links) between objects.
  • Once an object has been classified we can navigate those relationships to discover its properties, i.e. the other objects related to it.
  • In F3 these properties are called attributes.
  • In F3 functions are queries that navigate links or produce new objects in terms of existing ones, but do not modify the links of existing objects.
  • In F3 all change consists of adding new objects or of adding, removing, or replacing the links between existing objects.
  • In F3 Events are simply notifications of object instantiation or of "link addition", "link removal", or "link replacement"
  • In F3 operations in addition to performing queries can sequentially perform one or more such link modifications. Such modifications may be sequenced with conditional logic and by selection and iteration over query results.
  • The values of an object's attributes may be specified either through explicit assignment or by means of a bound query. In the latter case implicit modification of the attribute occurs whenever the inputs to the query expression change and produce a new result.
  • So when a change occurs one way to specify the "effect" is to define a bound query that expresses how other objects depend on it.
  • The other way to respond to change is by making further explicit modifications using triggers. A trigger is an operation that is performed whenever an insert, delete, or replace event which applies to it occurs.
As far as the syntax of F3, it's intended to be familiar to mainstream programmers whose primary language is derived from C (C++, Java, JavaScript, PHP, etc), but also includes features from query languages (OQL, XQuery, SQL).

To address each one of those points I have put together the correspondences below: each of Chris's statements (F3) is followed by how it maps to Semantic Web ideas (RDF and tools).

F3: Classes correspond to the concepts we use to identify the common characteristics of the things around us and how they relate to each other.
RDF: Not too dissimilar from how OWL defines a class: a class defines a group of individuals that belong together because they share some properties.

F3: Once an object has been classified we can navigate those relationships to discover its properties, i.e. the other objects related to it.
RDF: The object oriented way of navigating a graph of objects is indeed by starting from one object and finding how it relates to other objects by following the values of the (private, protected or public) fields of an object.

F3: In F3 these properties are called attributes.
RDF: In rdf the properties of an object are called either properties or relations. But "attribute" is just as good for the purposes at hand. [1]

F3: In F3 functions are queries that navigate links or produce new objects in terms of existing ones, but do not modify the links of existing objects.
RDF: Functions are like small inferencing engines in some way. They find relations between objects by going from known relations to new ones, but they do not add any facts to the database.

F3: In F3 all change consists of adding new objects or of adding, removing, or replacing the links between existing objects.
RDF: RDF specs do not really deal with changes to the state of a database. That is outside the scope of the specification, but all programmatic tools such as Jena or Sesame have methods to help one do such things.
  • Those frameworks that keep very close to the basic rdf model such as Jena and Sesame give one ways of creating resources and setting their properties. This is how Jena adds a new relation:
    // some definitions
    String personURI    = "http://somewhere/JohnSmith";
    String givenName    = "John";
    String familyName   = "Smith";
    String fullName     = givenName + " " + familyName;
    // create an empty Model
    Model model = ModelFactory.createDefaultModel();
    // create the resource
    //   and add the properties cascading style
    Resource johnSmith
      = model.createResource(personURI)
             .addProperty(VCARD.FN, fullName)
             .addProperty(VCARD.Given, givenName)
             .addProperty(VCARD.Family, familyName);
    There are other methods to remove properties from a resource...
  • More object oriented frameworks - usually built on the simpler frameworks above - such as Elmo or so(m)mer work like F3 in a more OO fashion. Elmo uses getters and setters to change object properties. So(m)mer, like F3, just changes link relations by setting the fields directly. In F3, for example, the following adds a new relation between a person and their age:
    person.age = 23
    In so(m)mer this would be done the same way..
F3: In F3 Events are simply notifications of object instantiation or of "link addition", "link removal", or "link replacement".
RDF: Now this is the great originality of F3: making the event mechanism transparent.

Tools such as Jena provide such a mechanism, as described in the Event handling docs.

Tools such as so(m)mer would have values change automatically if they were linked to an inferencing database. But none of them would generate the right kinds of events to control a swing gui, which is what F3 does in the background, saving one a huge amount of event handling code.

In F3 operations in addition to performing queries can sequentially perform one or more such link modifications. Such modifications may be sequenced with conditional logic and by selection and iteration over query results.
The values of an object's attributes may be specified either through explicit assignment or by means of a bound query. In the latter case implicit modification of the attribute occurs whenever the inputs to the query expression change and produce a new result.
So when a change occurs, one way to specify its effect is to define a bound query that expresses how other objects depend on it.
The other way to respond to change is by making further explicit modifications using triggers. A trigger is an operation that is performed whenever an insert, delete, or replace event that applies to it occurs.
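Both responses to change can be sketched by hand in plain Java (this is not F3 syntax, just an illustration of the two ideas): a "bound query" that always reflects its inputs, and a "trigger" that runs on a replace event:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntSupplier;

public class BoundValue {
    // A mutable attribute whose triggers run whenever its value is replaced:
    // a hand-rolled sketch of F3's implicit change propagation.
    static class Attribute {
        private int value;
        private final List<Runnable> triggers = new ArrayList<>();

        Attribute(int value) { this.value = value; }
        int get() { return value; }
        void onReplace(Runnable trigger) { triggers.add(trigger); }
        void set(int v) { value = v; triggers.forEach(Runnable::run); } // replace event
    }

    public static void main(String[] args) {
        Attribute age = new Attribute(23);
        // "bound query": doubled always reflects the current age when evaluated
        IntSupplier doubled = () -> age.get() * 2;
        // "trigger": a further explicit action performed on each replace
        age.onReplace(() -> System.out.println("age is now " + age.get()
                + ", doubled = " + doubled.getAsInt()));
        age.set(24); // prints: age is now 24, doubled = 48
    }
}
```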

One thing that seems very clear from the above, and from having asked Chris Oliver about the implementation of F3, is that a triple database such as Jena could provide the base on which the F3 language is built. Jena has the key components: ways to add relations to a store and an event listening mechanism. F3 is the layer above that makes programming user interfaces easy. Jena is of course heavyweight for most uses of F3, but perhaps not all - and perhaps one could have some interface to make that decision at run time. The upside looks just too good. With a little work F3 could be the Semantic Web UI scripting language.

[1] It occurs to me writing this that I would perhaps want to reserve the word "attribute" for links an object has to a literal, and the word "relation" for links to another object... As far as the usage of the words goes in either F3 or RDF it makes no real difference.

Thursday Dec 14, 2006

Universal Drag And Drop

As the world becomes completely interconnected, we need to be able to drag and drop anything across computers, between operating systems and across continents. There is one easy way to do this, which we have been using for a long time without perhaps realizing it: by using URLs to send people web pages we like. Now we can generalize this to drag and drop anything.

In RDF we can use URLs to name anything we want: people, for example. My foaf name is a URL; if you GET it by clicking on the link, the resource will return a representation in rdf/xml describing me and the people I know. Using this feature I will be able to drag and drop my foaf name onto the AddressBook I am writing. It will ask the resource for its preferred representation and, from the returned rdf, deduce that the given url points to a foaf:Person, and so display those connections it understands in the associated graph.

But we could go one step further with this mechanism. The operating system could easily be adapted to use this information to change the mouse cursor when the drag event occurs: by fetching a cached representation (REST is designed for caching), or by querying the cached graph, it could find the type of the resource (a foaf:Person) and so display a stick person - or, if a relationship to a logo is available, the logo of the person, with perhaps their initials printed below.
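As a rough sketch of the type-sniffing step: a real client would fetch the representation with a GET (sending Accept: application/rdf+xml) and hand it to a proper RDF parser such as Jena; the illustration below inlines the rdf/xml and handles only the typed-node abbreviation, where the element name of the description is the resource's type:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DropTypeSniff {
    // Find the type of a dropped resource from its rdf/xml representation,
    // so that the right cursor icon can be chosen during the drag.
    static String typeOf(String rdfXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(rdfXml.getBytes(StandardCharsets.UTF_8)));
        // In the typed-node abbreviation the first child of rdf:RDF is the typed node.
        Element typed = (Element) doc.getDocumentElement().getFirstChild();
        return typed.getNamespaceURI() + typed.getLocalName();
    }

    public static void main(String[] args) throws Exception {
        String rdf = "<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'"
                   + " xmlns:foaf='http://xmlns.com/foaf/0.1/'>"
                   + "<foaf:Person rdf:about='http://example.org/#me'/></rdf:RDF>";
        // a foaf:Person -> show a stick-person cursor
        System.out.println(typeOf(rdf));
    }
}
```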

All one needs for drag and drop are URLs - metadata is just one GET away.

Wednesday Nov 15, 2006

Jazoon call for SemWeb papers and demos

The Jazoon Java conference to be held end of June 2007 in Switzerland has opened its call for papers. I am on the program committee, and am pushing for Semantic Web technologies to be represented in the conference. Other program committee members are supportive.

There is a huge amount of open source Java work going on in this area and it is time for these technologies to be presented and explained to the larger Java community, which represents over 4 million developers.

Please have a look at the Call for Papers and contact me with your ideas as soon as possible so that I can get some initial feedback from the other members.

I am also looking at what can be done for the Java One conference in 2007, so I will try to pass on any suggestions or proposals you send me for Jazoon on to Java One organizers too.

Thursday Mar 23, 2006

Google Video introduces the Semantic Web

A few months ago I put together a slide show to introduce the Semantic Web. In order to make the problem less abstract I present the Semantic Web through a very practical problem faced by software engineers, first presented by Fred Brooks in the 1970s in a very influential book, The Mythical Man Month [1]. Simply put, adding more engineers to a project does not make it go faster. So how could the SemWeb affect software development in an Open Source world, where there are not only many more developers, but where these are also distributed around the world with no central coordinating organisation? Having presented the problem, I then introduce RDF and Ontologies, how this meshes with the SPARQL query language, and then show how one could use these technologies to make distributed software development a lot more efficient.

Having given the presentation in November last year, I spent some time over Xmas putting together a video of it (in h.264 format). The result is not too bad for a first attempt at adding sound to a slide show, though I have to admit it may at points be a little slow. It takes time to do this well, and I don't have time to improve it. So the video of my slide show presentation is a little long at 30 minutes, but it should be a good introduction for people with software engineering experience.

Then last week I thought it would be fun to put it online, and so I placed it on Google Video, where you can still find it. But you will notice that Google Video reduces the quality quite dramatically, so you will really need to have the pdf side by side if you wish to follow. If you can view the latest mpeg4 format (H.264), then you will find this movie a lot clearer to watch [2].

  1. The great thing about making things public is that 10 minutes after I did this, it was pointed out to me that I had misnamed Brooks in the presentation. Ouch! It's easy to fix the slides, but fixing the video is going to be less pleasant :-(
  2. This movie is served with mime-type video/h264 but for some reason it does not open quicktime on OSX automatically when using Safari.


