Beatnik: change your mind

Some people lie, sometimes people die, and people make mistakes. One thing's for certain: you gotta be prepared to change your mind.

Whatever the cause, when we drag information into the Beatnik Address Book (BAB) we may later want to remove it. In a normal address book this is straightforward: every vCard you pick up on the Internet is a file, and if you wish to remove it, you remove the file. Job done. Beatnik, though, does not just store information: Beatnik also reasons.

When you decide that information you thought was correct is wrong, you can't just forget it. You have to disassociate yourself from all the conclusions you drew from it initially. Sometimes this can require a lot of change. You thought she loved you, so you built a future for the two of you: a house and kids, you hoped, and a party next week for sure. Now she's left with another man. You'd better forget that house. The party is booked, but you'd better rename it. She no longer lives with you, but there with him. In the morning there is no one scurrying around the house. This is what the process of mourning is all about. You've got the blues.

Making up one's mind

Beatnik won't initially reason very far. We want to start simple, so we'll just give it a few simple rules to follow. The most useful one, perhaps, is to work with inverse functional properties.

This is really simple. In the Friend of a Friend (foaf) ontology the foaf:mbox relation is declared to be an InverseFunctionalProperty: no two distinct things can have the same personal mailbox. That means that if I get the graph published in Joe's foaf file, in which :joe foaf:knows a blank node whose foaf:mbox is a given mailbox, I can add it to my database.

If I then get a second graph, in which a blank node with that same foaf:mbox foaf:knows :jane, I can merge both graphs and take their union.

Notice that I can merge the blank nodes in the two graphs because each has the same foaf:mbox relation to the same mailbox resource. Since there can only be one thing related to that mailbox in that way, we know the two blank nodes denote the same thing. As a result we learn that :joe knows a person whose home page we now have, and that this same person foaf:knows :jane; neither of those relations was known (directly) beforehand.

Nice. And this is really easy to do: a couple of pages of Java code can work through this logic, add the required relationships, and merge the required blank nodes.
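To make this concrete, here is a minimal sketch of that smushing logic in Python rather than Java (all the node names, URLs and mailboxes below are invented for illustration; the actual graphs from the example are not reproduced here):

```python
# Minimal sketch of inverse-functional-property "smushing".
# Triples are (subject, predicate, object) tuples; "_:x" names are blank nodes.
# All names, URLs and mailboxes below are invented for illustration.

FOAF = "http://xmlns.com/foaf/0.1/"
IFPS = {FOAF + "mbox"}  # foaf:mbox is declared an InverseFunctionalProperty

# Hypothetical graph from Joe's foaf file: :joe knows someone with a mailbox.
graph1 = {
    (":joe", FOAF + "knows", "_:a"),
    ("_:a", FOAF + "mbox", "mailto:friend@example.org"),
    ("_:a", FOAF + "homepage", "http://example.org/friend/"),
}

# Hypothetical second graph: someone with the same mailbox knows :jane.
graph2 = {
    ("_:b", FOAF + "mbox", "mailto:friend@example.org"),
    ("_:b", FOAF + "knows", ":jane"),
}

def smush(*graphs):
    """Union the graphs, merging nodes that share a value for an IFP.

    A fuller implementation would need union-find to handle chains of
    identifications; one rename pass is enough for this example.
    """
    triples = set().union(*graphs)
    canon = {}   # (ifp, value) -> first node seen with that value
    rename = {}  # node -> canonical node
    for s, p, o in sorted(triples):  # sorted, so the choice is deterministic
        if p in IFPS:
            if (p, o) in canon:
                rename[s] = canon[(p, o)]
            else:
                canon[(p, o)] = s
    fix = lambda n: rename.get(n, n)
    return {(fix(s), p, fix(o)) for s, p, o in triples}

merged = smush(graph1, graph2)
# The two blank nodes collapse into one, so we now also know that the
# person :joe knows foaf:knows :jane.
```

This is only the happy path; a real store would also have to index the IFP values so the merge does not require a full scan on every added graph.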

Changing one's mind

The problem comes if I ever come to doubt what Joe's foaf file says. I can't just remove all the relations that spring from or reach the :joe node, since the relation that Henry knows Jane is not directly attached to :joe, and yet that relation came from Joe's foaf file.

Not trusting :joe's foaf file may be expressed by adding a new relation such as <> a :Falsehood . to the database. Since adding this statement forces us to change other statements in the database, we have what is known as non-monotonic reasoning.

To allow the removal of statements, and of the consequences those statements led to, an RDF database has to do one of two things:

  • Add the consequences of every statement to the default graph (the graph of things believed by the database), and keep track of how each fact was derived. Removing a statement then requires searching the database for statements that relied on it and on nothing else, in order to remove them too, while checking that the statement one is removing is not itself a consequence of other statements one still believes (tricky). This is the method that Sesame 1.0 employs, described in “Inferencing and Truth Maintenance in RDF Schema - a naive practical approach” [1]. The algorithm Jeen and Arjohn develop shows how this works for RDF Schema, but it is not very flexible: it requires the database to use hidden data structures that are not available to the programmer, so in our case, where we want to make inverse functional property deductions, we will not easily be able to adapt their procedure to our needs.
  • Not add the consequences of statements to the database at all, but do Prolog-like backtracking when answering a query over the union of only those graphs that are trusted. For example, one could ask the engine to find all people. Depending on a number of things, the engine might first look for anything typed as foaf:Person. It would then look at anything typed as a subclass of foaf:Person, if any exist. Then it might look for things related by properties whose domain or range is foaf:Person, such as foaf:knows. Finally, with all the people gathered, it would check whether any of them are the same.
    All this could be done by attempting to apply a number of rules to the data in the database when answering the query, Prolog fashion. Given that Beatnik takes a very simple view of the data, it should be simple enough to do this kind of work efficiently.
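The first option, derivation tracking, can be sketched in a few lines of Python. This is not Sesame's actual algorithm, just a naive illustration with invented facts: every fact carries the set of justifications that support it, and retracting a fact deletes every fact left with no independent support.

```python
# Naive truth maintenance sketch: every fact records its justifications.
# A justification is the frozenset of facts it was derived from; the empty
# frozenset means the fact was asserted directly. Facts here are plain
# strings, invented for the example.

justifications = {}  # fact -> set of frozensets of supporting facts

def assert_fact(fact, derived_from=frozenset()):
    justifications.setdefault(fact, set()).add(frozenset(derived_from))

def retract(fact):
    """Remove a fact and, recursively, all facts left with no valid support."""
    justifications.pop(fact, None)
    changed = True
    while changed:
        changed = False
        for f, supports in list(justifications.items()):
            # Drop justifications that mention any fact no longer believed.
            valid = {s for s in supports
                     if all(dep in justifications for dep in s)}
            if valid != supports:
                justifications[f] = valid
            if not valid:  # no support left: stop believing f too
                del justifications[f]
                changed = True

# Directly asserted facts (from a foaf file we currently trust)...
assert_fact("joe knows x")
assert_fact("x knows jane")
# ...and a fact we only derived from one of them.
assert_fact("jane is a Person", derived_from={"x knows jane"})

retract("x knows jane")
# "jane is a Person" loses its only support and is retracted as well;
# "joe knows x" was asserted directly and survives.
```

A fact that is both asserted directly and derived keeps its direct justification when the derivation disappears, which is exactly the tricky case mentioned above.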

So what is needed to do this well is the following:

  • a notion of separate graphs/contexts
  • the ability to easily take the union of several graphs of statements and to query that union
  • defeasible inferencing or backtracking reasoning
  • flexible inferencing would be best. I like N3 rules, where one can make statements about rules belonging to certain types of graphs. For example it would be great to be able to write a rule such as { ?g a :Trusted. ?g => { ?a ?r ?b } } => { ?a ?r ?b }, a rule to believe all statements that belong to trusted graphs.
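The effect of that last rule can be sketched as a query that only consults the union of the graphs currently marked trusted. The quads, graph names and URLs below are invented for the example; the point is that distrusting a graph changes what is believed without deleting anything from the store:

```python
# Sketch: believe only the statements in the union of trusted graphs.
# Quads are (subject, predicate, object, graph); all names are invented.

quads = [
    (":joe",  ":knows", ":jane",  "http://example.org/joe.rdf"),
    (":jane", ":knows", ":jim",   "http://example.org/jane.rdf"),
    (":joe",  ":knows", ":elvis", "http://example.org/gossip.rdf"),
]
trusted = {"http://example.org/joe.rdf",
           "http://example.org/jane.rdf",
           "http://example.org/gossip.rdf"}

def query(s=None, p=None, o=None):
    """Match a triple pattern (None is a wildcard) over trusted graphs only."""
    return {(qs, qp, qo) for qs, qp, qo, g in quads
            if g in trusted
            and s in (None, qs) and p in (None, qp) and o in (None, qo)}

before = query(s=":joe")  # while gossip.rdf is trusted, :joe knows :elvis

# Stop trusting the gossip graph: non-monotonic change, no deletion needed.
trusted.discard("http://example.org/gossip.rdf")

after = query(s=":joe")   # the gossip statement is no longer believed
```

Real engines would of course index the quads and push the graph filter into the query plan, but the shape of the solution is the same: trust is a property of graphs, and belief is computed at query time.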

From my incomplete studies (please let me know if I am wrong) none of the Java frameworks is ideal for this yet, but Jena looks closest at present. It has good reasoning support, but I am not sure it yet makes it easy to reason over contexts. Sesame is building up support for contexts, but has no reasoning abilities right now in version 2.0. Mulgara has, very foresightedly, always had context support, but I am not sure whether it has Prolog-like backtracking reasoning.

[1] “Inferencing and Truth Maintenance in RDF Schema - a naive practical approach” by Jeen Broekstra and Arjohn Kampman

