Are OO languages Autistic?

[Illustration: the Sally and Anne puppet play]

One important criterion of Autism is the failure to develop a proper theory of mind.

A standard test to demonstrate mentalizing ability requires the child to track a character's false belief. This test can be done using stories, cartoons, people, or, as illustrated in the figure, a puppet play, which the child watches. In this play, one puppet, called Sally, leaves her ball in her basket, then goes out to play. While she is out, naughty Anne moves the ball to her own box. Sally returns and wants to play with her ball. The child watching the puppet play is asked where Sally will look for her ball (where does Sally think it is?). Young children aged around 4 and above recognize that Sally will look in the basket, where she (wrongly) thinks the ball is.
Children with autism will tend to answer that Sally will look for the ball in the box.

Here are two further descriptions of autism from today's version of the Wikipedia article:

The main characteristics of autism are impairments in social interaction, impairments in communication, restricted interests and repetitive behavior.
Sample symptoms include lack of social or emotional reciprocity, stereotyped and repetitive use of language or idiosyncratic language, and persistent preoccupation with parts of objects.

In order to have a theory of mind one needs to be able to understand that other people may have a different view of the world. On a narrow three-dimensional understanding of 'view', this reveals itself in the fact that people at different locations in a room will see different things. One person may be able to see a cat behind a tree that is hidden from another. In some sense though these two views can easily be merged into a coherent description. They are not contradictory. But we can do the same in higher dimensions. We can think of people as believing themselves to be in one of a number of possible worlds. Sally believes she is in a world where the ball is in the basket, whereas Ann believes she is in a world where the ball is in the box. Here the worlds are contradictory. They cannot both be true of the actual world.

To be able to make this type of statement one has to be able to do at least the following things:

  • Speak of ways the world could be
  • Refer to objects across these worlds
  • Compare these worlds
The ability to do this is present by default in none of the well-known Object Oriented (OO) languages. One can add it, just as one can add garbage collection to C, but it requires a lot of discipline and care. It does not come naturally. Perhaps a bit like a person with Asperger's syndrome can learn to interact socially with others, but in a reflective, awkward way.

Let us illustrate this with a simple example. Let us see how one could naively program the puppet play in Java. Let us first create the objects we will need:

Person sally = new Person("Sally");
Person ann = new Person("Ann");
Container basket = new Container("Basket");
Container box = new Container("Box");
Ball ball = new Ball("b1");
Container room = new Container("Room");
So far so good. We have all the objects. We can easily imagine code like the following to add the ball into the basket, and the basket into the room.
basket.add(ball);
room.add(basket);
Perhaps we have methods whereby the objects can ask what their container is. This would be useful for writing code to make sure that a thing could not be in two different places at once - in the basket and in the box, unless the basket was in the box.
Container c = ball.getImmediateContainer();
Assert.true(c == basket);
try {
      box.add(ball);
      Assert.fail();
} catch (InTwoPlacesException e) {
}
All that is going to be tedious coding, full of complicated issues of their own, but it's the usual stuff. Now what about the beliefs of Sally and Ann? How do we specify those? Perhaps we can think of sally and ann as being small databases of objects they are conscious of. Then one could just add them like this:
sally.consciousOf(basket,box,ball);
ann.consciousOf(basket,box,ball);
But the problem should be obvious now. If we move the ball from the basket to the box, the state of the objects in sally and ann's database will be exactly the same! After all they are the same objects!
basket.remove(ball);
box.add(ball);
Ball sb = sally.get(Ball.class,"b1");
Assert.true(box.contains(sb));
//that is because
Ball ab = ann.get(Ball.class,"b1");
Assert.true(ab==sb);
There is really no way to change the state of the ball for one person and not for the other... unless perhaps we give the two people different objects. This means that for each person we would have to make a copy of all the objects that they could think of. But then we would have a completely different problem: namely deciding when two such objects were the same. For it is usually understood that the equality of two objects depends on their state. So one would not usually think that a physical object could be the same if it was in two different physical places. Certainly if we had a ball b1 in a box, and another ball b2 in a basket, then what on earth would allow us to say we were speaking of the same ball? Perhaps their name, if we could guarantee that we had unique names for things. But we would still have some pretty odd things going on: we would have objects that would somehow be equal, but would be in completely different states! And this is just the beginning of our problems. Just think of the dangers involved here in taking an object from Ann's belief database, and how easy it would be, by mistake, to allow it to be added to Sally's belief store.
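The equality puzzle can be shown in a few lines of Java. The sketch below uses a hypothetical Ball class (not taken from the code above) whose equals() is state-based, as is conventional for value-like objects: as soon as the two copies' states diverge, the "same" ball is no longer equal to itself.

```java
import java.util.Objects;

// Hypothetical Ball class, for illustration only: equality depends on state.
class Ball {
    final String name;
    String location; // mutable state: where this copy of the ball is

    Ball(String name, String location) {
        this.name = name;
        this.location = location;
    }

    // State-based equality, as is usual for value-like objects.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Ball)) return false;
        Ball b = (Ball) o;
        return name.equals(b.name) && location.equals(b.location);
    }

    @Override
    public int hashCode() { return Objects.hash(name, location); }
}

class SallyAnne {
    public static void main(String[] args) {
        Ball sallysBall = new Ball("b1", "basket"); // Sally's copy
        Ball annsBall = new Ball("b1", "basket");   // Ann's copy
        System.out.println(sallysBall.equals(annsBall)); // true: same name, same state

        annsBall.location = "box"; // Ann watches the ball being moved
        // Same name, supposedly the same ball, yet the objects are no longer equal:
        System.out.println(sallysBall.equals(annsBall)); // false
    }
}
```

The only thing still linking the two objects is the shared name "b1", which is exactly the point made below about names and, globally, URIs.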

These are not minor problems. These are problems that have dogged logicians for the last century or more. To solve them properly one should look for languages that were inspired by the work of such logicians. The most serious such project is now known as the Semantic Web.

Using N3 notation we can write the state of affairs described by our puppet show, and illustrated by the above graph, out like this:

@prefix : <http://test.org/> .

:Ann :believes { :ball :in :box . } .
:Sally :believes { :ball :in :basket . } .

N3 comes with a special notation for grouping statements by placing them inside of { }. We could then easily ask who believes the ball is in the basket using SPARQL

PREFIX : <http://test.org/>
SELECT ?who
WHERE {
     GRAPH ?g1 { :ball :in :basket }
     ?who :believes ?g1 .
}

The answer would bind ?who to :Sally, but not to :Ann.

RDF therefore gives us the basic tools to escape from the autism of simpler languages:

  • One can easily refer to the same objects across contexts, as URIs are the basic building block of RDF
  • The basic units of meaning are sets of relations - graphs - and these are formally described.
The above allows one to query for objects across contexts, and so to compare, merge, and work with contexts.
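This comparison of contexts can be made concrete with a query that sets belief graphs against the actual state of the world. The sketch below assumes, beyond the post's example data, that reality itself is stored as a named graph called :reality - a convention, not something the post defines:

```sparql
PREFIX : <http://test.org/>
SELECT ?who ?believed
WHERE {
     GRAPH :reality { :ball :in ?actual }
     GRAPH ?g { :ball :in ?believed }
     ?who :believes ?g .
     FILTER ( ?believed != ?actual )
}
```

After Anne moves the ball, such a query would return Sally with ?believed bound to :basket - her false belief, found simply by comparing her world with the actual one.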

It is quite surprising, once one realizes this, to think how many languages claim to be web languages, and yet fail to have any default space for the basic building blocks of the web: URIs and the notion of different points of view. When one fetches information from a remote server one just has to take into account the fact that the server's view of the world may be different and incompatible in some respects with one's own. One cannot in an open world just assume that everybody agrees with everything. One is forced to develop languages that enable a theory of mind. A lot of failures in distributed programming can probably be traced back to working with tools that don't.

Of course tools can be written in OO languages to work with RDF. Very good ones have been written in Java, such as Sesame, making it possible to query repositories for beliefs across contexts (see this example). But they bring to bear concepts that don't sit naturally with Java, and one should be aware of this. OO languages are good for building objects such as browsers, editors, simple web servers, transformation tools, etc... But they don't make it easy to develop tools that require just the most basic elements of a theory of mind, and so most things to do with communication. For that one will have to use the work done in the semantic web space and familiarize oneself with the languages and tools developed for working with them.

Finally, the Semantic Web also has its OO style in the Web Ontology Language (OWL). This is just a set of relations for describing classes and relations. Notice though that it is designed for intra-context inference, i.e. the inferences one can make within a world. So even at the Semantic Web layer, thinking in OO terms does not seem to touch on thinking across contexts, or mentalizing. Mind you, since people deal with objects, it is also important to think about objects in order to understand people. But that is just one part of the problem.
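As a sketch of that limitation, reusing the example vocabulary above (the :Toy class is an assumption added for illustration): RDFS/OWL inference completes the picture inside a single graph, but carries nothing across belief graphs.

```n3
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix : <http://test.org/> .

:Ball rdfs:subClassOf :Toy .

:Sally :believes { :ball a :Ball . } .

# Within the graph that Sally believes, a reasoner may add { :ball a :Toy },
# since subclass reasoning is intra-context. But no OWL axiom licenses a
# conclusion about what :Ann believes from what :Sally believes.
```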


Comments:

I don't see the problem with equality of balls. Surely, if Sally gives up looking in the basket and discovers another, very similar, ball in the box, she won't be able to tell that it's the same ball either?
I'd go as far as to say that we can only be certain (assuming we aren't being tricked, of course) that two balls are different- either if we can see them both at the same time, in different locations, or if some immutable property of a ball is different.

What am I missing here? :)

P.S. It seems a shame that in SPARQL the query has to be two lines, after seeing the elegant N3 way of writing a compound statement.

Posted by Joseph Razavi on September 17, 2008 at 07:12 AM CEST #

Joseph,

ok, so let us explore more carefully what would be needed to fill up Sally's and Ann's belief store, if we duplicate every object they know about.

Let us create Sally's world view:

Container sally_basket = new Container("Basket");
Container sally_box = new Container("Box");
Ball sally_ball = new Ball("b1");
sally.consciousOf(sally_basket,sally_box,sally_ball);

And let us create Ann's world view:

Container ann_basket = new Container("Basket");
Container ann_box = new Container("Box");
Ball ann_ball = new Ball("b1");
ann.consciousOf(ann_basket,ann_box,ann_ball);

Perhaps we now want to also create the real objects Sally and Ann are manipulating.

Container real_basket = new Container("Basket");
Container real_box = new Container("Box");
Ball real_ball = new Ball("b1");
reality.consciousOf(real_ball, real_box, real_basket);

Now when Sally adds the real_ball to the basket we can say

real_basket.add(real_ball);

We would of course have to immediately also change Sally and Ann's beliefs too

sally_basket.add(sally_ball);
ann_basket.add(ann_ball);

so here we might want to say that sally and ann's graph of beliefs are consistent with reality because

for (NamedObject anobj : ann.getAllObjs()) {
    // note: the way we find out that these objects are meant to be mapped
    // to one another is to give them the same name
    NamedObject robj = reality.getObjectOfName(anobj.getName());

    // now somehow compare the properties of robj with those of anobj;
    // they should somehow be isomorphic, though we would like it to be
    // possible for ann not to have to know everything about reality
}

Next Sally leaves the room and Anne moves the ball:

real_basket.remove(real_ball);
real_box.add(real_ball);

Since only Anne is conscious of this we have to also move her consciousness objects (quite Russellian for those of you who may have studied philosophy)

ann_basket.remove(ann_ball);
ann_box.add(ann_ball);

Now of course Anne and Sally have different graphs of objects in their heads. Now that sally_ball is in sally_basket while real_ball is in real_box, we have two balls with very different properties. So we really cannot run equals() on them to find out if they are the same. We can only link them because we gave them the same name.

btw: This shows the importance of the names in linking these objects. The Java pointer won't do, since we had to create two different objects; neither will the properties of the objects, since Sally's beliefs don't coincide with reality.

If we hang onto the names then we can see what would be needed to test if Sally's beliefs coincide with reality. We can also see what she would do if she were to look for the ball. We'd just have to work with the objects of her beliefs.

So I suppose we can make this work, but it does require a lot of duplication of objects. Every agent needs to duplicate every object it is conscious of. One also needs unique names for every object. If these communications are going to be global, only URIs will do.

As I pointed out though, all these objects are available in the same virtual machine. There is no clear distinction in the VM as to who these objects belong to. It could be all too easy for some code to accidentally take a sally object and have it interact with a reality object. From the JVM point of view these are all just objects....

In the N3, on the other hand, it is easy to place things in contexts { }. One has to create context extraction rules explicitly to merge information. Also, since we have not gained much by using Java pointers, we can remove them and identify things directly with URIs. So we both simplify the code and clarify what is going on.

Posted by Henry Story on September 17, 2008 at 08:08 AM CEST #

This may be the most ridiculous article I have ever seen.

Posted by guest on September 17, 2008 at 05:13 PM CEST #

Actually, the part I don't get about this analysis is that it seems to misplace the notion of Sally's belief.

That's not a belief about the characteristics of the \*ball\*, it's a belief about the characteristics of the \*basket\* (i.e. that it contains the ball).

In simpler terms, Sally has a reference to the basket, and perhaps a reference to the ball as well, but if she's going to search some object in order to determine the location of the ball (the premise of the scenario), it makes more sense for her to search inside the basket object rather than query the ball object. The result will be "nope, no ball here", of course, but that's the point of the scenario.

The problem of confused databases seems to only exist because the abstractions are leaky with respect to how those objects and Sally behave in the real world.

If you model the ball to match Sally's mental image of it, the ball \*doesn't know\* where it is located. Moreover, Sally doesn't truly know where the ball is located when she returns (in the abstraction of this problem; otherwise she wouldn't need to search for it).

It makes no sense whatsoever for her to query the ball for its location, because that presumes prior knowledge of its location in order to make the query.

I.e. The ball's knowledge of its location is an \*implementation detail\* and should not be present in the interface seen by Sally.

Hence, question asked of the child is reworded to: "Towards what object does Sally's reference to the container of the ball point?"

Phrased that way I suspect even autistic children might answer correctly. Asking where she will look is a question about human motivation, which is usually short-circuited in the autistic.

Posted by Ray Trent on September 17, 2008 at 05:46 PM CEST #

The sally and ball picture is actually misleading. At first I thought "look in the box!". How silly of me, a 4 year old would know better!

What I realise soon after is that the basket without a ball is noticeably different from the basket with a ball, and my eyes have picked this up for just a fraction of a second.

This method will actually inadvertently pick up autistics anyway, since they are generally said to be good at spot-the-difference, but if you want a fair test, make a picture with two boxes next time.

Posted by Ali on September 17, 2008 at 06:18 PM CEST #

This is silly: your SPARQL example models belief DIRECTLY, yet in your OO model you don't model belief, you just model objects. Belief is another semantic layer and it isn't in your OO model. SPARQL has belief because you made it explicit; it is a relationship that is separate from the physical relationship. Your OO model ONLY modelled the physical relationship and not beliefs; that's why it fell apart.

Posted by Tom DeGene on September 17, 2008 at 10:41 PM CEST #

The example makes no sense... the real ball is only one and is either in the basket OR in the box.
Ann and Sally's perceptions/beliefs can be different and I would model it differently...

Posted by Gabriel C. on September 17, 2008 at 11:23 PM CEST #

This is an autistic question. Because it assumes an anti pattern for location/persistence ownership and identity at the same time.

You have implied location/persistence as the box. But its mostly location and you have asked the question, "where is it" and yet postulated that OOP would somehow duplicate the object.

You have also complicated the issue by implying ownership. Wrong again.

This is absurd. Clearly if the "ball" was a true object, its location would be attributable via a relationship to possible locations or an attribute of "stored" balls.

To go through a location search is clearly stupid, if you could just ask the ball (you remember the ball, don't you) where it is. You don't have to possess the state of the ball to remember it, do you?

If doing a ball search is not OOP enough for you then you have not properly represented the idea of location, balls or the fact that your participants have a memory of all these things

Because a clear approach without true ball identity would be to search for it in known locations not in the last location it existed.

This is flawed logic. If you want a purely stateless system in which the ball can be located then assigning it a URL may seem cool to you but where did the state to store the URL come from and by what mechanism is that state refreshed when the location changes?

You said ann puts the ball in "her" basket but then it gets put into "the" box. Clearly 2 independent concepts of storage and location.

Please get a grip. This is not OOP or the semantic web; it's just your own lack of a clue about how to describe a problem and solve it.

Posted by bob on September 17, 2008 at 11:43 PM CEST #

I strongly think you have taken a "wrong" approach. The approach cannot actually be called wrong, but it was the way in which you decided to do your object modelling. Here, you are only modelling the "real world". (That is often enough for practical computer applications.) If you wanted to do object modelling at this level, you would want to do what the brain does. It keeps a copy of what it thinks is the world. So, Sally has a copy of the real world, which is mostly a small subset of the real world (and may be different from objects in the real world); and so does Ann. There is one global copy of real world. Sally / Ann would find out differences only when they compare a subset of the real world with their copy of the world.

ie: sally.look( box );
and look would be:
Vector<Object> look( Container where )
{
return RealWorld.get(where).getAllObjects();
}

and then sally would:
sally.take(ball, box);
or sally.search(ball, box);

boolean take( Object what, Container fromWhere )
{
    Vector<Object> os = sally.look(fromWhere);
    if (!os.contains(what))
    {
        if (myWorld.get(fromWhere).contains(what))
            // My world has it, but not the real world.
            throw new MismatchException();
        else
            // My world doesn't have it either, so no worries.
            return false;
    }
    os.remove(what);
    return true;
}

And sally would probably catch MismatchException
and do:

sally.cry();
sally.cry();
sally.cry();

Posted by Razee Marikar on September 18, 2008 at 12:33 AM CEST #

He he! I like reading posts like this; they mislead people into thinking there is some defined way of arriving at a result (or not!), some theory so to speak. But in the context of the real world, what actually happens is something akin to "if it can happen it will". What if, when Sally looks in the box, thinking that that is a logical conclusion to her problem, she finds a cat?

sally.scream()

oops!

http://en.wikipedia.org/wiki/Probability_theory

Posted by Martin McEvoy on September 18, 2008 at 04:54 AM CEST #

Hi all thanks for your comments. I will reply shortly to a few here. First I would like to suggest people look quickly at the documentation of Sesame, in particular
http://openrdf.org/doc/sesame2/users/ch08.html#d0e1218

-------------------

Ray Trent said:
[[
Actually, the part I don't get about this analysis is that it seems to misplace the notion of Sally's belief.

That's not a belief about the characteristics of the \*ball\*, it's a belief about the characteristics of the \*basket\* (i.e. that it contains the ball).
]]

Ray it could be both. Her beliefs are about the basket, the ball and the relation they are in.

[[
In simpler terms, Sally has a reference to the basket, and perhaps a reference to the ball as well, but if she's going to search some object in order to determine the location of the ball (the premise of the scenario), it makes more sense for her to search inside the basket object rather than query the ball object. The result will be "nope, no ball here", of course, but that's the point of the scenario.
]]

The point of the question is not so much how she is going to search, but how she is going to decide what she is going to do. What does she believe about the world? This comes before she searches. She wants the ball. Why does she first look in the basket? What she really wants to do is run the query

SELECT ?container
WHERE { :ball :in ?container . }

[[
Hence, question asked of the child is reworded to: "Towards what object does Sally's reference to the container of the ball point?"
]]

That would require the child of whom you are asking the question to have a theory of the state of Sally's mind. That is, the child needs to have an idea both of her own thoughts and of the thoughts of that other child, since she cannot peek inside Sally's brain. This is what autistic children have trouble with. And it is also not so easy to do off the bat in Java, that is, without a library such as Sesame.

--------------------

Razee Marikar:

your example is nice code to model how we would act _on_ the world. But it does not show how you would model how other people are thinking about the world. After all, the little boy - autism is more prevalent in boys - watching the puppet show is asked what he thinks the Sally character is going to do. So he has to realise that the way he thinks the world is differs from the way the puppet thinks of it.

Posted by Henry Story on September 18, 2008 at 05:48 AM CEST #

Does Sally believes in God ?
ha ha ha

wrong domain !

Posted by giku on September 18, 2008 at 06:29 AM CEST #

I really like your analysis of OOP and its relation to autism. I have never thought of it in such a way, but it does make a lot of sense (when I am able to remove my preconceptions). However, why is autism bad in this scenario (if you are even implying it)? I can understand that if our perspective is not an omniscient one then this can fail us. Would you please provide an applied example of the problem at hand with relation to your point on the Semantic Web? Thanks a lot!

Posted by Ryan on September 18, 2008 at 03:09 PM CEST #

Ryan wrote:
> Would you please provide an applied example of the problem at hand with relation to your point on the Semantic Web?

Thanks for asking Ryan. Yes there are a lot of examples. The following article "Extending the Representational State Transfer (REST) Architectural Style for Decentralized Systems" which you can find here
http://portal.acm.org/citation.cfm?id=999447
makes the point about how the distance between the source of a message and the recipient of a message makes perfect immediate communication impossible, if you think of it as resources having the same access to an object. But if you think of it as message passing then you can do some interesting things... Ok I read that quickly, but that is what made me decide to write this article out today.

In the AddressBook I am writing, which I describe in an audio slide cast here:
http://blogs.sun.com/bblfish/entry/building_secure_and_distributed_social
I need to get data from distributed places around the web. This can only be done seriously if you accept that there will be spammers, liars, and just simply wrong data out there. So though you may by default merge data, you may want to make it easy to unmerge it too. I wrote about that in more detail here:
http://blogs.sun.com/bblfish/entry/beatnik_change_your_mind

As I said, if you are writing tools, that you can think of as physical, mechanical objects, that don't have to have points of view on the universe, say if you are writing a web browser, a calculator, or some such thing, then this is not important. But as soon as you want to mesh the information on the web, you will need to take the opinion of others into account. We are fast moving to a world where this is going to become more and more important.

In any case it is good to know the limitations of your tools. :-)

Posted by Henry Story on September 18, 2008 at 04:53 PM CEST #

To the article writer: it's amazing that someone with so much knowledge can make so little use of it. You just decided to use one (non-obvious) approach to modelling the situation, and drew very wide conclusions from the narrow-minded initial premises.

Posted by Freddy Fish on September 19, 2008 at 02:00 AM CEST #

Freddy Fish wrote:
> You just decided to use one (non-obvious) approach to modelling the situation, and drew very wide conclusions from the narrow-minded initial premises.

I am basing this off core work in logic and philosophy on belief contexts and looking at how this applies to OO programming. There is of course an OO way to model this, and that is to use RDF frameworks such as http://openrdf.org/

Do you have other suggestions?

Posted by Henry Story on September 19, 2008 at 02:35 AM CEST #

Hi Henry,

This is a really interesting post. I like looking at this problem as a theory of mind issue. Anytime you want to support multiple models of a domain or allow people to float hypotheses, you have to have some mechanism for encoding belief.

That said, I'm not convinced there's something intrinsic in OOP languages that won't let you do that. I think your OOP example just doesn't model belief, and that you could add in a belief layer using Java. And it's certainly possible to fail to model it using RDF. But I agree that it's easy to do in RDF, and probably much more lightweight than implementing something in an OOP language. RDF was built to accommodate distributed, decentralized knowledge. We always talk about using it to aggregate data from different sources. This is the opposite side of the coin -- different people describing the same thing in different ways, which you may or may not be able to integrate into a cohesive state.

Posted by Sharon on September 19, 2008 at 02:57 AM CEST #

Since you seem to be getting a lot of questions about this post, I thought I'd say that since you explained my question to me I have begun to understand ( I hope ;) ), so it's quite possible to :).

Posted by Joseph Razavi on September 19, 2008 at 01:00 PM CEST #

Hmmmm. Parts of your intriguing article made me think of erlang.

"When one fetches information from a remote server one just has to take into account the fact that the server's view of the world may be different and incompatible in some respects with one's own. One cannot in an open world just assume that every body agrees with everything. One is forced to develop languages that enable a theory of mind. A lot of failures in distributed programming can probably be traced down to working with tools that don't."

Erlang was created for coding highly fault-tolerant (and distributed) systems; characteristics stemming from this fact might make it an example of a language that 'enables a theory of mind.'

http://erlang.org/white_paper.html

Posted by Benjamin Damman on September 19, 2008 at 08:33 PM CEST #

You first modeled the "real" states of objects in OO, identified some "problems", then moved to N3 notation to model \*beliefs\* of the persons. I think this is just wrong. Why not just model the same beliefs in the OO model and see if there is actually any difference?

Posted by Andy on September 20, 2008 at 09:24 AM CEST #

Andy wrote:
>Why not just model the same beliefs in the OO model and see if there is actually any difference?

Andy, that is what I do in my first and very lengthy reply to Joseph in this post. I model the beliefs of Sally, Anne, and Reality in the same way. You will have to admit that doing so is a lot more clumsy than doing it in N3. But it does show the importance of adding names to the objects. See the reply above.

Posted by Henry Story on September 20, 2008 at 11:07 AM CEST #

I think people stumble because it looks a lot like a criticism of OO languages-- one which boils down to, essentially, that work is required to get them to model a specific type of problem.
If you look at it as an interesting point of view from which to look at (and compare) OO languages and N3, it works nicely, the metaphor of autism working to reveal, from an interesting perspective, a difference between the two.

Maybe the confusion is mostly to do with seeing the article from a different point of view ;)

Posted by Joseph Razavi on September 20, 2008 at 01:13 PM CEST #

I thought I should mention that I have had quite a lot of experience building an RDF to java object mapper, and that the thoughts here are the result of my working on that project. See:

https://sommer.dev.java.net/sommer/index.html

Posted by Henry Story on September 24, 2008 at 03:00 AM CEST #

So what about the temporal aspect in all this? What happens after Sally finds out the correct location of the ball? How will RDF show the difference in her understanding of the world over a certain period of time?

Posted by rahul on September 24, 2008 at 03:22 AM CEST #

> So what about the temporal aspect in all this? What happens after Sally finds out the correct location of the ball? How will RDF show the difference in her understanding of the world over a certain period of time?

The best way to merge information would be to work directly with a four-dimensional representation of the world. Then information about the past, the present and the future can be merged nicely. Otherwise one has to use lifting rules to transform relations that are embedded in a temporal context into ones that no longer are. See:

http://blogs.sun.com/bblfish/entry/it_s_all_about_context
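One hypothetical way to sketch such temporally contexted statements in N3 (the :holdsDuring relation and the interval names are illustrative assumptions, not from the post):

```n3
@prefix : <http://test.org/> .

# Each formula is true only relative to a time, recorded explicitly:
{ :ball :in :basket . } :holdsDuring :beforeSallyLeaves .
{ :ball :in :box . }    :holdsDuring :afterAnneMovesIt .

# Sally's belief stays frozen at the earlier context:
:Sally :believes { :ball :in :basket . } .

# A lifting rule would then promote whatever holds during the current
# interval into the plain, time-free description of the world.
```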

Posted by Henry Story on September 24, 2008 at 03:49 AM CEST #

All these issues are addressed in Cyc. Anyone doing semantic modeling (aka ontological engineering) should be required to read their publications, starting with "Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project.", Lenat & Guha (1990). The "belief" problem is specifically addressed by microtheories. One of Lenat's criticisms of OWL is that it lacks many of the inference mechanisms needed to do the really rich modeling that Cyc does. But progress towards unification is being made, however slowly, and those gaps will be filled eventually.

Posted by Jim White on November 07, 2008 at 06:01 PM CET #

Jim, I completely agree that Guha's thesis on "Contexts: A Formalization and Some Applications" is key to understanding the semantic web. When one reads his thesis it is evident how cwm, N3 and more recently SPARQL have taken those lessons to heart. I wrote about this in "Keeping track of Context in Life and on the Web"

http://blogs.sun.com/bblfish/entry/it_s_all_about_context

Until I read Guha's thesis I had a lot of trouble understanding how the semantic web could work. Having read the first half of Guha's thesis (I did not bother with the detailed proofs) and learnt N3, it soon became clear how it all fits together.

The beauty is that this is completely compatible with OWL. It is just that OWL is not able to express belief contexts. Yes, thinking about objects, classes and inheritance is key to thought. OWL helps formalize some aspects of that. (I say some, because the inference relations of OWL are too rigid for many contexts. For example one wants to say that every human being has two parents that are human. But of course evolutionary theory tells us that this is not absolutely true: some of our ancestors were not human. Still, even if one were to create fuzzy or even probabilistic versions of OWL, one would not be able to construct belief contexts.) If one were to get into a conversation with either Ann or Sally in our example, the conversation would expect them to understand the objective relations expressed by OWL. But as all logicians since Frege keep rediscovering, extensional semantics by itself is not enough to capture subjectivity. One needs modal logic for this - which can be extensional if, as David Lewis does, one quantifies over possible worlds. To understand other people we have to understand that they may think the world they are living in to be different from the one we are in. All that appears quite clearly in the RDF semantics, by the way. To quote from Section 1.2

[[
The basic intuition of model-theoretic semantics is that asserting a sentence makes a claim about the world: it is another way of saying that the world is, in fact, so arranged as to be an interpretation which makes the sentence true. In other words, an assertion amounts to stating a constraint on the possible ways the world might be. Notice that there is no presumption here that any assertion contains enough information to specify a single unique interpretation. It is usually impossible to assert enough in any language to completely constrain the interpretations to a single possible world, so there is no such thing as 'the' unique interpretation of an RDF graph. In general, the larger an RDF graph is - the more it says about the world - then the smaller the set of interpretations that an assertion of the graph allows to be true - the fewer the ways the world could be, while making the asserted graph true of it.
]] http://www.w3.org/TR/rdf-mt/

Most computing until now tried to avoid the problem because (1) it is complicated, and (2) most programs were written by a close-knit team that could try to act as one thinking entity. As soon as you move to something of the scale of the web, those assumptions have to be dropped. You enter what I recently, provocatively, called the postmodern era

http://blogs.sun.com/bblfish/entry/the_coming_postmodern_era

Posted by Henry Story on November 07, 2008 at 06:35 PM CET #
