Wednesday Jan 13, 2010

Faviki: social bookmarking for 2010

faviki logo

Faviki is, simply put, the next generation of social bookmarking. "A bookmarking service? You must be kidding?!" I can hear you say in worried exasperation. "How can one innovate in that space?" Not only is it possible to innovate here; let me explain why I moved all my bookmarks from delicious over to Faviki.

Like delicious, digg, twitter and others, Faviki uses crowdsourcing to let you share interesting web pages you have found, stay up to date on a specific topic of interest, and keep your bookmarks synchronized across computers. So there is nothing new at that level. If you know delicious, you won't be disoriented.

What is new is that instead of being one crowdsourced application, it is in fact two. It builds on wikipedia to help you tag your content intelligently with concepts taken from DBpedia. Instead of tagging with strings whose meaning only you understand, and only at the time of tagging, you can have tags that make sense, backed by a real, evolving encyclopedia. Sounds simple? Don't be deceived: there is huge potential in this.

Let us start with the basics: what is tagging for? It is there to help us find information again, to categorize our resources into groups so that we can retrieve them from a rapidly growing information space. I now have close to ten years of bookmarks saved away. As a result I can no longer remember what strings I used previously to tag certain categories of resources. Was it "hadopi", "paranoia", "social web", "socialweb", "web", "security", "politics", "zensursula", "bigbrother", "1984", ...? If I tag a document about a city, should I tag it "Munich", "München", "capital", "Bavaria", "Germany", "town", "agglomeration", "urbanism", "living", ...? As time passed I found it necessary to add more and more tags to my bookmarks, hoping that I would be able to find a resource again in the future by accidentally choosing one of those tags. But clearly that is not the solution. Any of those tags could furthermore be used very differently by other people on delicious. Crowdsourcing only partially works here, because there is no clear understanding of what is meant by a tag, and there is no space to discuss it. Is "bank" the bank of a river, or the bank you put money in? Wikipedia has a disambiguation page for this, which took some time to put together. No such mechanism exists on delicious.

Faviki neatly solves this problem by using the work done by another crowdsourced application, and allowing you to tag your entries with concepts taken from there. Before you tag a page, Faviki suggests some possible DBpedia concepts that could fit its content. When you then choose the tags, the definition from wikipedia is made visible, so that you can choose which meaning of the tag you want to use. Finally, when you tag, you don't tag with a string but with a URI: the DBpedia URI for that concept. Now you can always go back and check the detailed meaning of your tags.
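To make the difference concrete, here is a tiny sketch of why URI tags beat string tags, using the "bank" ambiguity from above. The two DBpedia URIs are real concept identifiers; the bookmark URLs and the dictionaries are invented for the example, and a real service would of course store much more.

```python
# Toy sketch: string tags are ambiguous, concept-URI tags are not.
# The DBpedia URIs are real identifiers; the bookmarks are invented.

# With plain string tags, two very different pages collide under "bank":
string_tagged = {
    "bank": ["http://example.com/savings-accounts",
             "http://example.com/river-erosion-study"],
}

# With concept URIs, each meaning gets its own tag:
uri_tagged = {
    "http://dbpedia.org/resource/Bank": [
        "http://example.com/savings-accounts"],
    "http://dbpedia.org/resource/Bank_(geography)": [
        "http://example.com/river-erosion-study"],
}

def lookup(tags, tag):
    """Return the bookmarks filed under a tag."""
    return tags.get(tag, [])

# The string tag cannot distinguish the two senses...
assert len(lookup(string_tagged, "bank")) == 2
# ...while each URI tag names exactly one of them.
assert lookup(uri_tagged, "http://dbpedia.org/resource/Bank") == \
    ["http://example.com/savings-accounts"]
```

The point is only the shape of the data: once the tag is a URI, its meaning can always be looked up, and two users using the same URI are provably talking about the same concept.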

But that is just the beginning of the neatness of this system. Imagine you tag a page with a DBpedia concept URI (the user does not see this URL of course!). Then, by using the growing linked data cloud, Faviki or other services will be able to start doing some very interesting inferencing on this data. Since such a resource is known to be a town, a capital, to be in Germany which is in Europe, to have more than half a million inhabitants, to lie along a certain river, to contain certain museums, to have different names in a number of other languages, and to be related in certain ways to certain famous people (such as the current Pope)... it will be possible to improve the service to let you search in a much more generic way: you could ask Faviki for resources that were tagged with some European town and the concept Art. If you search for "München", Faviki will be able to enlarge the search to Munich, since they will be known to be tags for the same city...
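The kind of inference described above can be sketched with a minimal in-memory triple store. The Munich facts below mirror real DBpedia data (the predicate names are abbreviated, and the bookmark is invented); the idea is just that a search over labels finds the one concept URI behind all its names.

```python
# Minimal sketch of label-based search over linked data, using an
# in-memory list of (subject, predicate, object) triples.

MUNICH = "http://dbpedia.org/resource/Munich"

triples = [
    (MUNICH, "rdf:type", "dbo:Town"),
    (MUNICH, "dbo:country", "http://dbpedia.org/resource/Germany"),
    (MUNICH, "rdfs:label", "Munich"),
    (MUNICH, "rdfs:label", "München"),   # German label for the same city
]

# One bookmark, tagged with the concept URI (not with any string):
bookmarks = {"http://example.com/pinakothek-review": {MUNICH}}

def concepts_labelled(label):
    """All concept URIs carrying the given label, in any language."""
    return {s for s, p, o in triples if p == "rdfs:label" and o == label}

def search(label):
    """Find bookmarks tagged with any concept matching the label."""
    concepts = concepts_labelled(label)
    return [url for url, tags in bookmarks.items() if tags & concepts]

# A search for the German name finds the page tagged with the same
# concept as the English name, because both labels name one URI.
assert search("München") == search("Munich") == \
    ["http://example.com/pinakothek-review"]
```

The same pattern extends to the richer queries mentioned above: filtering the tag URIs by `rdf:type` and `dbo:country` triples would answer "resources tagged with some European town".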

I will leave it as an exercise to the reader to think about other interesting ways to use this structured information to make finding resources easier. Here is an image of the state of the linked data cloud 6 months ago to stimulate your thinking :-)


But think about it the other way around now. Not only are you helping your future self find information bookmarked semantically - let's use the term now - you are also making that information clearly available to wikipedia editors in the future. Consider for example the article "Lateralization of Brain Function" on wikipedia. The Faviki page on that subject is going to be a really interesting place to look for good articles on the subject appearing on the web. So with Faviki you don't have to work directly on wikipedia to participate. You just need to tag your resources carefully!

Finally, I am particularly pleased by Faviki because it is exactly the service I described on this blog 3 years ago in my post Search, Tagging and Wikis, at a time when the folksonomy meme was in full swing, threatening, according to its fiercest proponents, to put the semantic web enterprise into the dustbin of history.

Try out Faviki, and see who makes more sense.


Sunday Nov 29, 2009

Web Finger proposals overview

If all you had was an email address, would it not be nice to have a mechanism for finding someone's home page or OpenId from it? Two proposals have been put forward to show how this could be done. I will look at them and add a sketch of my own that should, I hope, lead us to a solution taking the best of both.

The WebFinger GoogleCode page explains what webfinger is very well:

Back in the day you could, given somebody's UNIX account (email address), type
$ finger 
and get some information about that person, whatever they wanted to share: perhaps their office location, phone number, URL, current activities, etc.

The new ideas generalize this to the web, following a very simple insight: the organization that owns an email address's domain is responsible for managing email at that domain, and it is the same organization that is responsible for managing the web site at that domain. So all that is needed is some machine readable pointer from that web site to a lookup service giving more information about the owner of the email address. That's it!

The WebFinger proposal

The WebFinger proposed solution showed the way so I will start from here. It is not too complicated, at least as described by John Panzer's "Personal Web Discovery" post.

John suggests a convention whereby servers have a file at the /host-meta root location of the HTTP server to describe metadata about the site. (This seems to me to break web architecture. But never mind: the resource can have a link to some file that describes a mapping from email ids to information about them.) The WebFinger solution is to have that resource be in a new application/host-meta file format (not XML, by the way). This would contain mappings of the form

Link-Pattern: <{%uri}>; 
So if you wanted to find out about me, you'd be able to do a simple HTTP GET request on the URL resulting from that pattern, which would return a representation of the user in yet another new format, application/xrd+xml.
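The pattern-expansion step is simple enough to sketch. The template URL below is hypothetical, and WebFinger's real template syntax and the acct: scheme details may differ; this only illustrates substituting a percent-encoded account URI into a Link-Pattern and getting back a per-user lookup URL.

```python
# Sketch of expanding a Link-Pattern into a concrete lookup URL.
# The template and email address are invented for the example.
from urllib.parse import quote

def expand(template, email):
    """Substitute the percent-encoded account URI into the pattern."""
    return template.replace("{%uri}", quote("acct:" + email, safe=""))

template = "http://example.com/lookup?q={%uri}"
url = expand(template, "user@example.com")
assert url == "http://example.com/lookup?q=acct%3Auser%40example.com"
```

An HTTP GET on the resulting URL would then return the user description mentioned above.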

The idea is really good, but it has three more or less important flaws:

  • It seems to require, by convention, that all web sites set up a /host-meta location on their web servers. Making such a global requirement seems a bit strong, and does not in my opinion follow web architecture. It is not up to a spec to describe the meaning of URIs, especially those belonging to other people.
  • It seems to require a non-XML application/host-meta format.
  • It creates yet another file format to describe resources, application/xrd+xml. It is better to describe resources at a semantic level using the Resource Description Framework, and not enter the format battle zone. To describe people there is already the widely known friend-of-a-friend ontology, which can easily be extended by anyone. Luckily it would be easy for the XRD format to participate in this, by simply creating a GRDDL mapping to the semantics.

All this new format creation is a real pain. New formats require new parsers, testing of the spec, mappings to semantics, etc... There is no reason to do this anymore: it is a solved problem.

But lots of kudos for the good idea!

The FingerPoint proposal

Toby Inkster, co-inventor of foaf+ssl, authored the fingerpoint proposal, which avoids the problems outlined above.

Fingerpoint defines one useful relation, sparql:fingerpoint (dereferenceable at its namespace of course, as all good linked data should be), defined as:

sparql:fingerpoint
	a owl:ObjectProperty ;
	rdfs:label "fingerpoint" ;
	rdfs:comment """A link from a Root Document to an Endpoint Document 
                        capable of returning information about people having 
                        e-mail addresses at the associated domain.""" ;
	rdfs:subPropertyOf sparql:endpoint ;
	rdfs:domain sparql:RootDocument .
It is then possible to have the root page link to a SPARQL endpoint that can be used to query very flexibly for information. Because the link is defined semantically, there are a number of ways to point to the sparql endpoint:
  • using the up-and-coming HTTP Link header,
  • using the well-tried html <link> element,
  • using RDFa embedded in the html of the page,
  • by having the home page return any other representation that may be popular or not, such as RDF/XML, N3, or XRD...
Toby does not mention those last two options in his spec, but the beauty of defining things semantically is that one is open to such possibilities from the start.
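The second of those discovery routes is easy to sketch. The rel value and URLs below are invented for the example (a real deployment would presumably use the relation's full namespace URI rather than a bare "fingerpoint" token), but the mechanics are just: parse the home page, find the matching <link> element, and read its href.

```python
# Sketch: discovering an endpoint from the html <link> element.
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect the href of the first <link> with a given rel value."""
    def __init__(self, rel):
        super().__init__()
        self.rel, self.href = rel, None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == self.rel and self.href is None:
            self.href = a.get("href")

html = """<html><head>
<link rel="fingerpoint" href="http://example.com/sparql" />
</head><body>...</body></html>"""

finder = LinkFinder("fingerpoint")
finder.feed(html)
assert finder.href == "http://example.com/sparql"
```

Once the endpoint URL is in hand, the client can direct its queries there.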

So Toby gets more power than the WebFinger proposal while inventing only one new relation! All the rest is already defined by existing standards.

The only problem one can see with this is that SPARQL, though not that difficult to learn, is perhaps a bit too powerful for what is needed. You can really ask anything of a SPARQL endpoint!

A possible intermediary proposal: semantic forms

What is really going on here? Let us think in simple HTML terms, and forget about machine readable data for a bit. If this were done for a human being, what we really would want is a page like the site's current front page: just one query box and a search button (just like Google's front page). Let me reproduce it here:

Here is the html for this form at its purest, without styling:

     <form  action='/lookup' method='GET'>
         <img src='' />
         <input name='email' type='text' value='' />         
         <button type='submit' value='Look Up'>Look Up</button>
     </form>

What we want is some way to make it clear to a robot, that the above form somehow maps into the following SPARQL query:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?homepage
WHERE {
   [] foaf:mbox ?email;
      foaf:homepage ?homepage .
}

Perhaps this could be done with something as simple as an RDFa extension such as:

     <form  action='/lookup' method='GET'>
         <img src='' />
         <input name='email' type='text' value='' />         
         <button type='submit' value='homepage' 
                sparql='PREFIX foaf: <http://xmlns.com/foaf/0.1/> 
                 GET ?homepage
                 WHERE {
                   [] foaf:mbox ?email;
                      foaf:homepage ?homepage
                 }'>Look Up</button>
     </form>

When the user (or robot) submits the form, the page he ends up on is the result of the SPARQL query where the values of the form variables have been replaced by the identically named variables in the SPARQL query. So if I entered my email address in the form, I would end up on a page which could perhaps just be a redirect to this blog... This would then be the answer to the SPARQL query
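The substitution step itself is trivial to sketch: take the query carried by the sparql attribute and replace each variable that shares a name with a form field by that field's value, quoted as a literal. The query text and field names below are illustrative.

```python
# Sketch of binding form values into the sparql-attribute query.

QUERY = """PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?homepage
WHERE {
  [] foaf:mbox ?email ;
     foaf:homepage ?homepage .
}"""

def bind(query, form_values):
    """Replace each ?name variable with the quoted form value."""
    for name, value in form_values.items():
        query = query.replace("?" + name, '"%s"' % value)
    return query

bound = bind(QUERY, {"email": "user@example.com"})
assert '"user@example.com"' in bound
assert "?homepage" in bound   # unbound variables remain to be solved
```

The remaining unbound variable (?homepage) is what the resulting page answers.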

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?homepage
WHERE {
   [] foaf:mbox "";
      foaf:homepage ?homepage .
}
(note: that would be wrong as far as the definition of foaf:mbox goes, which relates a person to an mbox, not a string... but let us pass on this detail for the moment)

Here we would be defining a new GET method in SPARQL, which finds the type of web page the form submission would end up landing on: namely, a page that is the homepage of whoever owns the email address we have.

The nice thing about this is that, as with Toby Inkster's proposal, we would only need one new relation from the home page to such a finder page. And once such a sparql form mapping mechanism is defined, it could be used in many other ways too, so that it would make sense for people to learn it. For example it could be useful to make web sites available to shopping agents, as I had started thinking about in RESTful semantic web services before RDFa was out.

But most of all, something along these lines would allow services to answer such a query with a very simple CGI, without needing to invest in a full-blown SPARQL query engine. At the same time it makes the mapping to the semantics of the form very clear. Perhaps someone has a solution to do this already. Perhaps there is a better way of doing it. But it is along these lines that I would be looking for a solution...
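A minimal sketch of that "simple CGI": a lookup table from mbox to homepage, answering the one question the form can ask, with no SPARQL engine anywhere. The addresses and pages are invented; a real service would consult its user database and return a redirect to the answer page.

```python
# Sketch: answering the homepage-lookup query without a SPARQL engine.
from urllib.parse import urlparse, parse_qs

HOMEPAGES = {
    "user@example.com": "http://example.com/blog",
}

def handle(request_uri):
    """Return an HTTP status and Location value for a lookup request."""
    query = parse_qs(urlparse(request_uri).query)
    email = query.get("email", [""])[0]
    if email in HOMEPAGES:
        # 303 See Other: redirect to the page answering the query
        return 303, HOMEPAGES[email]
    return 404, None

assert handle("/lookup?email=user@example.com") == \
    (303, "http://example.com/blog")
assert handle("/lookup?email=nobody@example.com") == (404, None)
```

Hooking this into any CGI or WSGI wrapper is straightforward; the point is that the semantics of the exchange are fixed by the form's declared query, not by the implementation behind it.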

(See also an earlier post of mine SPARQLing AltaVista: the meaning of forms)

How this relates to OpenId and foaf+ssl

One of the key use cases for such a Web Finger comes from the difficulty people have in thinking of URLs as identifiers of people. Such a WebFinger proposal, if successful, would allow people to type their email address into an OpenId login box, and from there the Relying Party (the server the user wants to log into) could find their homepage (usually the same as their OpenId page), and from there find their FOAF description (see "FOAF and OpenID").

Of course this user interface problem does not come up with foaf+ssl, because by using client side certificates, foaf+ssl does not require the user to remember his WebID. The browser does that for him - it's built in.

Nevertheless it is good that OpenId is creating the need for such a service. It is a good idea, and could be very useful even for foaf+ssl, but for different reasons: making it easy to help people find someone's foaf file from the email address could have many very neat applications, if only for enhancing email clients in interesting new ways.


It was remarked in the comments to this post that the /host-meta format is now XRD. So that removes one criticism of the first proposal. I wonder how flexible XRD is now: can it express everything RDF/XML can? Does it have a GRDDL mapping?

Wednesday Nov 25, 2009

Identity in the Browser, Firefox style

Mozilla's User Interface chief Aza Raskin just put forward some interesting thoughts on what identity in the browser could look like for Firefox. As one of the Knights in search of the Golden Holy Grail of distributed Social Networking, he believes he has found it in giving the browser more control of the user's identity.

The mock-up picture reproduced below shows how Firefox, by integrating identity information into the browser, could make it clear what persona one is logged into a site as. It would also create a common user interface for logging in to a site under a specific identity, as well as for creating a new one. Looking at the Weave Identity Account Manager project site, one finds that it would also make it easy to automatically generate passwords for each site/identity, to sync one's passwords across devices, and to change the passwords for all enabled sites simultaneously if one feared one's computer had fallen into the wrong hands. These are very appealing properties, and the UI is especially telling, so I will reproduce the main picture here:

The User Interface

One thing I very strongly support in this project is the way it makes clear to the user, in a very visible location - the URL bar - what identity he is logged in as. Interestingly this is the same location as the https information bar shown when you connect to secure sites. Here is what the URL bar looks like when connected securely to LinkedIn:

One enhancement the Firefox team could immediately work on, without inventing a new protocol, would be to reveal in the URL bar the client certificate used when connected to an https://... url. This could be done in a manner very similar to the one proposed by Aza Raskin in his Weave Account Manager prototype pictured above. This would allow the user to

  • know what HTTPS client certificate he is using to connect to a site,
  • log out of that site,
  • change the client certificate used if needed.
The last two features of TLS are currently impossible to use in browsers because of the lack of such a user interface handle. This would be a big step toward closing the growing Firefox Bug 396441: "Improve SSL client-authentication UI".

From there it would be just a small step - but one that I think would require more investigation - to use foaf+ssl to enhance the drop-down description of both the server and the client with information taken from the WebID. A quick reminder: foaf+ssl works simply by adding a WebID - which is just a URL identifying a foaf:Agent - as the subject alternative name in the X509 certificate's version 3 extensions, as shown in detail in the one page description of the protocol. The browser could then GET the meaning of that URI, i.e. GET a description of the person, by the simplest of all methods: an HTTP GET request. In the case of the user himself, the browser could use the foaf:depiction of the user to display a picture of him. In the case of the web site certificate, the browser could GET the server information at its WebID, and display the information placed there. Now if the foaf file is not signed by a CA, then the information the remote server gives about itself should perhaps be placed on a different background, or distinguished in some other way from the information in the certificate. So there are a few issues to work on here, but they involve only well developed standards - foaf and TLS - and some user interface engineers to get them right. Easier, it seems to me, than inventing a whole protocol - even though it is perhaps every engineer's desire to have developed a successful one.

The Synchronization Piece

Notice how foaf+ssl enables synchronization. Any browser can create a public/private key pair using the keygen element, and get a certificate from a WebID server. Such a server will then add that public key as an identifier for that WebID to the foaf file. Any browser that has a certificate whose public key matches the one published on the server will be able to authenticate to that server and download all the information it needs from there. This could be information

  • about the user (name, depiction, address, telephone number, etc, etc)
  • a link to a resource containing the bookmarks of the user
  • his online accounts
  • his preferences
Indeed, you can browse all the information that can be gleaned just from my public foaf file here. You will see my bookmarks taken from delicious, and my tweets and photos all collected in the Activity tab. This is just one way to display information about me. A browser could collect all that information to build up a specialized user interface, and so enable synchronization of preferences, bookmarks, and information about me.

The Security Problem

So what problem is the Weave team solving in addition to the problem solved above by foaf+ssl?

The Weave synchronization of course works in a similar manner: data is stored on a remote server, and clients fetch and publish information to that server. One thing that is different is that the Weave team wish to store the passwords for each of the user's accounts on a remote server that is not under the user's control. As a result that information needs to be encrypted. In foaf+ssl only the public key is stored on a remote server, so there is no need to encrypt that information: the private key can remain safely in the client's key chain. Of course there is a danger with the simple foaf+ssl server that the owner of the remote service can both see and change the information published remotely, depending on who is asking for it. An unreliable server could add a new public key to the foaf file, and thereby allow a malicious client to authenticate as the user on a number of web sites.

It is to solve this problem that Weave was designed: to be able to publish remotely encrypted information that only the user can understand. The publication piece uses a nearly RESTful API. This allows it to store encrypted content such as passwords, identity information, or indeed any content, on a remote server. The user would just need to remember that one password to be able to synchronize his various identities from one device to another. There is a useful trick worth highlighting: each piece of data is encrypted using a symmetric key, which is stored on the server encrypted with a public key. As a result one can give someone access to a piece of data just by publishing the symmetric key encrypted with one of her public keys.
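The key-wrapping trick has a simple structural shape, sketched below. The XOR keystream cipher is a deliberately toy stand-in for the real symmetric and asymmetric ciphers (AES, RSA) - do not use it for anything real - and the reader names and data are invented; only the key-management pattern matters: encrypt the resource once, then wrap that one key separately for each reader.

```python
# Structural sketch of per-resource symmetric keys wrapped per reader.
import hashlib, secrets

def keystream_xor(key, data):
    """Toy cipher: XOR with a SHA-256 counter keystream. NOT real crypto."""
    out = bytearray()
    for i, b in enumerate(data):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.append(b ^ block[0])
    return bytes(out)

# Encrypt the resource once with a fresh symmetric key...
resource = b"my bookmarks"
sym_key = secrets.token_bytes(32)
ciphertext = keystream_xor(sym_key, resource)

# ...then wrap that one key for each reader (the toy cipher here
# stands in for encryption under each reader's public key).
readers = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
wrapped = {name: keystream_xor(k, sym_key) for name, k in readers.items()}

# A reader unwraps the symmetric key, then decrypts the resource;
# XOR ciphers decrypt by re-applying the same operation.
alice_key = keystream_xor(readers["alice"], wrapped["alice"])
assert keystream_xor(alice_key, ciphertext) == resource
```

Granting a new reader access touches only the small wrapped-key entry, never the bulk ciphertext - which is exactly what makes the scheme practical for sharing.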

Generalization of Weave

To make the above protocol fully RESTful, it needs to follow Roy Fielding's principle that "REST APIs must be hypertext driven". As it stands, the protocol fails in this respect by forcing a directory layout ahead of time. This could be fixed by creating a simple ontology for the different roles of the elements required in the protocol: public keys, symmetric keys, data objects, etc... This would then enable the Linked Data pattern, allowing each piece of data to be anywhere on the web. Of course nothing would stop the data from being laid out the way the current specification requires. But it immediately opens up a few interesting possibilities. For example, if one wanted a group of encrypted resources to be viewable by the same group of people, one would need only one encrypted symmetric key, which each of those resources could point to, reducing duplication.

By defining both a way of getting objects and their encoding, the project reveals its status as a good prototype. To become a standard, those should be separated. That is, I can see a few separate pieces required here:

  1. An ontology describing the public keys, the symmetric keys, the encrypted contents,...
  2. Mime types for encrypted contents
  3. Ontologies to describe the contents: such as People, bookmarks, etc...
Items (1) and (2) above would by themselves be very useful in any number of scenarios. The contents of the encrypted bodies could then be left completely general, and applied in many other places. Indeed, being able to publish information on a remote untrusted server could be very useful in many different scenarios.

By separating the first two from (3), the Weave project would avoid inventing yet another way to describe a user, for example. We already have a large number of those, including foaf, Portable Contacts, vcard, and many many more... I side with data formats being RDF based, as this separates the issues of syntax and semantics. It also allows the descriptions to be extensible, so that people can think of themselves in more complex ways than those the current developers of Weave have been able to think of. That is certainly going to be important if one is to have a distributed social web.

Publishing files remotely in encrypted form does guard one against malicious servers. But I think it also reduces the usability of the data. Every time one wants to give someone access to a resource, one needs to encrypt the symmetric key for that user. If the user loses his key, one has to re-encrypt that symmetric key. By trusting the server, as foaf+ssl does, the server can encrypt the information just in time for the client requesting it. But well, these are just different usage scenarios. For encrypting passwords - which we should really no longer need - the Weave solution is certainly going in the right direction.

The Client Side Password

Finally, Weave is going to need to fill out forms automatically for the user. To do this, again, I would develop a password ontology, and then mark up the forms in such a way that the browser can deduce what pieces of information need to go where. Deciding what syntax to use to mark up the html should be a separate effort. RDFa is one solution, and I hear the HTML5 solution is starting to look reasonable now that they have removed the reverse DNS namespace requirement. In any case such a solution can be very generic, and so the Firefox engineers could go with the flow there too.

RDF! You crazy?

I may be, but so is the world. You can get a light triple store that could be embedded in Mozilla, that is open source, and that is written in C. Talk to the Virtuoso folks: here is a blog entry on their lite version. My guess is they could make it even lighter. KDE is using it...

Thursday Oct 15, 2009

November 2nd: Join the Social Web Camp in Santa Clara

The W3C Social Web Incubator Group is organizing a free Bar Camp in the Santa Clara Sun Campus on November 2nd to foster a wide ranging discussion on the issues required to build the global Social Web.

Imagine a world where everybody could participate easily in a distributed yet secure social web. In such a world every individual would control their own information, and every business could enter into a conversation with customers, researchers, government agencies and partners as easily as they can now start a conversation with someone on Facebook. What is needed to go in the direction of The Internet of Subjects Manifesto? What existing technologies can we build on? What is missing? What could the W3C contribute? What could others do? To participate in the discussion, meet other people with similar interests, and push the discussion further, visit the Santa Clara Social Web Camp wiki.

If you are looking for a reason to be in the Bay Area that week, then here are some other events you can combine with coming to the Bar Camp:

  • The W3C is meeting in Santa Clara for its Technical Plenary that week in Santa Clara.
  • The following day, the Internet Identity Workshop is taking place in Mountain View until the end of the week. Go there to push the discussion further by meeting up with the OpenId, OAuth, Liberty crowd, which are all technologies that can participate in the development of the Social Web.
  • You may also want to check out ApacheCon which is also taking place that week.

If you can't come to the west coast at all due to budget cuts, then not all is lost. :-) If you are on the East coast go and participate in the ISWC Building Semantic Web Applications for Government tutorial, and watch my video on The Social Web which I gave at the Free and Open Source Conference this summer. Think: if the government wants to play with Social Networks, it certainly cannot put all its citizens information on Facebook.

Monday Oct 12, 2009

One month of Social Web talks in Paris

Poster for the Social Web Bar Camp @LaCantine

As I was in Berlin preparing to come to Paris, I wondered if I would be anywhere near as active in France as I had been in Germany. I had lived for 5 years in Fontainebleau, an hour from Paris, close but just too far to be in the swing of things. And from that position, I got very little feel for what was happening in the capital. This is what had made me long to live in Paris. So this was the occasion to test it out: I was going to spend one month in the capital. On my agenda there was just a Social Web Bar Camp and a few good contacts.

The Social Web Bar Camp at La Cantine, which I blogged about in detail, was like a powder keg for my stay here: it launched the whole next month of talks, which I detail below. It led me to make a very wide range of contacts, which in turn led to my giving talks at 2 major conferences and 2 universities, speaking at one other Bar Camp, presenting to a couple of companies, getting an implementation of foaf+ssl in Drupal, and meeting a lot of great people.

Through other contacts, I also had an interview with a journalist from Le Monde, and met the very interesting European citizen journalism agency Cafe Babel (for more on them see this article).

Here follows a short summary of each event I presented the Social Web at during my short stay in Paris.

Friday, 18 September 2009
Arrived in plane from Berlin, and met the journalists at the Paris offices of Cafe Babel, after reading an article on them in the July/August issue of Internationale Politik, "Europa aus Erster Hand".
Saturday, 19 September 2009
Went to the Social Web Bar Camp at La Cantine, which I blogged about in detail. Here I met many people, who connected me up with the right people in the Paris conference scene, where I was then able to present. A couple of these did not work out due to calendar clashes, such as an attempted meeting with engineers and users of Elgg, a distributed Open Source social networking platform popular at universities in France and the UK.
Monday, 21 September 2009
Visited the offices of Le Monde, and had lunch with a journalist there. I explained my vision of the Social Web and the functioning of foaf+ssl. He won't be writing about it directly, he told me, but will develop these ideas over time in a number of articles. (I'll post updates here, though it is sadly very difficult to link to articles in Le Monde, as they change the URLs of their articles, put them behind a paywall after a period of time, and then don't even make an abstract available to non-paying readers.)
Friday, 25 September 2009
I visited the new offices of a startup with a history: they participated in building the web site of Ségolène Royal, the contender against Nicolas Sarkozy in the last French Presidential Elections.
There I met up with Damien Tournoud, an expert Drupal developer, explained the basics of foaf+ssl, pointed him to the Open Source project, and let him work on it. With a bit of help from Benjamin Nowack, the creator of the ARC2 Semantic Web library for PHP, Damien had a working implementation the next day. We waited a bit before announcing it the following Wednesday on the foaf-protocols mailing list.
Tuesday 29 September, 2009
La Cantine organised another Bar Camp, on a wide range of topics, which I blogged about in detail. There I met people from Google and Firefox, and reconnected with others. We also had a more open round table discussion on the Social Web.
Thursday 1st and Friday 2nd October, 2009
I visited the Open World Forum, which opened, among other things, with a track on the Semantic Desktop, "Envisioning the Open Desktop of the Future", headed by Prof. Stefan Decker, with examples of implementations in the latest KDE (K Desktop Environment).
I met a lot of people here, including Eric Mahé, previously Technology Advisor at Sun Microsystems France. In fact I met so many people that I missed most of the talks. One really interesting presentation, by someone from a major open source code search engine, explained that close to 60% of Open Source software comes from Eastern and Western Europe combined. (Anyone with a link to the talk?)
Saturday, 3rd October 2009
I presented The Social Web in French at the Open Source Developer Conference France which took place in La Villette.
I was really happily surprised to find that I was part of a 3 hour track dedicated to the Semantic Web. This started with a talk by Oliver Berger, "Bugtracking sur le web sémantique" (bug tracking on the semantic web). Oliver has been working on the Baetle ontology as part of the 2-year government-financed HELIOS project. This is something I talked about a couple of years ago and wrote about here in my presentation Connecting Software and People. It is really nice to see this evolving, and I really look forward to seeing the first implementations :-)
Oliver's talk was followed by one from Jean-Marc Vanel on software and ontology development, which introduced many of the key Semantic Web concepts.
Tuesday 6th October, morning
Milan Stankovitch, whom I had met at the European Semantic Web Conference and again at the Social Web Bar Camp, invited me to talk to the developers of a very interesting web platform that helps problem seekers find problem solvers. The introductory video is really worth watching. I gave them the talk I keep presenting, but with a special focus on how this could help them, in the longer term, make it easier for people to join and use their system.
Tuesday 6th October, afternoon
I talked and participated in a couple of round tables at the 2nd Project Accelerator on Identity at the University of Paris 1, organised by the FING. Perhaps the most interesting talk there was the one by François Hodierne, who works for the Open Source Web Applications & Platforms company, and who presented the excellent project La Distribution, whose aim is to make installing the most popular web applications as easy as installing an app on the iPhone. This is the type of software needed to make The Internet of Subjects Manifesto a reality. In a few clicks everyone should be able to get a domain name, install their favorite web software on it - Wordpress, mail, wikis, social networks, photo publishing tools - and get on with their life, whilst owning their data, so that if they later find the need to move they can, and so that nobody can kick them off their network. This will require rewriting each of the applications a little, to enable them to work with the distributed secure Social Web made possible by foaf+ssl: an application without a social network is no longer very valuable.
Thursday 9th October, 2009
Pierre Antoine Champin from the CNRS, the French national research organisation, had invited me to Lyon to present The Social Web. So I took the TGV from Paris at 10:54 and was there 2 hours later, a trip that by car would have been 464km (288.3 miles) according to Google Maps. The talk was very well attended, with close to 50 students showing up, and the session lasted two full hours: one hour of talk followed by many good questions.
After a chat and a few beers, I took the train back to Paris where the train arrived just after 10pm.
Saturday October 10, 2009
I gave a talk on the Social Web at Paris-Web, on the last day of a 3 day conference. This again went very well.
After lunch I attended two very good talks that complemented mine perfectly:
  • David Larlet gave a great presentation on Data Portability, which sparked a very lively and interesting discussion. Issues of data ownership, security, confidentiality, and centralization versus decentralization came up. One of his slides made the point very well by showing the number of Web 2.0 sites that no longer exist, some having disappeared by acquisition, others by simple technical meltdown, leaving the data of all their users lost forever. (Also see David's blog summary of Paris-Web.)
  • Right after coffee we had a great presentation on the Semantic Web by Fabien Gandon, who managed, in the limited time available to him, to give an overview of the Semantic Web stack from bottom to top, including OWL 1 and 2, Microformats, RDFa, Linked Data, and various very cool applications of it; even I learned a lot. His slides are available here. He certainly inspired a lot of people.
Tuesday, 13 October 2009
Finally I presented at the hacker space La suite Logique, housed in a very well organized, very low cost lodging space in Paris. They had presentations on a number of projects happening there:
  • One project is to build a grid by taking pieces from the remains of computers that people have brought them. They have a room stashed full of those.
  • Another project is to add wifi to the lighting, to remotely control the projectors for theatrical events taking place there.
  • There was some discussion on how to add sensors to dancers, as the Japanese artist Daito Manabe has done, in order to create a high tech butoh dance (see the great online videos).
  • Three engineers presented the robots they are constructing for a well known robot fighting competition.
Certainly a very interesting space to hang out in, meet other hackers, and get fun things done in.
All of these talks were of course framed by some great evenings out, meeting people, and much more, which I just don't have time to write down here. Those were the highlights of my month's stay in Paris. I must admit I really had no idea Paris would be so active!

Wednesday Oct 07, 2009

Sketch of a RESTful photo Printing service with foaf+ssl

Let us imagine a future where you own your data. It's all on a server you control, under a domain name you own, hosted at home, in your garage, or on some cloud somewhere. Just as your OS gets updates, all your server software will be updated and patched automatically. The user interface for installing applications may be as easy as installing an app on the iPhone (as La Distribution is doing).

A few years back, with one click, you installed a myPhoto service, a distributed version of fotopedia. You have been uploading all your work, social, and personal photos there. These services have become really popular, and all your friends are working the same way too. When your friends visit you, they are automatically and seamlessly recognized using foaf+ssl in one click. They can browse the photos you made with them, share interesting tidbits, and more... When you organize a party, you can put up a wiki where friends of your friends have write access, leave notes as to what they are going to bring, and say whether or not they are coming. Similarly your colleagues have access to your calendar, your work documents, and your business related photos. Your extended family, defined through linked data of family relationships (every member of your family just needs to describe their relation to their close family network), can see photos of your family and videos of your newborn baby, organize Christmas reunions, and tag photos.

One day you wish to print a few photos. So you go to the web site of a photo printing service. The printing service is neither a friend of yours, nor a colleague, nor family. It is just a company, and so it gets minimal access to the content on your web server. It can't see your photos, and all it may know of you is a nickname you like to use, and perhaps an icon you like. So how are you going to allow it access to the photos you wish to print? This is what I would like to sketch a solution for here. It should be very simple, RESTful, and work in a distributed and decentralized environment, where everyone owns and controls their data, and is security conscious.

Before looking at the details of the interactions detailed in the UML Sequence diagram below, let me describe the user experience at a general level.

  1. You go to the printing site after clicking on a link a friend of yours suggested on a blog. On the home page is a button you can click to add your photos.
  2. You click it, and your browser asks you which WebID you wish to use to identify yourself. You choose your personal ID, as you wish to print some personal photos. Having done that, you are authenticated, and the printing site welcomes you using your nickname and displays your icon on the resulting page.
  3. When you click a button that says "Give access to the pictures you wish us to print", a new frame is opened on your web site.
  4. This frame displays a page from your server, where you are already logged in. The page recognizes you and asks if you want to give the printing company access to some of your content. It gives you information about the company's current stock value on NASDAQ, and recent news stories about it. There is a link to more information, which you don't bother exploring right now.
  5. You agree to give access, but only for 1 hour.
  6. When your web site asks you which content you want to give it access to, you select the pictures you would like it to have. Your server knows how to do content negotiation, so even though copying each of the pictures over is feasible, you'd rather give access to the photos directly and let the two servers negotiate the best representation to use.
  7. Having done that, you drag and drop an icon representing the set of photos you chose from this frame onto a printing icon on the printing site's frame.
  8. The printing site thanks you, shows you icons of the pictures you wish to print, and tells you that the photos will be on their way to the address of your choosing within 2 hours.

In more detail then we have the following interactions:

  1. Your browser GETs the printing site's home page, which returns a page with a "publish my photos" button.
  2. You click the button, which starts the foaf+ssl handshake. The initial ssl connection requests a client certificate, which leads your browser to ask for your WebID in a nice popup, as the iPhone can currently do. The printing site then dereferences your WebId in (2a) to verify that the public key in the certificate is indeed correct. Your WebId (Joe's foaf file) contains information about you, your public keys, and a relation to your contact addition service. Perhaps something like the following:
    :me xxx:contactRegistration </addContact> .
    The printing site uses this information when it creates the resulting html page, to point you to your server.
  3. When you click "Give access to the pictures you wish us to print", you send a POST form to the <addContact> resource on your server, with the printing site's WebId in the body of the POST. The result of this POST is displayed in a new frame.
  4. Your web server dereferences the printing company's WebId, where it gets some information about the company, including from the NASDAQ URL. Your server puts this information together (4a) in the html it returns to you, asking what kind of access you want to give this company, and for how long.
  5. You give access for 1 hour by filling in the forms.
  6. You give the printing service access rights to your individual pictures using the excellent user interface available on your server.
  7. When you drag and drop the resulting icon, depicting the collection of photos accessible to the printing service, onto its "Print" icon in the other frame - which is possible with html5 - your browser sends off a request to the printing server with that URL.
  8. The printing service dereferences that URL, which names a collection of photos it now has access to, and downloads them one by one. It has access to the photos on your server after having been authenticated with its WebId using foaf+ssl. (Note: your server did not need to GET the printing service's foaf file, as it still had a fresh version in its cache.) The service builds small icons of your photos, which it puts up on its server and links to in the resulting html, before showing you the result. You can click on those previews to get an idea of what will be printed.

So all the above requires very little in addition to foaf+ssl: just one relation, to point to a contact-addition POST endpoint. The rest is just good user interface design.
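To make that one addition concrete, here is a hedged Turtle sketch of what Joe's foaf file might contain. The xxx: prefix is a placeholder namespace, as in the text above, not an agreed ontology, and the URIs are illustrative:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix xxx:  <http://example.org/ns/contact#> .  # placeholder namespace

<#me> a foaf:Person ;
    foaf:name "Joe" ;
    # the one extra relation: where remote services POST their WebId
    xxx:contactRegistration </addContact> .
```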

What do you think? Have I forgotten something obvious here? Is there something that won't work? Comment on this here, or on the foaf-protocols mailing list.


Creative Commons License sequence diagram by Henry Story is licensed under a Creative Commons Attribution 3.0 United States License.
Based on a work at

Wednesday Sep 09, 2009

RDFa parser for Sesame

RDFa is the microformat-inspired standard for embedding semantic web relations directly into (X)HTML. It is being used more and more widely, and we are starting to have foaf+ssl annotated web pages, such as Alexandre Passant's home page. This is forcing me to update my foaf+ssl Identity Provider to support RDFa.
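To recall what such markup looks like, here is a minimal RDFa (1.0, XHTML) fragment; the URIs and names are purely illustrative:

```html
<!-- yields: <#me> a foaf:Person; foaf:name "Alex"; foaf:knows <...henry#me> -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="http://example.org/people/alex#me" typeof="foaf:Person">
  <span property="foaf:name">Alex</span> knows
  <a rel="foaf:knows" href="http://example.org/people/henry#me">Henry</a>.
</div>
```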

The problem was that I have been using Sesame as my semweb toolkit, and there currently was no RDFa parser for it. Luckily I found out that Damian Steer (aka Shellac) had written a SAX-based RDFa parser for the HP Jena toolkit, which he had put up on the java-rdfa github server. With a bit of help from Damian and the Sesame team, I adapted the code to Sesame, created a git fork of the initial project, and uploaded the changes to the bblfish java-rdfa git clone. Currently all but three of the 106 tests pass without problem.

To try this out get git, Linus Torvalds' distributed version control system (read the book), and on a unix system run:

$ git clone  git://

This will download the whole history of changes of this project, so you will be able to see how I moved from Shellac's code to the Sesame rdfa parser. You can then parse Alex's home page, by running the following on the command line (thanks a lot to Sands Fish for the Maven tip in his comment to this blog):

$ mvn  exec:java -Dexec.mainClass="rdfa.parse" -Dexec.args=""

[snip output of sesame-java-rdfa compilation]

@prefix foaf: <> .
@prefix geo: <> .
@prefix rel: <> .
@prefix cert: <> .
@prefix rsa: <> .
@prefix rdfs: <> .

<> <> <> ;
        <> <> , 
                     <> .

<> rdfs:label "About"@en .

<> a foaf:Person ;
        foaf:name "Alexandre Passant"@en ;
        foaf:workplaceHomepage <> , 
                               <> ;
        foaf:schoolHomepage <> , 
                            <> ;
        foaf:topic_interest <> ,
                            <> ;
        foaf:currentProject <> , 
                <> ;
        <> """
\\nDr. Alexandre Passant is a postdoctoral researcher at the Digital Enterprise Research Institute, National University
of Ireland, Galway. His research activities focus around the Semantic Web and Social Software: in particular, how these
fields can interact with and benefit from each other in order to provide a socially-enabled machine-readable Web,
leading to new services and paradigms for end-users. Prior to joining DERI, he was a PhD student at Université 
Paris-Sorbonne and carried out applied research work on \\"Semantic Web technologies for Enterprise 2.0\\" at
Electricité De France. He is the co-author of SIOC, a model to represent the activities of online communities on the
Semantic Web, the author of MOAT, a framework to let people tag their content using Semantic Web technologies, and
is also involved in various related applications as well as standardization activities.\\n"""@en ;
        foaf:based_near <> ;
        geo:locatedIn <> ;
        rel:spouseOf <> ;
        foaf:holdsAccount <> ,
                          <> ,
                          <> , 
                          <> , 
                          <> .

<> a rsa:RSAPublicKey ;
        cert:identity <> .

_:node14efunnjjx1 cert:decimal "65537"@en .

<> rsa:public_exponent _:node14efunnjjx1 .

_:node14efunnjjx2 cert:hex "8af4cb6d6ec004bd28c08d37f63301a3e63ddfb812475c679cf073c4dc7328bd20dadb9654d4fa588f155ca05e7ca61a6898fbace156edb650d2109ecee65e7f93a2a26b3928d3b97feeb7aa062e3767f4fadfcf169a223f4a621583a7f6fd8992f65ef1d17bc42392f2d6831993c49187e8bdba42e5e9a018328de026813a9f"@en .

<> rsa:modulus _:node14efunnjjx2 .


This graph can then be queried with SPARQL, merged with other graphs, and just as it links to other resources, those can in turn link back to it, and to elements defined therein. As a result Alexandre Passant can then use this in combination with an appropriate X509 certificate to log into foaf+ssl enabled web sites in one click, without needing to either remember a password or a URL.
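For example, assuming the cert: and rsa: prefixes above resolve to the W3C authentication ontologies, a query retrieving the public key claimed by a given WebId (the WebId URI here is illustrative) might look like:

```sparql
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
PREFIX rsa:  <http://www.w3.org/ns/auth/rsa#>

SELECT ?modulus ?exponent
WHERE {
    # match the key structure produced in the graph above
    ?key a rsa:RSAPublicKey ;
         cert:identity <http://example.org/people/alex#me> ;
         rsa:modulus         [ cert:hex ?modulus ] ;
         rsa:public_exponent [ cert:decimal ?exponent ] .
}
```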

Friday Jul 24, 2009

How to write a simple foaf+ssl authentication servlet

After having set up a web server so that it listens to an https socket that accepts certificates signed by any Certification Authority (CA) (see the Tomcat post), we can write a servlet that uses these retrieved certificates to authenticate the user. I will detail one simple way of doing this here.

Retrieving the certificate from the servlet

In Tomcat compatible servlets it is possible to retrieve the certificates used in a connection with the following code:

protected void doGet(HttpServletRequest request, HttpServletResponse response)
             throws ServletException, IOException {
       X509Certificate[] certificates = (X509Certificate[]) request
             .getAttribute("javax.servlet.request.X509Certificate");
       ...
}

Verifying the WebId

This can be done very easily by using a class such as DereferencingFoafSslVerifier (see source), available as a maven project from the so(m)mer repository (in the foafssl/ directory).

Use it like this:

  Collection<? extends FoafSslPrincipal> verifiedWebIDs = null;

  try {
     FoafSslVerifier FOAF_SSL_VERIFIER = new DereferencingFoafSslVerifier();
     verifiedWebIDs = FOAF_SSL_VERIFIER.verifyFoafSslCertificate(foafSslCertificate);
  } catch (Exception e) {
     redirect(response,...); // redirect appropriately
  }

If the certificate is authenticated by the WebId, you will end up with a collection of FoafSslPrincipals, which can be used as identifiers for the user who just logged in. Otherwise you should redirect the user to a page enabling him to log in with either OpenId or the usual username/password pair, or point him to a page such as this one where he can get a foaf+ssl certificate.

For a complete example application that uses this code, have a look at the Identity Provider Servlet, which is running online. (Note: this servlet contains a workaround for an iPhone bug; ignore that code for the moment.)


The current library is too simple and has a few gaping usability holes. Some of the most evident are:

  • No support for rdfa or turtle formats.
  • The Sesame RDF framework/database should be run as a service, so that it can be queried directly by the servlet. Currently the data gathered from the foaf file is lost as soon as the FOAF_SSL_VERIFIER.verifyFoafSslCertificate(foafSslCertificate); method returns. This is OK for an Identity Provider Servlet, but not for most other servers. A Java/RDF mapper such as the So(m)mer mapper would then make it easy for Java programmers to use the information in the database to personalize the site with the information given in the foaf file.
  • Develop an access control library that makes it easy to specify, declaratively, which resources can be accessed by which groups of users. It would be useful, for example, to be able to specify that a number of resources can be accessed by friends of someone, or friends of friends of someone, or family members, ...

But this is good enough to get going. If you have suggestions on the best way to architect some of these improvements so that we have a more flexible and powerful library, please contact me. I welcome all contributions. :-)

Thursday Jul 23, 2009

How to setup Tomcat as a foaf+ssl server

foaf+ssl is a standards based protocol enabling one click identification/authentication to web sites, without requiring the user to enter either a username or a password. It can be used as a global distributed access control mechanism, and it works with current browsers. It is RESTful, working with Linked Data and especially linked foaf files, thereby enabling distributed social networks.

I will show here what is needed to get foaf+ssl working on Tomcat 6.x. The general principles are documented on the Tomcat ssl howto page, which should be used for detailed reference. Here I will document the precise setup needed for foaf+ssl. If you want to play with this protocol quickly without bothering with this procedure, I recommend using the foaf+ssl Identity Provider service, which you can point to from your web pages, and which will then redirect your users to the service of your choosing with the URL-encoded WebId of your visitor.

foaf+ssl works by having the server request a client certificate on an https connection. The server therefore needs an https end point which can be specified in Tomcat by adding the following connector to the conf/server.xml file:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="50" scheme="https" secure="true" />
Note: the default https port is 443, but it requires root privileges.

Servers authenticate themselves by sending the client a certificate signed by a well known Certificate Authority (CA) whose public key ships with all browsers. Browsers use the public key to verify the signature sent by the server. If the server sends a certificate that is not signed by one of these CAs (perhaps it is self-signed), the web browser will usually display a pretty ugly error message, warning the user to stay clear of that site, with some complex way of bypassing the warning which, if the user is courageous and knowledgeable enough, will allow him to add the certificate to a list of trusted certs. This warning will put most people off, so it is best to buy a CA-certified cert. (I found one for €15 at trustico.) Usually the CAs will have very detailed instructions for installing the cert on a wide range of servers. In the case of Tomcat you will end up with the following additional attribute values:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="50" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks"
           keystoreType="JKS" keystorePass="changeme" />

And of course this requires placing the server cert file at the keystoreFile path.

There are usually two ways for the server to respond to the client not sending a (valid) certificate: it can simply fail, or it can allow the server app to decide what to do. Automatic failure is not a good option, especially for a login service, as the user will then be confronted with a blank page. Much better is to allow the server to redirect the user to another page explaining how to get a certificate, and giving him the option of authenticating using OpenId or simply the well known username/password pattern. To enable Tomcat to respond this way you need to add the clientAuth="want" attribute value pair:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="50" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks"
           keystoreType="JKS" keystorePass="changeme"
           sslProtocol="TLS" clientAuth="want" />

Most Java web servers, on receiving a client certificate, attempt to validate it automatically, verifying that it is correctly signed by one of the CAs shipped with the Java Runtime Environment (JRE), that the cert is still valid, and so on. As the SSL library that ships with the JRE does not implement foaf+ssl, we need to do the authentication at the application layer, and therefore to bypass the default SSL implementation. To do this Bruno Harbulot put together the JSSLUtils library, available on Google Code. As mentioned on the JSSLUtils Tomcat documentation page, this will require you to place two jars in the Tomcat lib directory: jsslutils-0.5.1.jar and jsslutils-extra-apachetomcat6-0.5.2.jar (the version numbers may differ as the library evolves). You will also need to specify the SSLImplementation in the conf file as follows:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           <!-- implementation class as per the JSSLUtils Tomcat docs -->
           SSLImplementation="org.jsslutils.extra.apachetomcat6.JSSLutilsImplementation"
           maxThreads="50" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks"
           keystoreType="JKS" keystorePass="changeme"
           sslProtocol="TLS" clientAuth="want" />

Usually servers send the client a list of Distinguished Names of certificate authorities (CAs) they trust, so that the client can filter, from the certificates available in the browser, those that match. Getting client certificates signed by CAs is a complex and expensive procedure, which in part explains why client certificates are very rarely requested: very few people have certificates signed by well known CAs. Instead, services that rely on client certificates tend to sign those certificates themselves, becoming their own CA, which means the certificates end up being valid for only one domain. foaf+ssl bypasses this problem by accepting certificates signed by any CA, going so far as to allow even self-signed certs. The server must therefore send an empty list of CAs, meaning that the browser may send any certificate (TLS 1.1). With the JSSLUtils library available to Tomcat, this is specified in the conf/server.xml file with the acceptAnyCert="true" attribute:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="50" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks"
           keystoreType="JKS" keystorePass="changeme"
           acceptAnyCert="true" sslProtocol="TLS" clientAuth="want" />

At this point you have set up your Tomcat server correctly. A user arriving at your SSL endpoint who has a couple of certificates will be asked to choose between them. Your server code can then extract the certificate with the following code:

       X509Certificate[] certificates = (X509Certificate[]) request
             .getAttribute("javax.servlet.request.X509Certificate");

You can then use these certificates to extract the WebId and verify it. I will write more about how to do this in my next blog post.
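As a rough sketch of the extraction step: foaf+ssl certificates carry the WebId in the certificate's Subject Alternative Name extension, as a URI entry. The helper below is hypothetical, not part of any library; it works on the structure that the standard X509Certificate.getSubjectAlternativeNames() method returns, namely a collection of [Integer type, String value] pairs, where type 6 is uniformResourceIdentifier (RFC 5280):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical helper class: not part of the foafssl library.
public class WebIdExtractor {

    // In a SAN entry, type 6 is uniformResourceIdentifier (RFC 5280).
    static final int SAN_URI = 6;

    // Collect the URI entries from the structure returned by
    // X509Certificate.getSubjectAlternativeNames(): a Collection of
    // [Integer type, String value] pairs, or null if the extension is absent.
    public static List<String> urisFrom(Collection<List<?>> sans) {
        List<String> ids = new ArrayList<String>();
        if (sans == null) return ids;
        for (List<?> entry : sans) {
            if (entry.size() >= 2 && Integer.valueOf(SAN_URI).equals(entry.get(0))) {
                ids.add(String.valueOf(entry.get(1)));
            }
        }
        return ids;
    }
}
```

In the servlet above one would call WebIdExtractor.urisFrom(certificates[0].getSubjectAlternativeNames()), catching the CertificateParsingException that the JDK method can throw.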

Monday Jul 20, 2009

two months of foaf+ssl talks

For the past one and a half months I have been traveling through Europe giving talks on foaf+ssl, the RESTful authentication protocol for the Social Web. Here is a short summary of where I have been.

18 May 2009, Salzburg Research
On my way cycling from Fontainebleau to Vienna, I stopped in Salzburg, Austria, home to the offices of the organisers of the EU-sponsored KIWI (Knowledge in a Wiki) project, in which Sun is participating. I introduced the group there to foaf+ssl, and they are now working on an implementation for their award winning semantic wiki.
20 May 2009, Semantic Web Company
Right after arriving in Vienna, I met up with Andreas Blumauer, editor of the recently published Springer book "Social Semantic Web". Hopefully my presentation will make its way in some form or another into the next edition :-). Andreas also gave me an overview of Pool Party, the powerful yet easy to use thesaurus management system they are developing.
1 June 2009, European Semantic Web Conference, Heraklion
Ian Jacobi, who had come to Crete for the occasion, helped me present the paper FOAF+SSL: RESTful Authentication for the Social Web in the SPOT track. The other papers presented in that track all fitted together very well, giving a very good overview of the topics that need to be covered in this space. I will be rereading them soon. The ESWC conference was also a great opportunity to give a number of quick one-to-one presentations by demoing foaf+ssl working on the iPhone. (Sadly the latest OS release broke the SSL stack, making my iPhone so much less useful.)
18 June, Vienna University of Technology
In Crete I met Christoph Grün who helped organize a slot to present at the Institute of Software Technology & Interactive Systems. Christoph is working on Online Tourism web services, which would be a great use case for foaf+ssl. Imagine a group of people deciding to organize an outing on a tourism wiki site, where all members of the group would get access to that outing after a simple drag and drop of a foaf:Group URL onto the outing project console.... No account setup required.
23 June, Metalab Hacker's Club, Vienna
While in Vienna I gave a presentation at the Metalab, an open meeting space for hackers from all walks of life. As it happened, a journalist from the well known French newspaper "Le Monde" was present, and wrote up an article "Les nouvelles tribus du Net" (now behind a paywall) on the lab, mentioning my presentation en passant.
2-3 July, Sun Microsystems Kiwi Meeting, Prague
The Kiwi group met in Prague for a couple of days to synchronize their work. After having won the best semantic web application prize at the European Semantic Web Conference in Crete, the mood was very positive. This was a good place to introduce the rest of the group to the potential of foaf+ssl, which is currently being implemented in Kiwi by Stefanie Stroka.
13 July, University of Leipzig
I spent a whole day with the excellent Agile Knowledge Engineering and Semantic Web team at the University of Leipzig. After an update on their latest work with DBpedia, OntoWiki, xOperator, ... I presented foaf+ssl. After lunch we spent the afternoon on a very helpful hands-on session. There are still enough rough edges in the different implementations of foaf+ssl that a bit of guidance can save a lot of time. End result: a few days later Sebastian Dietzold notified me that Philipp Frischmuth had written a first implementation, available publicly. During our session we also discovered a bug, which was soon fixed.
15 July, University of Potsdam
Hagen organised a very well attended meeting at the University of Potsdam. The questions following the talk were very good, and showed a large interest. Sadly we did not have time for a hands-on session, as my next meeting was just a few hours later. Hands-on sessions are still very important, as they help turn a talk into an experience. It helps a lot that Melvin Carvalho enhanced the certificate creation service to make it very easy to create both a foaf file and a linked certificate, so with time these hands-on sessions should become easier and shorter.
15 July, New Thinking Store, Berlin
I finished the day with a presentation at the New Thinking Store in Berlin, organized by Martin Schmidt. This was an opportunity again to present to Web 2.0 and more directly practical people.

Tuesday Apr 07, 2009

Sun Initiates Social Web Interest Group

I am very pleased to announce that Sun Microsystems is one of the initiating members of the Social Web Incubator Group launched at the W3C.

Quoting from the Charter:

The mission of the Social Web Incubator Group, part of the Incubator Activity, is to understand the systems and technologies that permit the description and identification of people, groups, organizations, and user-generated content in extensible and privacy-respecting ways.

The topics covered with regards to the emerging Social Web include, but are not limited to: accessibility, internationalization, portability, distributed architecture, privacy, trust, business metrics and practices, user experience, and contextual data. The scope includes issues such as widget platforms (such as OpenSocial, Facebook and W3C Widgets), as well as other user-facing technology, such as OpenID and OAuth, and mobile access to social networking services. The group is concerned also with the extensibility of Social Web descriptive schemas, so that the ability of Web users to describe themselves and their interests is not limited by the imagination of software engineers or Web site creators. Some of these technologies are independent projects, some were standardized at the IETF, W3C or elsewhere, and users of the Web shouldn't have to care. The purpose of this group is to provide a lightweight environment designed to foster and report on collaborations within the Social Web-related industry or outside which may, in due time affect the growth and usability of the Social Web, rather than to create new technology.

I am glad we are supporting this along with many other prestigious players.

This should certainly help create a very interesting forum for discussing what I believe is one of the most important issues on the web today.

Friday Apr 03, 2009

howto get a foaf+ssl certificate to your iPhone

In my previous post I showed that a passwordless distributed social web is already possible on the iPhone. It just requires one to upload a foaf+ssl certificate to it. Here is a relatively easy way to do this. I leave it up to the readers of this blog to build even better ways to do it.

First of course you need a foaf+ssl certificate. If you don't have a foaf file, you may want to first check out foafbuilder to create one and help you tie your distributed personas on the web together. It would be great if foafbuilder could also create those foaf+ssl certs... For the moment it doesn't, so the easiest way to get one is to use the certificate creation service. That will load the certificate right into your browser, and help you test it.

Once you have a certificate in your browser - I am assuming Firefox here - you just need to export it to the hard drive. In Firefox, go to Preferences, click on the Advanced tab, and choose the Encryption section.

Firefox encryption tag

I have a number of foaf+ssl certificates, as you can see here. Choose one of them and click the Backup button. This will open another window asking you where you wish to save your certificate. Save it somewhere obvious in pkcs12 format, and make sure the file ends with a .p12 extension. You will also be asked for a password to encrypt your certificate, so that it can't be opened in transit. You can use a complex password here, as you will only need to remember it this once.

my certificates.

Then just mail yourself that .p12 file using an account you can access on the iPhone of course. It is just a matter then of going to your iPhone, and opening your mail. In my mail I added a link to the web service I wanted to use next, to save me typing later.

mail in iphone

When you click on the .p12 link on your iPhone, it will ask you whether you wish to install it. The certificate will most likely not be verified by another party. But that's ok, because you are the person who verified it. It is a certificate about you, and you know yourself better than most other people (except your mama of course).

iphone install profile window

You are then asked to enter the password you used to encrypt the certificate earlier. Once this is done your certificate will be installed on your iPhone, where it can stay happily for a very long time.

enter certificate password

If you wish to have a number of different personalities on the web you can create different foaf profiles of yourself, where you can link different pieces of your web life together. As all detective films show, it is very difficult to keep things forever secret. But you can at least keep pieces of your life clearly separated, to keep nosy people busy.

Global Identity in the iPhone browser

Typing user names and passwords on cell phones is extremely tedious. Here we show how identification & authentication can be done in two clicks. No URL to type in, no changes to the iPhone, just using bog standard SSL technology tied into a distributed global network of trust, which is known as foaf+ssl.

After having installed a foaf+ssl certificate on my phone (which I will explain how to do in my next post), I directed Safari to, which is a foaf+ssl enabled web site. This brought up the following screen:

empty page

This is a non-personalised page. In the top right is a simple foaf+ssl login button. This site was not designed for the iPhone, or the button would have been a lot more prominent. (This is easy to change of course.) So I then zoomed in on the login link as shown in the following snapshot. Remember that I don't have an account on this site. This could be the first time ever I go there. But nevertheless I can sign up: just click that link.

login link

So clicking on this foaf+ssl enabled link brings up the following window in Safari. Safari warns me first that the site requires a certificate. The link I clicked on sent me to a page that is requesting my details.

certificate warning

As I do in fact want to login, I click the continue button. The iPhone then presents me with an identity selector, asking me which of my two certificates I want to use to log in:

certificate selection

Having selected the second one, the certificate containing my WebId is sent to the server, which authenticates me. The information from my foaf file is then used to personalise my experience. Here gives me a nice human readable view of my foaf file. I can even explore my social network right there and then, by clicking on the links to my friends. Again, this will work even if you never did go to before. All you need is of course a well filled out foaf file, which services such as are making very easy to do. Anyway, here is the personalised web page. It really knows a lot about me after just 2 clicks!


The site currently has another tab, showing my activity stream of all the chats I have on the web, which it can piece together since I linked all my accounts together in my foaf file, as I explained in the post "Personalising my Blog" a few months ago.

activity stream

Other web sites could use this information very differently. My web server itself may also decide to show selected information to selected servers... Implementing this, it turns out, is quite easy. More on that on this blog and on the foaf-protocols mailing list.
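On the server side, the main requirement is that the TLS layer asks for - but does not insist on - a client certificate, and that it accepts certificates no known CA has signed, since foaf+ssl certificates may be self-signed. As a sketch of what that could look like with Apache's mod_ssl (directive placement will depend on your setup):

```apache
# Ask the browser for a client certificate, but don't reject clients
# that present none, or one signed by an unknown (e.g. self-signed) CA:
# the WebId in the certificate is verified against the foaf file instead.
SSLVerifyClient optional_no_ca
SSLVerifyDepth  1
# Make the client certificate available to the application layer
SSLOptions +ExportCertData +StdEnvVars
```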

Thursday Jan 15, 2009

The W3C Workshop on the Future of Social Networking Position Papers

picture by Salvadore Dali

I am in Barcelona, Spain (the country of Dali) for the W3C Workshop on the Future of Social Networking. To prepare for this I decided to read through the 75 position papers. This is the conference I have been the best prepared for ever. It really changes the way I can interact with other attendees. :-)

I wrote down a few notes on most papers I read, to help me remember them. This took me close to a week, a good part of which I spent trying to track down the authors on the web, find their pictures, familiarise myself with their work, and fill out my Address Book. Anything I could do to help me find as many connections as possible to help me remember the work. I used delicious to save some subjective notes, which can be found under the w3csn tag. I was going to publish this on Wednesday, but had not quite finished reading through all the papers. I got back to my hotel this evening to find that Libby Miller, who co-authored the foaf ontology, had beaten me to it with the extent and quality of her reviews, which she published in two parts:

Amazing work Libby!

70 papers is more than most people can afford to read. If I were to recommend just a handful of papers that stand out in my mind for now these would be:

  • Paper 36, by Ching-man Au Yeung, Ilaria Liccardi, Kanghao Lu, Oshani Seneviratne and Tim Berners-Lee, is the must-read paper, entitled "Decentralization: The Future of Online Social Networking". I completely agree with this outlook. It also mentions my foaf+ssl position paper, which of course gives it full marks :-) I would perhaps use "distribution" over "decentralisation", or some word that better suggests that the social network should be able to be as much of a peer-to-peer system as the web itself.
  • "Leveraging Web 2.0 Communities in Professional Organisations" really proves why we need distributed social networks. The paper focuses on the problems faced by Emergency Response organisations. Social networks can massively improve the effectiveness of such responses, as some recent catastrophes have shown. But ER teams just cannot expect everyone they deal with to be part of just one social network silo. They need to get help from wherever it can come: from professional ER teams, from people wherever they are, from information wherever it finds itself. Teams need to be formed ad hoc, on the spot. Not all data can be made public. Distributed, open, secure social networks are what is needed in such situations. Perhaps the foaf+ssl proposal (wiki page) can help make this a reality.
  • In "Social networking across devices: opportunity and risk for the disabled and older community", Henny Swan explains how much social networking information could be put to use to make better user interfaces for the disabled. Surprisingly, none of the web sites so taken by web 2.0 technologies seem to put any serious effort into this space. Apparently this can be done with web 2.0 technologies, as Henny explains in her blog. The semantic web could help even further, I suggested to her at her talk today, by splitting the data from the user interface. Specialised browsers for the disabled could adapt the information to their needs, making it easy for them to navigate the graph.
  • "Trust and Privacy on the Social Web" starts the discussion in this very important space. If there are to be distributed social networks, they have to be secure, and the privacy and trust issues need to be looked at carefully.
  • On a lighter note, Peter Ferne's very entertaining paper "Collaborative Filtering and Social Capital" comes with a lot of great links and is a pleasure to read. Did you know about the Whuffie Index or CELEBDAQ? Find out here.
  • Many of the telecoms papers, such as Telefonica's "The social network behind telecom networks", reveal the elephant in the room that nobody saw in social networking: the telecoms. Who has the most information about everyone's social network? What could they do with this information? How many people have phones, compared to internet access? Something to think about.
  • Nokia's position paper can then be seen in a different light. How can handset manufacturers put to use the social networking and location information contemporary devices are able to access? The Address Book is the most important application in a cell phone. But do people want to only connect to other Nokia users? This has to be another reason for distributed social networks.

    I will blog about other posts as the occasion presents itself in future blogs. This is enough for now. I have to get up early and be awake for tomorrow's talks which start at 8:30 am.

    In the mean time you can follow a lively discussion of the ongoing conference on twitter under the w3csn tag.

Friday Dec 12, 2008

    ruby script to set skype and adium mood message with twitter on osx

    Twitter is a great way to learn many little web2.0ish things. I wanted to set the status message on my Skype and Adium clients using my last twitter message. So I found a howto document by Michael Tyson which I adapted a bit to add Skype support and to only post twits that were not replies to someone else - I decided there was just too much loss of context for that to make sense.

    #!/usr/bin/env ruby
    # Update iChat/Adium/Skype status from Twitter
    # Michael Tyson
    # Contributor: Henry Story

    # Set Twitter username here
    Username = 'bblfish'

    require 'net/http'
    require 'rexml/document'
    include REXML

    # Download the timeline Atom feed and extract the latest non-reply entry
    url = "" + Username + ".atom"
    xml_data = Net::HTTP.get_response(URI.parse(url)).body
    doc    =
    latest = XPath.match(doc, "//content").detect { |c| not /@/.match(c.text) }
    exit if latest.nil?
    # strip the leading "username: " prefix from the entry text
    message = latest.text.gsub(/^[^:]+:\s*/, '')

    # Apply to status
    script = 'set message to "' + message.gsub(/"/, '\\\\"') + "\"\n" +
             'tell application "System Events"' + "\n" +
             'if exists process "iChat" then tell application "iChat" to set the status message to message' + "\n" +
             'if exists process "Adium" then tell application "Adium" to set status message of every account to message' + "\n" +
             'if exists process "Skype" then tell application "Skype" to send command "set profile mood_text " & message script name "twitter"' + "\n" +
             'end tell' + "\n"
    IO.popen("osascript", "w") { |f| f.puts(script) }

    This can then be added to the unix crontab as explained in Michael's article, and all is good.
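For the crontab step, an entry along these lines does the trick; the path and the ten-minute interval are just examples:

```
# m    h  dom mon dow  command
*/10   *   *   *   *   /Users/bblfish/bin/update_status.rb >/dev/null 2>&1
```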

    What can one learn with this little exercise? Quite a lot:

    • Ruby - this is my first Ruby hack
    • Atom - twitter uses an atom xml feed to publish its posts
    • unix crontab
    • AppleScript to send messages to all these silly OSX apps
    • vi to edit all of this, but that's not obligatory, you can use less viral ones
    • the value of reusing data across applications
    So that's a good way to spend a little time when one has had a little bit too much to drink the night before. Hmm, is this what one calls procrastination (video)?

    Thursday Dec 04, 2008

    video on distributed social network platform NoseRub

    I just came across this video on Twitter by pixelsebi explaining Distributed social networks in a screencast, and especially a php application NoseRub. Here is the video.

    Distributed Social Networking - An Introduction from pixelsebi on Vimeo.

    On a "Read Write Web" article on his video, pixelsebi summarizes how all these technologies fit together:

    To sum it up - if I had to describe it to somebody who has no real clue about it at all:
    1. Distributed Social Networking is an architecture approach for the social web.
    2. DiSo and Noserub are implementations of this "social web architecture"
    3. OpenSocial REST API is one of many ways to provide data in this distributed environment.
    4. OpenSocial based Gadgets might run at any node/junction of this distributed environment and might be able to handle this distributed social web architecture.

    I would add that foaf provides semantics for describing distributed social networks, and foaf+ssl is one way to add security to the system. My guess is that the OpenSocial JavaScript API can be decoupled from the OpenSocial REST API to produce widgets however the data is produced (unless they made the mistake of tying it too closely to certain URI schemes).

    Tuesday Dec 02, 2008

    foaf+ssl: adding security to open distributed social networks

    For the "W3C Workshop on the Future of Social Networking", taking place in Barcelona January 2009

    Henry Story
    Bruno Harbulot, Ian Jacobi, Toby Inkster
    Melvin Carvalho

    Semantic Web vocabularies such as foaf permit distributed hyperlinked social networks to exist. We would like to discuss a group of related ways we are exploring (mailing list) to add information and services protection to such distributed networks.

    One major criticism of open networks is that they seem to have no way of protecting the personal information distributed on the web or limiting access to resources. Few people are willing to make all their personal information public; many would like large pieces to be protected, made available only to a select group of agents. Giving access to information is very similar to giving access to services. There are many occasions when people would like services to be accessible only to members of a group, such as allowing only friends, family members or colleagues to post a blog entry, photo or comment on a site. How does one do this in a maximally flexible way, without requiring any central point of access control?

    Using an intuition made popular by OpenID, we show how one can tie a User Agent to a URI by proving that he has write access to it. foaf+ssl is architecturally a simpler alternative to OpenID (fewer connections) that uses X.509 certificates to tie a User Agent (Browser) to a Person identified via a URI. However, foaf+ssl can provide additional features, in particular some trust management, relying on signed FOAF files in conjunction with a set of locally trusted keys, as well as a bridge with traditional PKIs. By using the existing SSL certificate exchange mechanism, foaf+ssl integrates more smoothly with existing browsers (pictures with Firefox) including mobile devices, and permits automated sessions in addition to interactive ones.

    The steps in the protocol can be summarised simply:

    1. A web page points to a protected resources using a https URL, e.g.
    2. The client fetches the secure http URL .
    3. As part of that exchange the server requests the client certificate. The client returns Romeo's (possibly self-signed) certificate, containing the little known X.509 v3 extensions section:
              X509v3 extensions:
                 X509v3 Subject Alternative Name: 
      Because the connection is encrypted, Juliet's server knows that Romeo's client knows the private key of the public key that is also passed in the certificate. Something like:
            Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                  RSA Public Key: (1024 bit)
                      Modulus (1024 bit):
                      Exponent: 65537 (0x10001)
    4. Juliet's server dereferences the URI found in the certificate, fetching a document .
    5. The document's log:semantics is queried for information regarding the public key contained in the previously mentioned X.509. This can be done in part with a SPARQL query such as:
      PREFIX cert: <>
      PREFIX rsa: <>
      SELECT ?modulus ?exp
      WHERE { 
         ?key cert:identity <>;
              a rsa:RSAPublicKey;
              rsa:modulus [ cert:hex ?modulus; ];
           rsa:public_exponent [ cert:decimal ?exp ] .
      }
      If the public key in the certificate is found to be identical to the one published in the foaf file, the server knows that the client has write access over the resource.
    6. Romeo's identity is then checked for its position in a graph of relations (including friendship ones) in order to determine trust according to some criteria. Juliet's server can get this information by crawling the web starting from her foaf file, or by other means.
    7. Access is granted or denied.
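To make step 5 concrete, here is a minimal sketch (in Python, with made-up helper names) of the final comparison, once the modulus and exponent have been extracted from the certificate and from the SPARQL results. The normalisation matters, because cert:hex values in foaf files are often pretty-printed over several lines:

```python
def normalize_hex(value):
    """cert:hex values are often broken over several lines in foaf files;
    ignore whitespace, case and leading zeros before comparing."""
    return "".join(value.split()).lower().lstrip("0")

def key_matches(cert_modulus_hex, cert_exponent,
                foaf_modulus_hex, foaf_exponent):
    """Step 5 of the protocol above: does the public key sent in the
    client certificate match the one published at the WebId?
    (Illustrative only - a real verifier would pull these values out
    with an X.509 parser and an RDF/SPARQL library.)"""
    return (normalize_hex(cert_modulus_hex) == normalize_hex(foaf_modulus_hex)
            and int(cert_exponent) == int(foaf_exponent))
```

Notice that the server never needs a CA's signature here: the proof is that the client holds the private key for a public key published at the WebId it claims.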

    We have tested this on multiple platforms in a number of different languages, (Java™, Python, ...) and across a number of existing web browsers (Firefox, Safari, more to come).

    foaf+ssl is one protocol that we would like to concentrate on due to its simplicity. But there are a number of other ways of achieving the same thing, by using OpenID for example. All of them require some extra pieces:

    • An ontology to describe what can be done with the data (copied, republished, ...) or what obligations one incurs in using a service.
    • An ontology to describe who has access to the service. This would be useful to help people decide if they should bother trying to access it, or what else they need to do, such as become friends with someone, or reveal a bug in the software somewhere.
    • Other things that might come up.

    We will discuss our experience implementing this, the problems we have encountered and where we think this is leading us to next.

    Sunday Nov 30, 2008

    personalising my blog

    image of the sidebar of my blog

    Those who read me via news feeds (I wonder how many those are), may not have seen the recent additions I have made to my blog pages. I have added a view onto:

    This is quite a lot of personal info. With my friend-of-a-friend network, it should be clear how more and more of the type of information you could find on social networking sites such as facebook is available on my blog. And this could keep growing of course.

    The current personalization is mostly powered by JavaScript (with one flash application for ). Here is the code I added to my blog template, pieces of which I found here and there on the web, often in templates provided by the web services themselves.

     <h2>Recent Photos</h2><!-- see -->
        <div id="flickr"><script type="text/javascript" 
        <div class="recentposts">
         <script type="text/javascript" 
        <div id="twitter_div" class="recentposts">
        <a href="">last 5 entries:</a><br/>
    <ul id="twitter_update_list"></ul>
    <script src="" type="text/javascript"></script>
    <script src="" type="text/javascript">
      <h2>Listening To</h2>
    <!-- I am looking for something lighter than this! -->
    <style type="text/css">table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 td 
      {margin:0 !important;padding:0 !important;border:0 !important;}
     table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 tr.lfmHead 
         no-repeat 0 0 !important;}
     table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 tr.lfmEmbed object {float:left;}
     table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 tr.lfmFoot td.lfmConfig a:hover 
        {background:url( no-repeat 0px 0 !important;;}
     table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 tr.lfmFoot td.lfmView a:hover 
        {background:url( no-repeat -85px 0 !important;}
     table.lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484 tr.lfmFoot td.lfmPopup a:hover 
        {background:url( no-repeat -159px 0 !important;}
    <table class="lfmWidgetchart_0bbc5b054e26d39362c0a10c7761f484" cellpadding="0" cellspacing="0" border="0" 
       style="width:184px;"><tr class="lfmHead">
       <td><a title="bblfish: Recently Listened Tracks" href="" target="_blank" 
             no-repeat 0 -20px;text-decoration:none;border:0;">
       <tr class="lfmEmbed"><td>
       <object type="application/x-shockwave-flash" data="" 
         id="lfmEmbed_210272050" width="184" height="199"> 
       <param name="movie" value="" /> 
      <param name="flashvars" value="type=recenttracks&user=bblfish&theme=blue&lang=en&widget_id=chart_0bbc5b054e26d39362c0a10c7761f484" /> 
       <param name="allowScriptAccess" value="always" /> 
        <param name="allowNetworking" value="all" /> 
        <param name="allowFullScreen" value="true" /> 
        <param name="quality" value="high" /> <param name="bgcolor" value="6598cd" /> 
        <param name="wmode" value="transparent" /> <param name="menu" value="true" /> 
        </object></td></tr><tr class="lfmFoot">
        <td style="background:url( repeat-x 0 0;text-align:right;">
        <table cellspacing="0" cellpadding="0" border="0" style="width:184px;">
        <tr><td class="lfmConfig">
       <a href="" 
        title="Get your own widget" target="_blank" 
              no-repeat 0px -20px;text-decoration:none;border:0;">
        </a></td><td class="lfmView" 
        <a href="" title="View bblfish's profile" 
         target="_blank" style="display:block;overflow:hidden;width:74px;height:20px;background:url(
            no-repeat -85px -20px;text-decoration:none;border:0;">
        </td><td class="lfmPopup"
        <a href="" 
           title="Load this chart in a pop up" 
                 no-repeat -159px -20px;text-decoration:none;border:0;" 
           onclick=" + '&resize=0','lfm_popup','height=299,width=234,resizable=yes,scrollbars=yes'); return false;"

    So that, as you can see, is quite a lot of extra html every time someone downloads my web page. This would not be too bad, but the javascript widgets above themselves go and fetch a lot more html, javascript and other content, further slowing down the responsiveness of the web pages. This data is served to everyone, whether they want to see all that information or not. Well, if they don't, they can subscribe to the rss feed by dragging this page into a feed reader. In that case they will just see the blog posts themselves, and not the sidebar.

    Why add this information to my blog? Well it gives people an idea of where they can find out more about me. A lot of people don't know that I have a feed, so they may not know that they can follow what I am reading over there. This gives the initial feeling of what it would be like to have a deeper view on my activities.

    But as mentioned previously, there are a few problems with this.

    • This makes this page heavier.
    • Every page view on my blog will download that information and start those applets. ( A great way for those services to track the number of people directly visiting these pages btw. )
    • This can become tedious. People who want to follow me can do so by coming to this web page from time to time. But with enough sites like that, this is going to become a bit difficult to do. One does not want to spend all day reading the different feeds of information of one's friends. This is what Facebook does for people: it is a giant web-based feed reader of social information.
    • Difficult to track change: If I switch to a different bookmarking service, perhaps a semantic one like faviki, I will have to redo this page, and all my friends are going to have to update their feeds.
    • If I add more of the resources I am working with, this page is going to become unmaintainably long.
    • People who read my feed will not notice the changes occurring here.

    So those are the problems that Web 3.0, the semantic web is going to solve. By just downloading my foaf file, you should have access to my network of friends via linked data, and via pointers to all the other resources on the web that I may be using. Whatever tool you use will be able to then keep all this data easily up to date, and with great search tools, enhance your view of the many linked networks you will be part of and tracking.

    The whole code you see above could then be replaced with one link to my foaf file. That foaf file can itself point to further resources in case it becomes large. To give a list of some of the most interesting accounts I have, I added the following N3 to my foaf file today:

    @prefix : <> .
    @prefix foaf: <> .
    @prefix rdfs: <> .
    :me foaf:holdsAccount
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's skype account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's flickr pictures account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's music account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's delicious bookmarking account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's developer account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's twitter micro blogging account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's twine semantic aggregation account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's facebook social networking account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <>
                  ],
                  [ a foaf:OnlineAccount ;
                    rdfs:label "Henry Story's linked in business social network account"@en ;
                    foaf:accountName "bblfish" ;
                    foaf:accountServiceHomepage <> ;
                    foaf:accountProfilePage <>
                  ] .

    First of all, it should be clear that the above is a lot more readable than the javascript code shown earlier in this post. Secondly, I listed over twice as many online accounts there as I currently have in my sidebar. And finally, this is in a file that a client would not need to download unless it had an interest in knowing more about me. It could easily be cached over a period of time, and need not be served up again on each page request.

    Again, for one possible view on the above data it is worth installing the Tabulator Firefox extension and then clicking on my foaf icon. There are of course many more things specialized software could do with that information than present it like that.

    On this topic, you may want to continue by looking at the recently published, excellent and beautiful presentation on the subject of the Social Semantic Web, by John Breslin.

    variation on @timoreilly: hyperdata is the new intel outside

    Context: Tim O'Reilly said "Data is the new Intel Inside".

    Recently in a post "Why I love Twitter":

    What's different, of course, is that Twitter isn't just a protocol. It's also a database. And that's the old secret of Web 2.0, Data is the Intel Inside. That means that they can let go of controlling the interface. The more other people build on Twitter, the better their position becomes.

    The meme was launched in the well known "What is Web 2.0" paper in the section entitled "Data is the next Intel Inside"

    Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.

    Most of the data is outside your database. It can only be that way: the world is huge, and you are just one small link in the human chain. Linking that data is knowledge and value creation. Hyperdata is the foundation of Web 3.0.

    Thursday Nov 20, 2008

    foaf+ssl: a first implementation

    The first very simple implementations for the foaf+ssl protocol are now out: the first step in adding simple distributed security to the global open distributed decentralized social network that is emerging.

    Update Feb 2009: I put up a service to create a foaf+ssl certificate in a few clicks. If you are short on time, try that out first.

    The foaf+ssl protocol has been discussed in detail in a previous blog: "FOAF & SSL: creating a global decentralised authentication protocol", which goes over the theory of what we have implemented here. For those of you who have more time I also recommend my JavaOne 2008 presentation Building Secure, Open and Distributed Social Network Applications, which explains the need for a protocol such as this, gives some background understanding of the semantic web, and covers the working of this protocol in detail, all in a nice to listen to slideshow with audio.

    In this article we are going to be rather more practical, and less theoretical, but still too technical for many. I could spend a lot of time building a nice user interface to make this blog a point-and-click experience. But we are not looking for point-and-click users right now; we are looking for people who feel at home looking at some code, working with abstract security concepts, who can be critical and find solutions to problems, and who are willing to learn some new things. So I have simplified things as much as needed for people who fall into that category (and made it easy enough for technical managers to follow too, I hope).

    To try this out yourself you need just download the source code in the So(m)mer repository. This can be done simply with the following command line:

    $ svn checkout sommer --username guest
    (leave the password blank)

    This is downloading a lot more code than is needed, by the way. But I don't have time to spend on isolating all the dependencies, bandwidth is cheap, and the rest of the code in there is pretty interesting too, I am sure you will agree. Depending on your connection speed, this will take some time to download, so we can do something else in the meantime, such as have a quick look at the UML diagram of the foaf+ssl protocol:

    foaf+ssl uml sequence diagram

    Let us make clear who is playing what role. You are Romeo. You want your client - a simple web browser such as Firefox or Safari will do - to identify you to Juliette's web server. Juliette, as it happens, is a semantic web expert, and she trusts that if you are able to read through this blog, understand it, create your X509 certificate and set up your foaf file so that it publishes your public key information correctly, then you are human, intelligent, avant-garde, and have enough money to own a web server, which is all to your advantage. As a result her semantically enabled server will give you the secret information you were looking for.

    Juliette knows of course that things won't stay that simple. When distributed social networks are big enough, the proportion of fools will be large enough for their predators to take an interest in this technology, and the tools for putting up a certificate will come packaged with everyone's operating system, embedded in every tool, etc... At that point things will have moved on, and Juliette will have added more criteria for giving access to her secret file. Not only will your certificate have to match the information in your foaf file as it does now, but given that she knows your URL and what you have published there of your social graph, she will be able to use your position in the social graph of her friends to enable her server to decide how to treat you.

    Creating a certificate and a foaf file

    So the first thing to do is for you to create yourself a certificate and a foaf file. This is quite easy. You just need to do the following in a shell.

    $ cd sommer/misc/FoafServer/
    $ java -version
    java version "1.5.0_16"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b06-284)
    Java HotSpot(TM) Client VM (build 1.5.0_16-133, mixed mode, sharing)
    $ ant jar

    Currently one needs at least Java 5 to run this.

    Before you create your certificate, you need to know what your foaf URL is going to be. If you already have a foaf file, then that is easy, and the following will get you going:

    $ java -cp dist/FoafServer.jar  -shortfoaf
    Enter full URL of the person to identify (no relative urls allowed): 
    for example:
    Enter password for new keystore :enterAnewPasswordForNewStore     
    publish the triples expressed by this n3
    # you can use cwm to merge it into an rdf file
    # or a web service such as to convert it to rdf/xml
    # Generated by
    @prefix cert: <> .
    @prefix rsa: <> .
    @prefix foaf: <> .
    <> a foaf:Person; 
        is cert:identity of [ 
              a rsa:RSAPublicKey;
              rsa:public_exponent "65537"^^cert:decimal ;
              rsa:modulus """b6bd6ce1a5ef51aaa69752c6af2e71948ab6da
    74d1eb32ced152084cfb860fb8cb5298a3c0270145c5d878f07f6417af"""^^cert:hex ;
              ] .
    the public and private keys are stored in cert.p12
    you can list the contents by running the command
    $ openssl pkcs12 -clcerts -nokeys -in cert.p12 | openssl x509 -noout -text

    If you then run the openssl command, you will find that the public key components match the rdf above.

    $  openssl pkcs12 -clcerts -nokeys -in cert.p12 | openssl x509 -noout -text
    Enter Import Password:
    MAC verified OK
            Version: 3 (0x2)
            Serial Number: 1 (0x1)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: CN=
                Not Before: Nov 19 10:58:50 2008 GMT
                Not After : Nov 10 10:58:50 2009 GMT
            Subject: CN=
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                RSA Public Key: (1024 bit)
                    Modulus (1024 bit):
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Basic Constraints: critical
                X509v3 Key Usage: critical
                    Digital Signature, Non Repudiation, Key Encipherment, Key Agreement, Certificate Sign
                Netscape Cert Type: 
                    SSL Client, S/MIME
                X509v3 Subject Key Identifier: 
                X509v3 Authority Key Identifier: 
                X509v3 Subject Alternative Name: 
        Signature Algorithm: sha1WithRSAEncryption

    Notice also that the X509v3 Subject Alternative Name is your foaf URL. The Issuer Distinguished Name (starting with CN= here) could be anything.
    This, by the way, is the certificate that you will be adding to your browser in the next section.
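    As a quick sanity check, openssl can also print the modulus on its own, which makes comparing it with the rdf above easier. The sketch below uses a throwaway self-signed certificate (demo-key.pem and demo-cert.pem are made-up names, not part of the tutorial) so the commands can be tried standalone; to check your real key, point the second command at the certificate extracted from cert.p12 instead.

```shell
# Create a throwaway self-signed RSA certificate to experiment with
# (demo-key.pem / demo-cert.pem are hypothetical names).
openssl req -x509 -newkey rsa:1024 -keyout demo-key.pem -out demo-cert.pem \
    -days 1 -nodes -subj "/CN=demo" 2>/dev/null

# Print just the modulus as one hex string, prefixed with "Modulus="
openssl x509 -noout -modulus -in demo-cert.pem

# The public exponent appears in the full text dump
openssl x509 -noout -text -in demo-cert.pem | grep 'Exponent:'
```

    The hex string printed by `-modulus` is exactly what should appear (modulo line wrapping and case) in the cert:hex literal of your foaf file.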

    If you don't have a foaf file, then the simplest way to do this is to:

    1. decide where you are going to place the file on your web server
    2. decide what the name of it is
    3. Put a fake file there named cert.rdf
    4. get that file with a browser by typing in the full url there
    5. your foaf url will then be

    Then you can use the following command to create your foaf file:

    $ java -cp dist/FoafServer.jar

    That is the same as the first one but without the -shortfoaf argument. You will be asked for some information to fill up your foaf file, so as to make it a little more realistic -- you might as well get something useful out of this. You can then use either cwm or a web service to convert that N3 into rdf/xml, which you can then publish at the correct location. Now entering your url into a web browser should get your foaf file.

    Adding the certificate to the browser

    The previous procedure will have created a certificate cert.p12, which you now need to import into your browser. The software that creates the certificate could I guess place it in your browser too, but that would require some work to make it cross platform. Something to do for sure, but not now. On OSX adding certs programmatically to the Keychain application is quite easy.

    So to add the certificate to your browser's store, open up Firefox's preferences and go to the Advanced->Encryption tab as shown here

    Firefox 3.03 Advanced Preferences dialog

    Click on "View Certificates" button, and you will get the Certificate Manager window pictured here.

    Firefox 3.03 Certificate manager

    Click the import button, and import the certificate we created in the previous section. That's it.

    Starting Juliette's server

    In a few days' time Ian Jacobi will have a Python-based server working with the new updated certificate ontology. I will point to that as soon as he has it working. In the meantime you can run Juliette's test server locally like this:

    $ ant run

    This will start her server on localhost, port 8843, where it will be listening on a secure socket.
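    You can watch the TLS side of what follows from the command line with openssl's s_client. Since you may not have the Java server running while reading, the sketch below first stands up a throwaway s_server on port 8843 (sk.pem, sc.pem and handshake.txt are made-up names); with Juliette's server actually running, the s_client line alone is what you would use, and you would also see her server request your client certificate.

```shell
# Stand-in TLS server on port 8843, using a throwaway self-signed certificate
openssl req -x509 -newkey rsa:1024 -keyout sk.pem -out sc.pem \
    -days 1 -nodes -subj "/CN=localhost" 2>/dev/null
openssl s_server -accept 8843 -cert sc.pem -key sk.pem -www &
SERVER_PID=$!
sleep 1

# Probe the secure socket and record which certificate the server presents
echo | openssl s_client -connect localhost:8843 2>/dev/null > handshake.txt
grep 'subject=' handshake.txt

kill $SERVER_PID
```

    The `subject=` line in the handshake transcript shows the server certificate's Distinguished Name - the same information Firefox inspects before raising its warnings below.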

    Connecting your client to Juliette's server

    So now you can just go to https://localhost:8843/servlet/CheckClient in your favorite browser. This is Juliette's protected resource by the way, so we have moved straight to step 2 in the above UML diagram.

    Now because this is a server running locally, with a secure port open that presents a certificate not signed by a well established security authority, things get more complicated than they usually need be. The following steps appear only because of this, and to make it clear that they are just an artifact of this experiment, I have placed the following paragraphs on a blue background. You will only need to do this the first time you connect in this experiment, so be wary of the blues.

    Firefox gave me the following warning the first time I tried it.

    Firefox 3.03 certificate error dialog

    This is problematic because it just warns that the server's certificate is not trusted, but does not allow you to specify that you trust it (after all, perhaps you were just mailed the public key in the certificate, and you could use that information to decide that you trust the server).

    On trying again (shift-reloading perhaps, I am not sure), I finally got Firefox to present me with the following secure connection failed page:

    Firefox 3.03 secure connection failed page

    Safari had done the right thing first off. Since we trust localhost:8843 (having just started it and even inspected some of the code) we just need to click the "Or you can add an exception ..." link, which brings up the dialog below:

    Firefox 3.03 secure add exception page

    They are trying to frighten users here of course. And so they should. Ahh, if only we had a localhost certificate signed by a trusted CA, I would not have to write this whole part of the blog!

    So of course you go there and click "Add Exception...", and this brings up the following dialog.

    Firefox 3.03 get Certificate dialog

    So click "Get Certificate" and get the server certificate. When done, you can view the certificate:

    Firefox 3.03 get Certificate dialog

    And confirm the security Exception.

    Again all of this need not happen. But since it also makes clear what is going on, it can be helpful to show it.

    Choose your certificate

    Having accepted the server's certificate, it will now ask you for yours. As a result of this Firefox opens up the following dialog.

    Firefox 3.03 Server requesting Client Certificate

    Since you only have one client certificate this is an easy choice. If you had a number of them, you could choose which persona to present to the site. When you click OK, the certificate will be sent back to the server. This is the end of stage 2 in the UML diagram above. At that point Juliette's server (on localhost) will go and get your foaf file (step 3), and compare the information about your public key to the one in the certificate you just presented (step 4) by making the following query on your foaf file, as shown in the CheckClient class:

          TupleQuery query = rep.prepareTupleQuery(QueryLanguage.SPARQL,
                                    "PREFIX cert: " +
                                    "PREFIX rsa: " +
                                    "SELECT ?mod ?exp " +
                                    "WHERE {" +
                                    "   ?sig cert:identity ?person ." +
                                    "   ?sig a rsa:RSAPublicKey;" +
                                    "        rsa:modulus [ cert:hex ?mod ] ;" +
                                "        rsa:public_exponent [ cert:decimal ?exp ] ." +
                                "}");
           query.setBinding("person", vf.createURI(uri.toString()));
           TupleQueryResult answer = query.evaluate();
    If the information in the certificate and the foaf file correspond, then the server will send you Juliette's secret information. In a Tabulator enabled browser this comes out like this:

    Firefox 3.03 displaying Juliette's secret info

    The source code for all that is not far, and you will see that the algorithms used are very simple. This proves that the minimal piece, which is equivalent to what OpenID does, works. Next we will need to build up the server so that it can make decisions based on a web of trust. But by then you will have your foaf file, and filled up your social network a little for this to work.
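    Stripped of the rdf machinery, the check in steps 3 and 4 boils down to comparing two strings: the modulus published in your foaf file against the modulus of the certificate you presented. A shell sketch of that comparison (throwaway file names; the "published" file stands in for what the SPARQL query above retrieves from the foaf file):

```shell
# A throwaway certificate playing the role of the one in your browser
openssl req -x509 -newkey rsa:1024 -keyout rk.pem -out rc.pem \
    -days 1 -nodes -subj "/CN=romeo" 2>/dev/null

# "Publish" its modulus, as your foaf file does (step 3: the server fetches this)
openssl x509 -noout -modulus -in rc.pem | cut -d= -f2 > published-modulus.txt

# Step 4: compare the modulus of the presented certificate with the published one
presented=$(openssl x509 -noout -modulus -in rc.pem | cut -d= -f2)
if [ "$presented" = "$(cat published-modulus.txt)" ]; then
    echo "modulus matches: client controls the key the foaf file describes"
fi
```

    The real server does the same thing through the Sesame query shown above, which is why forging an identity would require either controlling the foaf URL or the private key - exactly the guarantee the protocol needs.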

    Further Work

    Discussions on this and on a number of other protocols in the same space are currently happening on the foaf protocols mailing list. You are welcome to join the sommer project to work on the code and debug it. As I mentioned, Ian Jacobi has a public server running which he should be updating soon with the new certificate ontology that we have been using here.

    Clearly it would be really good to have a number of more advanced servers running this in order to experiment with access controls that add social proximity requirements.

    Things to look at:

    • What other browsers does this work with?
    • Can anyone get this to work with Aladdin USB e-Token keys or similar tools?
    • Work on access controls that take social proximity into account
    • Does this remove the need for cookie identifiers on web sites?

    I hope to be able to present this at the W3C Workshop on the Future of Social Networking in January 2009.



