Wednesday Jan 21, 2009

Outline of a business model for open distributed social networks

illustration of SN data silos by the Economist

The organisers of the W3C Workshop on the Future of Social Networking had the inspiration to include a track on Business Models. What would be the interest for large players to open up, to play ball in a larger ecosystem of social networks? What would the business model be for new entrants? This question was clearly on many of the attendees' minds, and it is one I keep coming across.

First, without Linked Social Networks there are many disincentives to creating new web services: users of these services need to duplicate data they have already created elsewhere, and the network effect of the data they are using is bound to be much smaller. I have found myself uninterested in trying out many new web 2.0 offerings for just these reasons. It is much more rewarding for developers to create applications for the Facebook platform, for example, where users just need to click a button to try out the application on the data they are already using - an application that may yet enrich that data further.

Open Secure Linked Social Networks, such as those made possible by foaf+ssl, give one the benefits enjoyed by application developers on the closed social networks, but in a distributed space, which should unleash a huge amount of energy. Good for the consumer. But good for the big network players? I will argue that they need to consider the opportunities this is opening up.
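To make the foaf+ssl idea concrete: you publish the public key of a self-signed certificate in your FOAF profile, and put your identifying URI in the certificate. A server you connect to over TLS can then dereference that URI and check that the published key matches the one in the certificate you presented - no central authority needed. A minimal sketch in Turtle (the cert/rsa property names follow my reading of the current draft and may still shift; the key values are of course made up):

@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix rsa:  <http://www.w3.org/ns/auth/rsa#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<#me> a foaf:Person;
    foaf:name "Henry Story" .

# the public key of my self-signed certificate, tied to my identifying URI
[] a rsa:RSAPublicKey;
    cert:identity <#me>;
    rsa:modulus "00cb24ed85d64d794b..."^^cert:hex;     # truncated here
    rsa:public_exponent "65537"^^cert:int .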

First of all one should notice that the network effect applies just as much to the big players as to the small ones. A growing distributed social network (SN) is a tide that will lift all boats, and since Metcalfe's law states that the value of a network grows as the square of the number of its nodes, doubling the number of nodes in the network may just quadruple its value to all. Perhaps the big operators are thinking that they control such a large slice of the market that there is not much doubling they can do by linking to the smaller players. As it happens most social networks are geographical monopolies, which strengthens that point of view (see slide 8 of the opening presentation at the W3C workshop). [Nothing stays still, though, and everyone should watch out for the potential SN controlled by the telecoms.]

But the network effect applies to the big players in another way too: if they wish to create small networks, then the value of those networks will be just as insignificant as those of the other smaller players. So let me take a simple example of very valuable smaller networks which have a huge and unsatisfied demand for social networking technologies, and the money to pay for tools that could satisfy it: companies! Companies need to connect their employees to their managers and colleagues inside the same company, to knowledge resources, to paychecks, and to many other internal resources such as wikis, blogs, etc. Companies of a large enough size usually have software to deal with all this. But even in a company such as Sun Microsystems, which is relatively large, the network effect of that information is not interesting enough for people to participate gladly in keeping it up to date. I often find it easier to go to Facebook to find a picture of someone in Sun. Why? Well, there is a very large value in adding one's picture to Facebook: 200 million other users to connect to. In Sun Microsystems there are only 34 thousand people to connect to, though, it is true, with a financial incentive added. Clearly the value of 200 million squared outweighs the incentive of being efficient at work.
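To put rough numbers on that intuition: by Metcalfe's reckoning Facebook's graph is worth something proportional to 200,000,000² ≈ 4×10¹⁶, while Sun's internal one is worth something proportional to 34,000² ≈ 1.2×10⁹ - a factor of some thirty-five million, which no plausible workplace incentive can bridge.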

One thing is clear: it is impossible to imagine that such large software companies could allow their employees to just use the closed SN sites directly to do their work - I mean here: have all their employees use the tools on facebook.com, for example. This would give those SN companies way too much insight into the dealings of the company. Certainly very large sections of industry, including defence contractors, large competitive industries such as car manufacturers, and governments, cannot legally or strategically permit such outsourcing to happen, even though the value of the information in such a large network would be astronomically larger. In some cases there are serious privacy and security issues that just cannot be ignored, however attractive the value calculation is.

So large SN players would have to enter the Fortune 1000 with social networking software that did not leak information. But in that case they would be in no seriously better position than the software that is already in there, and they would not be able to compete any better than the smaller companies working in this space, as they would not find it easy to leverage their main advantage: the network effect of their core offering.

And even if they did manage to leverage this in some ways, they would find it impossible to leverage that advantage in the ways that really count. Companies don't just want their employees to link up with their colleagues; they need them to link up with people outside the company, be it customers, government agencies, researchers, competitors, external contractors, local government, insurance or health agencies, and so on. The list is huge. So even if a large social network could convince one of these players of the advantage of its offering, it will never be able to convince every single partner of that company - for that would be to convince the whole world. Companies really want a global SN, and a global SN is also what Emergency Response teams really need.

To make such a globally linkable platform a reality one needs to build at the level of abstraction and clarity at which the only such global network in existence was built: the web itself. By using Linked Data one can create distributed social networks where each player maintains the information they feel a business need to maintain. With the very simple foaf+ssl protocol we have the lightest possible means to build security into a distributed social network. Simplicity and clarity - mathematical clarity - is essential when developing something that is to grow to a global scale.
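Concretely, linking across organisations then requires nothing more than each side publishing URIs for its people and referring to the other's - something like this sketch (all the URIs here are invented):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# published by bigco.com, in Jane's profile document
<http://bigco.com/staff/jane#me> foaf:knows <http://supplier.example/people/joe#me> .

Joe's profile remains under supplier.example's control, and either side can protect the sensitive parts of its data with foaf+ssl.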

Companies that work at providing tools for distributed social networks will therefore, if they play well together, find a huge opportunity opening up in front of them: helping enterprises in every industry segment link their employees to people in every walk of life.

Tuesday Dec 16, 2008

Link roundup before rebooting

The time has come for an OS upgrade to OSX 10.5.6, and as Firefox 3.1 beta 2 has held up very well for a long time, I now have a huge number of tabs open. So here are some worth reporting on here; the rest are on my delicious feed.

I started looking into cloud computing and the semantic web and found a few really nice links:

  • Virtuoso cloud edition for Amazon's EC2 from OpenLink Software, the company powering DBpedia and many of the Linked Data Cloud servers.
  • a new, highly scalable, Java-based semantic web database named Bigdata, which was presented at an O'Reilly conference earlier this year (PDF of presentation). From their web site:

    Bigdata(R) is an open-source scale-out storage and computing fabric supporting optional transactions, very high concurrency, and very high aggregate IO rates. Bigdata was designed from the ground up as a distributed database architecture optimized for very high aggregate IO rates running over clusters of 100s to 1000s of machines, but can also run in a single-server mode. Bigdata offers a distributed file system, similar to the Google File System but also useful for workflow queues, a data extensible sparse row store, similar to Googles widely recognized bigtable project, and map/reduce processing for parallelizing data intensive workflows over a cluster.

    Bigdata(R) comes packaged with a very high-performance RDF store supporting RDF(S) and OWL Lite inference.[...]The Bigdata RDF Store was designed specifically to meet requirements for very large scale semantic alignment and federation.

    Looking around in their javadoc I found that they use the Sesame API.

  • article by RedMonk on 15 ways to tell it's not cloud computing
  • The semantic grid project probably has some very good resources on cloud computing, so I should look there next.

I am sure there is more on that subject of cloud computing but that's what I have for now.

On social networks

For a bit of stimulation go look at Microsoft's rdf api.

Saturday Dec 13, 2008

Typealizer: analyzing your personality through your blog

illustration of the scientist Thanks to Mark Dixon I discovered Typealizer, a service that reads your blog and finds your psychological type. So of course I tried it on my own blog, as you will on yours shortly :-). This is what it had to say:

INTJ - The Scientists

The long-range thinking and individualistic type. They are especially good at looking at almost anything and figuring out a way of improving it - often with a highly creative and imaginative touch. They are intellectually curious and daring, but might be physically hesitant to try new things.

The Scientists enjoy theoretical work that allows them to use their strong minds and bold creativity. Since they tend to be so abstract and theoretical in their communication they often have a problem communicating their visions to other people and need to learn patience and to use concrete examples. Since they are extremely good at concentrating they often have no trouble working alone.

Well, that's not bad for flattery. So I reward them with this blog post.

They accompany their analysis with a brain activity diagram. This is the one I got:

Brain activity diagram for main blog

illustration of the travel category There is a lot in the cross-section of intuition and thinking, with some, but not a lot of, positioning in the practical. So, being all happily scientific, I decided to try out what it would say if I pointed Typealizer at the Travel category of this blog. This is what it had to say on that aspect of my personality - one that has, it is true, perhaps been a little in retreat recently.

ESTP - The Doers

The active and playful type. They are especially attuned to people and things around them and often full of energy, talking, joking and engaging in physical outdoor activities.

The Doers are happiest with action-filled work which craves their full attention and focus. They might be very impulsive and more keen on starting something new than following it through. They might have a problem with sitting still or remaining inactive for any period of time.

This also came with a brain activity diagram for that part of the blog.

So clearly a lot more biased towards action, as a travel blog should be.

Still, both of these diagrams leave around half of my brain activity uncaptured: the spiritual, idealistic side is not very visible. I wonder if that means I should speak more about open source and linux? ;-) I tried the Art category of my blog, but that did not move me more towards the feeling type, nor did the philosophy section make me more idealistic - just, again, more of a thinker, which they characterise like this:

INTP - The Thinkers

The logical and analytical type. They are especially attuned to difficult creative and intellectual challenges and always look for something more complex to dig into. They are great at finding subtle connections between things and imagining far-reaching implications.

They enjoy working with complex things using a lot of concepts and imaginative models of reality. Since they are not very good at seeing and understanding the needs of other people, they might come across as arrogant, impatient and insensitive to people that need some time to understand what they are talking about.

Now what could be interesting would be some way to do the inverse search: find out what your brain's activity diagram should look like, and ask to find blogs that fit those categories, which one could then use as a guide to help one develop that aspect of one's personality - or to find a partner :-)

PS. A thought: after categorizing people into 16 different groups this still leaves you with 8 billion people / 16 = 500 million people to choose from, and if every person had just 1000 web pages that would leave you with half a trillion pages to look at. So this character analysis can be useful, but there still have to be a lot of other criteria to make a good judgement call.

PPS. Oddly enough - or not - Ken Wilber's blog is categorised as being of the "executive type".

Wednesday Oct 01, 2008

Historical perspective on the 2008 crash

Dr Martin

In July 2006, Dr David Martin gave a talk, "Asymmetric Collateral Damage, Basel II, the Mortgage House of Cards, and the Coming Economic Crisis" (with audio), in which he gives a historical perspective on the 2008 crash, which he predicted with remarkable accuracy. He starts his story with the Battle of Waterloo, which made the fortune of Baron Nathan von Rothschild in a little-known speculative episode at the end of the war. Moving on to the coming introduction of the Basel II banking reform, and describing quickly what we now know to be the very dubious lending practices of many institutions, he goes on to predict the coming of a spectacular crash in 2008.

His talk is very lively and entertaining:

Now, I'm not going to point fingers at, you know, Treasury secretaries or anybody in that kind of environment. Seriously, I'm not pointing fingers. I'm just saying we did a very interesting thing. We decided that we had a very dynamic need in 2001 to get our economy back on track. And so, what we did was, we poured a significant amount of capital into what market? What market in 2001 wound up contributing to the majority of the growth of the GDP in 2001? What market?

“Housing.”

Housing. Ninety percent of the growth of the GDP in 2001 to 2002 was in the housing market… 90 percent. Did we all get, like, castles? Are we Europe? I mean, did we all, like, move into castles in 2001? This is bigger than 86 percent, isn't it? Did we do that? What did we do with that alleged housing market money?

A little further on he ties this to the current election.

Something happened last year. I don't know if any of you were watching, but something happened last year for the first time since 1933. Does anybody know what significant financial function happened last year?

(Comment from audience.)

Not only inverted growth but for the first time since 1933, and that's true. For the first time since 1933 something very interesting happened. We actually went negative in savings. Negative in savings. We had a four percent… in one calendar year, a four percent reduction in savings.

...David Ferren and I were very interested in looking at historical default rates. And we were looking at different types of credit and different types of defaults. And one of the things we were looking for is whether or not default rates happen in normal curves. Whether you're as likely to default Month one as Month 12 as Month 24, as Month 36, as Month… and you know what? It's not very linear. Debt actually looks like it works really well for about 14 months. And if you look past 14 months and you go out a little further and you go to about 17 months, you actually find out that debt starts feeling a little squirrelly.

That's when you start not making your payments. That's where you get into things like technical defaults and these kinds of little bumps in the road. You try to make a payment. You can't make a payment. You can't this, you can't that, and so on. But somewhere between the 17th month and the 24th month, all of a sudden, the fecal matter hits the rotary oscillator and it's bad. Everything that looked like a good credit starts to look like a bad credit.

So let’s see… Christmas of 2005, January of 2006 is, kind of, when we over extended our credit. So let's see, 17 to 24 months would put us in January 2008, wouldn't it? The day that banks have to report their capital adequacy happens to be that magical day when a new president gets sworn in. Oh, hold on a second. That actually was kind of funny, wasn't it? Ladies and gentlemen…

Having predicted the coming crisis, he explains how the US should be prepared to go begging for money in the countries where there is a lot of cash. As a large number of these countries have lending policies linked to Shari'ah law, tying lending to moral precepts - not necessarily those of the US - this was going to lead, he claimed, to some pretty difficult times.

I had found this talk on Nova Spivack's blog. Dr Martin certainly had the event and the timing right. Is the explanation also correct? The current debate one finds on TV on these issues seems very superficial.

The Economist has also been going on about the mortgage crisis for a long time. Some people clearly were not listening... Another case of SEP?

Thursday Jul 24, 2008

My Semantic Web BlogRoll

I have not had time to automate my blogroll publication yet; here is the first step down that path. The following are the semantic web blogs I follow closely. I am sure I must be missing many other interesting ones, though I am already way past the point of information overload. (For those in the same position, here are some tips (via Danny).)

AI3:::Adaptive Information - Atom
Mike Bergman on the semantic Web and structured Web
About the social semantic web - RSS
Web 2.0 - what's next?
Bnode - atom
bobdc.blog - RSS
Bob DuCharme's weblog, mostly on technology for representing and linking information.
Bill de hOra - atom
Bill de HOra's blog
captsolo weblog - RSS 1.0
CaptSolo weblog
connolly's blog - RSS
Dan Connolly's blog
Cloudlands - RSS
John Breslin's Blog
Daniel Lewis - RSS
A technological, personal, spiritual, and academic blog.
Dave Beckett - Journalblog - RSS 1.0
RDF and free software hacking
David Seth - RSS
Semantic Web & my backyard
dowhatimean.net - RSS
Richard Cyganiak's Weblog
Elastic Grid Blog - RSS
The ultimate blog about the Elastic Grid solution...
Elias Torres - RSS
I'm working on a tagline. I promise.
Inchoate Curmudgeon - RSS
I'm getting there. What's the rush? It's about the journey, right?
Internet Alchemy - RSS
Seeing the world through RDF goggles since 2007
Kashori - RSS
Kingsley Idehen's Blog Data Space - RSS atom
Data Space Endpoint for - Knowledge, Information, and Raw Data
Les petites cases - Fourre-tout personnel virtuel de Got - RSS
Lost Boy - RSS 1.0
A journal of no fixed aims or direction by Leigh Dodds. If you see him wandering, point him in the direction of home.
Mark Wahl, CISA - RSS
Discussions on organizing principles for identity systems
Michael Levin's Weblog and Swampcast! - RSS
Software development, technobuzz, and everything else.
Minding the Planet - RSS
Nova Spivack's Journal of Unusual News & Ideas
More News - RSS
Nodalities - RSS
From Semantic Web to Web of Data
opencontentlawyer.com - RSS
copyright, content, and you
Perspectives - RSS
Interfaces, web sémantique, hypermédia
Planet Kiwi - RSS
... where all the KiwiKnows is!
Planet RDF - RSS
It's triples all the way down
Planete Web Semantique - RSS
French Semantic Web planet
Raw - RSS 1.0
Danny's linkiness
Rinke Hoekstra - RSS
"Time is nature's way to keep everything from happening at once." - John Wheeler
S is for Semantics - Atom
Dean Allemang's Blog - Check out our new book on the Semantic Web!
Semantic Focus - RSS
On the Semantic Web, Semantic Web technology and computational semantics
Semantic Wave - RSS
News feeds and commentary maintained by semantic web developer Jamie Pitts.
Semantic Web Interest Group Scratchpad - RSS
Semantic Web Interest Group IRC scratchpad where items mentioned and commented on in IRC get collected.
Semantic Web Wire - RSS
Comprehensive News Feed for Semantic Web.
semantic weltbild 2.0 (Building the Semantic Web is easier together) - RSS 1.0
Building the Semantic Web is easier together
SemanticMetadata.net - Atom
Speaking my mind - RSS
The whole is more than the sum
TagCommons - RSS
toward a basis for sharing tag data
TechBrew - RSS
Informative geekery on software and technology
Technical Ramblings - RSS
Ramblings of a GIS Hacker
Thinking Clearly - RSS
Make lots of money through stealth in shadows
W3C Semantic Web Activity News - RSS

I automated the creation of this blogroll by transforming the OPML of my blog reader with the following XQuery:

declare namespace loc = "http://test.org/";

(: declared as xs:string? since some outline elements lack a
   description or version attribute - with a plain xs:string the
   query would fail on those entries :)
declare function loc:string($t as xs:string?) {
             $t
};


<html>
<body>
<dl>
{
   for $outline in //outline
   order by $outline/@title
   return
      (: emit the dt/dd pair directly - a span wrapper is not valid inside a dl :)
      ( <dt><a href="{ $outline/@htmlUrl }">{ loc:string($outline/@text) }</a> - <a href="{ $outline/@xmlUrl }">{ loc:string($outline/@version) }</a></dt>,
        <dd>{ loc:string($outline/@description) }</dd> )
}
</dl>
</body>
</html>

I then had to edit a bit of the generated html by hand to make it presentable.

Thanks to the Oxygen editor for making this really easy to do.

Wednesday Jun 18, 2008

My Mail.app is unstable

Mail.app is getting to be a real pain to work with. This is the 4th time in 2 months that I have had to spend over 4 hours debugging it. As of writing this I can no longer send or receive mail!
It used to just crash, which was useful, because I could use dtrace to find all the files it had opened and just remove the directories in Library/Mail it had last looked at. I could then re-import those folders later.
Since the 10.5.3 update it no longer crashes. A week ago it just spent a huge amount of time thinking, using up over 100% of the cpu (there are two cores, so it can use up to 200%), and then finally recovered - though I had time to study a few chapters of "Semantic Web for the Working Ontologist" before that happened. Today it consumed so much cpu that all other applications became unresponsive. I reniced Mail with

$ sudo renice -20 -p 16410  #where 16410 was the process id of Mail.app at the time
which made it possible to use my shell at least. Then it crashed.

I am clearly not the only one with this problem, as a quick search of the web shows.

Mail.app is really a key application of OSX. If Apple can't get this right, or doesn't have enough resources to dedicate to it, would it perhaps help to open source Mail.app? At least some of us could hunt down the problem and give them a fix. Currently I am not sure what they are doing about this. I will try once more to fix it, but I am really, really close to switching to something else...

2 hours later - Solved: I had a mail folder for an internal Sun apple mailing list. I had suspected there was a problem there, as Mail would crash when I opened that folder. So I went to /Users/hjs/Library/Mail/IMAP-hsXXXXX@mail.sun.net and moved the apple.imapbox to a temporary folder. I then started Mail and it fetched all the threads from the server again. Having the mail on a remote server helps a lot. For one thing, it should make moving to another client a lot easier...

Could it be that I have too many e-mails? The following seems to suggest that I have 273 thousand.

hjs@bblfish:0$ cd Library/Mail
hjs@bblfish:0$ find . -name "*.emlx" | wc
 273761  322242 17719867
The first number is the count of .emlx message files: 273,761 of them. I have one gmail account, my personal imap server and the Sun work imap server, if that helps...

Updates

It is 27 August now, and I have not had any serious crashes since. It could be that the last time around I really cleaned up those broken folders. It could be luck...

Firefox 3 is out

Firefox 3.0 is out. It looks really, really good! Get it here and help set a world record :-)

Thursday Nov 08, 2007

Why Apple Spaces is broken

Spaces: the way I would like to set up my workspaces

I have been using virtual desktops, what Apple now calls Spaces™, since 1995 on X11 Unix, so I have quite a lot of experience with this feature. I know what it needs, and I can tell very clearly that although Apple's implementation is the most beautiful version available, it clearly has not been thought through correctly. As a lot of Apple users will be new to this, they may not immediately understand what is broken, but may come to the conclusion that it is not very useful. So first I have to explain why virtual desktops are useful. Then I can explain why Apple's implementation is broken.

The use case

The reason for developing multiple spaces is to be able to clearly separate one's work. I, for example, have one desktop for Mail and other communication-related activities, one for programming, one for blogging, and one for other tasks such as giving a presentation.

When I read mail, I sometimes need to browse the web to check up on links that people may have sent me. I don't want that to make me jump over to the browser I opened in my development space, where I was reading javadoc. That would both mess up my development environment and switch the context I was in. If no browser is open in my mail environment I would like to just be able to hit ⌘-N and have a new browser open up there. Then, pressing ⌘-⇥ (command-tab) - which should only list applications available in that space, or at least offer the applications available there first - I should be able to switch between my mail and the browser instances open in that space.

Having read my mail - especially the mail from my manager, telling me to stop helping Apple improve its copy of something that was available over 12 years ago on unix, and to get working on the next great ideas - I switch back to my NetBeans space, where I am developing a Semantic Address Book. Here I would like to switch quickly between the applications that are open in that space: NetBeans, my AddressBook, the shell, Safari and Firefox. So at the minimum, I would like the applications present in that space to again come first in the ⌘-⇥ list. And! I would like it that when I switch to Safari to read the docs, I don't get thrown into my communications space.

There is no way I can have only one browser open for all of my work. I need different browsers open for different purposes in each space. The same is true of the shell. Sometimes it may even be true of Mail. Perhaps someone sends me an email relating to a piece of code I am working on. I would like to move that window to my editing environment (easy using F8 of course), in order to be able to switch between it and my editor with ease.

What's broken

So currently it is not possible to work like this with Apple's Spaces™. When switching between applications using ⌘-⇥, Spaces™ throws you across virtual desktops without checking whether a window of that application is already open in your space. Spaces™ always switches you to the virtual desktop that an application was first opened in, or where the first opened window from that application actually is. One cannot use the F9 or F10 Exposé keys either. Even though they only show the applications open in one space, they will still, in some unpredictable way, switch you to a different space. They do this even if you clearly select a window from the space you are working in. So there is no way to switch reliably between applications open in one virtual desktop space, and so there really is no way to separate your different work-related tasks. The way it is set up, you need to have all your browsers in the same space, all your shells in the same space, etc... etc... So really these Spaces™ are not designed around a person's work habits, but around software components. That is the most basic of all User Interface failings.

Updates

30 May, 2008 Many, if not most, of the issues I complained of in this post have been fixed with release 10.5.3 of OSX. It seems usable now. ⌘-⇥ no longer randomly switches between workspaces, which was the biggest problem. John Gruber explains how 10.5.3 fixes Spaces in detail on his blog.

Nov 20, 2007 eliottcable proposes a solution on quintessentially open. I am not convinced that ⌘-⇥ should create new windows in a space by default if there is none there. It should certainly not switch to another space if there is a window in the current space. In any case I find that the F9-F10 Exposé keys are clearly broken, since they do have me jump across spaces, when they never should.

This post received a huge number of readers from daring fireball. Thanks. Dave Dribin has a good write-up on this issue. Some further discussion is developing on the reddit discussion forum.

Friday Jul 20, 2007

5 secrets

It is my turn to get tagged, a fun game of chain linking, where someone tags you and you have to bare yourself by telling 5 things nobody knew. I have been tagged by Peter Reiser, depicted working here in Menlo Park.
So here are 5 things you probably did not know about me:

  1. My mother is Austrian, my father is English; they lived in France, and so did I from the age of 6 to 18. Before that I lived a while in Washington DC, where I learned the alphabet watching Sesame Street. I have first cousins in England, France, Germany, Austria, Italy, and South Africa. I am a Freeman of Newcastle, which means I can graze a cow on the commons there. That side of the family has led to some prestigious people, like Supreme Court Justice Joseph Story, and less prestigious ones, such as Sidney Story, who legislated the New Orleans Storyville prostitution district into being. So I am half European, half American (as I have lived 9 years in the US), though I only have a British passport.
  2. I once lived on the streets of London. At the time I was studying philosophy and was completely penniless. I soon found out that one could live for less than £40 a month if one was part of a housing co-op, and I managed to get into one all the way out in the distant North East, at Tottenham Hale. Needless to say I met a lot of very colorful people along the way. Some of the skills I learned then have remained with me.
  3. At some point, as I was living day to day studying philosophy, I remembered that I knew how to program. I had learnt on a DEC2020 available at my father's office when I was a kid. I was playing with the Rubik's cube at the time, and I thought I would be able to just ask the computer how to solve it for me. I think I was imagining a screen with a big voice that was going to tell me the answer. Jay Wortman took care of showing me a terminal and left me to my own devices. Slowly, in between many games of Zork, I learned my way around the machine. I started learning Basic and was proud of my first program, which consisted of nothing but goto statements. I noticed that it was very difficult to change that program: every time I changed a line, I had to change most of the other goto targets. That's how the discovery of Pascal turned out to be a revelation. But that revelation led me to wonder: what else have I missed?
  4. So somehow I discovered the Centre Mondial Informatique in Paris, which was founded by Jean-Jacques Servan-Schreiber and directed by Nicholas Negroponte (!). The Centre Mondial made available a bunch of computers for anyone who wanted to play with them. There were VAXes with modern-looking vt220s, amazing Lisp machines that could show rotating three-dimensional color shapes on their A4-sized vertical screens, people working on fractals... I even think someone told me about a prototype of a web at that point, and even about open source software. I had learnt Lisp by then. But computing was not on the school syllabus, so all that work was labelled 'a distraction'. I decided to focus on mathematics and philosophy instead.
  5. I had consciously decided to forget about computing, because I knew that if I ever got close to a computer, I would never be able to stop using them. Time had gone by, and I was living poorly on the streets of London, having to make decisions about which was the more worthwhile expense: a bus ticket or a pencil. It occurred to me that asking questions like this was a waste of time. So through a contact at the philosophy department I met some people at the University of Westminster's computing department. After offering to help them out in return for access to their computers, I got a one-week job with the transport studies group there, which earned me £2000 in a little over a week. That was more money than I had ever held in my hand. The decision was easy, though I swore I would find my way back to philosophy somehow. It turns out that those studies were a good investment, because "the web is now philosophical engineering".
So it is now my turn to tag a few people. I will tag Anja Jentzsch and Richard Cyganiak, who appeared in the picture of my previous popular blog post. I can tag Harold Carr, with whom I had a great conversation on the Semantic Web at Jazoon, and who built a very interesting Lisp interpreter for Java that can use the Jena libraries. Since we met whilst speaking with Dean Allemang, I should tag him too. Here is one for Paul Sandoz, who is working on JSR311, the RESTful Java API. Ok, I'll tag Tim Berners-Lee too, just for fun :-).

Friday Jul 13, 2007

The limitations of JSON

A thread on REST-discuss recently turned into a JSON vs XML fight. I had not thought too deeply about JSON before this, but now that I have, I thought I should summarize what I have learnt.

JSON has a clear and simple syntax, described on json.org. As far as I can see there are no semantics associated with it directly, just as with XML. The syntax does make space for special tokens such as numbers, booleans and strings, which of course one automatically presumes will be mapped to the equivalent types: i.e. things that one can add or compare with boolean operators. Behind the scenes a semantics is of course clearly defined by the fact that JSON is meant to be evaluated by JavaScript. In this it differs from XML, which only assumes it will be parsed by XML-aware tools.

On the list there was quite a lot of confusion about syntax and semantics. The picture accompanying this post shows how logicians understand the distinction. Syntax starts by defining tokens and how they can be combined into well formed structures. Semantics defines how these tokens relate to things in the world, and so how one can evaluate, among other things, the truth of a well formed syntactic structure. In the picture we are using the NTriples syntax, which is very simple: a statement is the succession of three URIs, or two URIs and a string, followed by a full stop. URIs are universal names, so their role is to refer to things. In the case of the formula

<http://richard.cyganiak.de/foaf.rdf#cygri> <http://xmlns.com/foaf/0.1/knows> <http://www.anjeve.de/foaf.rdf#AnjaJentzsch> .
the first URI refers to Richard Cyganiak, on the left in the picture; the second URI refers to a special knows relation defined at http://xmlns.com/foaf/0.1/, depicted by the red arrow in the center of the picture; and the third URI refers to Anja Jentzsch, who is sitting on the right of the picture. You have to imagine the red arrow as being real - that makes things much easier to understand. So the sentence above is saying that the relation depicted is real. And it is: I took the photo this February during the Semantic Desktop workshop in Berlin.

I also noticed some confusion as to the semantics of XML. It seems that many people believe it is the same as the DOM or the Infoset. Those are in fact just objectivisations of the syntax. It would be like saying that the example above just consists of three URIs followed by a dot. One could speak of which URI followed which one, and which one was before the dot. One may even speak about the number of letters that appear in a URI. But that is very different from what the sentence is saying about the world, which is what really interests us in day to day life. I care that Richard knows Anja, not how many vowels appear in Richard's name.

At one point the debate between XML and JSON focused on which had the simplest syntax. I suppose XML, with its entity encoding and DTD definitions, is more complicated, but that is not really a clinching point. If syntactic simplicity were an overarching value, then NTriples and Lisp would have to be declared the winners. NTriples is so simple I think one could use the well known, very lightweight grep command line tool to parse it. Try that with JSON! But syntax is of course not what is attractive about JSON to the people who use it - usually JavaScript developers. What is nice for them is that they can immediately turn the document into a JavaScript structure. They can do that because they assume the JSON document has the JavaScript semantics. [1]

But this is where JSON shows its greatest weakness. Yes, the little semantics JSON datastructures have makes them easy to work with. One knows how to interpret an array, how to interpret a number and how to interpret a boolean. But these are very minimal semantics. It is very much pre-web semantics. It works as long as the client and the server, the publisher of the data and the consumer of the data, are closely tied together. Why so? Because there is no use of URIs, universal names, in JSON. JSON has a provincial semantics. Compare this to XML, which gives a place to the concept of a namespace, specified in terms of a URI. To make this clearer let me look at the JSON example from the wikipedia page (as I found it today):

{
    "firstName": "John",
    "lastName": "Smith",
    "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": 10021
    },
    "phoneNumbers": [
        "212 732-1234",
        "646 123-4567"
    ]
}

We know there is a map between something related to the string "firstName" and something related to the string "John". [2] But what exactly is this saying? That there is a mapping from the string firstName to the string John? And what is that to tell us? What if I find somewhere on the web another string, "prenom", written by a French person? How could I say that the "firstName" string refers to the same thing the "prenom" name refers to? This does not fall out nicely.

The provincialism is similar to that which led the xmlrpc specification to forget to put time zones on their dates, among other things, as I pointed out in "The Limitations of the MetaWeblog API". To assume that sending dates around on the internet without specifying a time zone makes sense is to assume that everyone in the world lives in the same time zone as you.
The web allows us to connect things just by creating hyperlinks. So to tie the meaning of data to a particular script in a particular page is not to take on the full thrust of the web. It is a bit like the example above, which writes out phone numbers but forgets the country prefix. Is this data only going to get used by people in the US? And what about the provincialism of using a number to represent a postal code? In the UK postal codes are written out mostly with letters. Now those two elements are just modelling mistakes. But if one is going to be serious about creating a data modelling language, then one should avoid mistakes that are attributable to the idea that strings have universal meaning - as if the whole world spoke English, and as if English were not ambiguous. Yes, natural language can be disambiguated when one is aware of the exact location, time and context of the speaker. But on a web where everything should link up to everything else, that is not and cannot be the case.
That JSON is so tied to a web page should not come as a surprise if one looks at its origin as a serialisation of JavaScript objects. JavaScript is a scripting language designed to live inside a web page, with a few hooks to go outwards. It was certainly not designed as a universal data format.

Compare the above with the following version in Turtle (a subset of N3), which presumably expresses the same thing:

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix : <http://www.w3.org/2000/10/swap/pim/contact#> .

<http://eg.com/joe#p>
   a foaf:Person;
   foaf:firstName "John";
   foaf:family_name "Smith";
   :home [
         :address [
              :city "New York";
              :country "New York";
              :postalCode "10021";
              :street "21 2nd Street";
         ]
   ];
   foaf:phone <tel:+1-212-732-1234>, <tel:+1-646-123-4567>;
.

Now this may require a little learning curve - but frankly not that much - to understand. In fact to make it even simpler I have drawn out the relations specified above in the following graph:

(I have added some of the inferred types)

The RDF version has the following advantages:

  • you can know what any of the terms mean by clicking on them (append the name to the prefix) and doing an HTTP GET
  • you can make statements of equality between relations and things, such as
    foaf:firstName = frenchfoaf:prenom .
  • you can infer things from the above, such as that
    <http://eg.com/joe#p> a foaf:Agent .
  • you can mix vocabularies from different namespaces, as above, just as in Java you can mix classes developed by different organisations. There does not even seem to be a notion of a namespace in JSON, so how would you reuse the work of others?
  • you can split the data about something into pieces. So you can put your information about <http://eg.com/joe#p> at the "http://eg.com/joe" URL, in a RESTful way, and other people can talk about him by using that URL. I could for example add the following to my foaf file:
    <http://bblfish.net/people/henry/card#me> foaf:knows <http://eg.com/joe#p> .
    You can't do that in a standard way in JSON, because it does not have a URI as a base type (weird for a language that wants to be a web language to miss the core element of the web, and yet put so much energy into all these other features such as booleans and numbers!)

Now that does not mean JSON can't be made to work this way, as the SPARQL JSON result set serialisation shows. But it does not do the right thing by default. A bit like languages before Java that did not have unicode support by default: the few who were aware of the problems would do the right things; all the rest would just discover the reality of their mistakes by painful experience.

This does not take away from the major advantage JSON has of being much easier to integrate with JavaScript, which is a real benefit to web developers. It should be possible to get the same effect with a few good libraries. The Tabulator project provides a javascript library to parse rdf, but it would probably require something like a so(m)mer mapping from relations to javascript objects for it to be as transparent to those developers as JSON is.

Notes

[1]
Now procedural languages such as JavaScript don't have the same notion of semantics as the one I spoke of previously. The notion of semantics defined there is a procedural one: namely two documents can be said to have the same semantics if they behave the same way.
[2]
The spec says that an "object is an unordered set of name-value pairs", which would mean that a person could have another "firstName", I presume. But I have also heard people speak of these as hash maps, which only allow unique keys. I am not sure which is the correct interpretation...


Wednesday Jun 06, 2007

RESTful Web Services: the book

RESTful Web Services is a newly published book that should be a great help in giving people an overview of how to build web services that work with the architecture of the Web. The authors of the book are, I believe, serious RESTafarians. They hang out (virtually) on the Yahoo REST-discuss newsgroup, so I know ahead of time that they will most likely never fail on the REST side of things. Such a book should therefore be a great help for people wanting to develop web services.

As an aside, I am currently reading it online via Safari Books, which is a really useful service, especially for people like me who are always traveling and don't have space to carry wads of paper around the world. As I have been intimately involved in this area for a while - I read Roy Fielding's thesis in 2004, and it immediately made sense of my intuitions - I am skipping through the book from chapter to chapter as my interests guide me, using the search tool when needed. As this is an important book, I will write up my comments here in a number of posts as I work my way through it.

What is of course missing in Roy's thesis, which is a high-level abstract description of an architectural style, are practical examples, which is what this book sets out to provide. The advantage of Roy's level of abstraction is that it permitted him to make some very important points without losing himself in arbitrary implementation debates. Many implementations can fit his architectural style. That is the power of speaking at the right level of abstraction: it permits one to say something well, in such a way that it can withstand the test of time. Developers of course want to see how an abstract theory applies to their everyday work, and so a cookbook such as "RESTful Web Services" is going to appeal to them. The danger is that by stepping closer to implementation details, certain choices are made that turn out to be arbitrary, ill conceived, non-optimal or incomplete. The risk is well worth taking if it can help people find their way around more easily in a sea of standards. This is where the rubber hits the road.

Right from the beginning the authors, Sam Ruby and Leonard Richardson, coin the phrase "Resource-Oriented Architecture".

Why come up with a new term, Resource-Oriented Architecture? Why not just say REST? Well, I do say REST, on the cover of this book, and I hold that everything in the Resource-Oriented Architecture is also RESTful. But REST is not an architecture: it's a set of design criteria. You can say that one architecture meets those criteria better than another, but there is no one "REST architecture."

The emphasis on Resources is, I agree with them, fundamental. Their chapter 4 does a very good job of showing why. URIs name Resources. URLs in particular name Resources that can return representations in well defined ways. REST stands for "Representational State Transfer", and the representations transferred are the representations of resources identified by URLs. The whole thing fits like a glove.

Except that where there is a glove, there are two - one for each hand. And they are missing the other glove, so to speak. And the lack is glaringly obvious. Just as important as Roy Fielding's work, just as abstract, and developed by some of the best minds on the web, even in the world, is RDF, which stands for Resource Description Framework. I emphasize the "Resource" in RDF because for someone writing a book on Resource Oriented Architecture to have only three short mentions of the framework for describing resources standardized by no less than the World Wide Web Consortium is just... flabbergasting. Ignoring this work is like trying to walk around on one leg. It is possible. But it is difficult. And certainly a big waste of energy, time and money. Of course, since what they are proposing is so much better than what went on previously - which seems akin to trying to walk around on a gloveless hand - it may not immediately be obvious what is missing. I shall try to make this clear in this series of notes.

Just as REST is very simple, so is RDF. It is easiest to describe something on the web if you have a URL for it. If you want to say something about it - that it relates to something else, for example, or that it has a certain property - you need to specify which property it has. Since a property is a thing, it too is easiest to speak about if it has a URL. So once you have identified, in the global namespace, the property you want to use, you need to specify its value, which can be a string or another object. That's RDF for you. It's so simple I am able to explain it to people in bars within a minute. Here is an example, which says that my name is Henry:

<http://bblfish.net/people/henry/card#me> <http://xmlns.com/foaf/0.1/name> "Henry Story" .

Click on the URLs and you will GET their meaning. Since resources can return any number of representations, different user agents can get the representation they prefer. For the name relation you will get an html representation back if you are requesting it from a browser. With this system you can describe the world. We know this, since it is simply a generalization of the system found in relational databases - except that instead of identifying things with table-dependent primary keys, we identify them with URIs.
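To see the relational analogy concretely, take a row (id=42, name="Henry Story") from a people table in a database at eg.com. Where the database names the row with a primary key that is meaningful only inside its own table, RDF names it with a URI that is meaningful everywhere, and the column names become URIs too. A made-up sketch in Turtle:

@prefix people: <http://eg.com/db/people#> .   # invented vocabulary standing in for the table's columns

# the row with primary key 42, now globally identified
<http://eg.com/db/people/42#row>
    people:name "Henry Story";
    people:city "Fontainebleau" .

Anyone anywhere can now make further statements about that row - something no foreign key can do across database boundaries.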

So RDF, just like REST, is at base very easy to understand, and furthermore the two are complementary. Even though REST is simple, it nevertheless needs a book such as "RESTful Web Services" to help make it practical. There are many dispersed standards out there which this book helps bring together. It would have been a great book had it not missed out the other half of the equation. Luckily this should be easy to fix. And I will do so in the following notes, showing how RDF can help you become even more efficient in establishing your web services. Can it really be even easier? Yes. And furthermore without contradicting what this book says.

http://openid.sun.com/bblfish

That's my new OpenId.

I was able to successfully log onto:

(Using Safari, but not Firefox 2.0.0.4 !?)

And I did not have to invent a new username and password, nor fill out any form other than to enter my id. I did not have to wait for an email confirmation, nor send an email response, nor go to a web verification site. I did not have to add one more password to my keychain.
A really small step, but oh what a useful one!

I can add this info to my foaf file with the simple relation:

<http://bblfish.net/people/henry/card#me> <http://xmlns.com/foaf/0.1/openid> <http://openid.sun.com/bblfish> .
This will come in very useful, one way or another. See the article "foaf and openid" for an example.

Sunday Jun 03, 2007

Al Gore: The Assault on Reason

A little tired yesterday, I walked into a bookshop on University Avenue in Palo Alto, looking for something to take my mind off things. I picked up Al Gore's new book "The Assault on Reason" and started reading it. Less than 24 hours later, I am in San Francisco and have just finished it. It's a good read, covering very important issues of democracy, focusing mostly on US history, the balance of powers, and the danger of ignoring the checks and balances that the founding fathers put in place. Executive power is of course always looking to speed up its ability to act, and finds the restrictions put on it by the legislative and judicial branches a bothersome constraint. In the name of speed of action the executive will have a tendency to pull as much power as it can towards itself. In doing so, of course, it risks detaching itself from reason and committing grave mistakes - which does indeed seem to be what has befallen the current administration a number of times, in very serious ways, the details of which Gore draws out clearly.

The average American still spends 4 hours a day in front of a television. TV is the ultimate one-way communication medium. Programs are broadcast to a wide audience, who have the choice (more now, it is true) to switch to another channel. Switching channels won't give them a voice though, or enable them to participate in forming the message. Similarly, politicians are forced to spend huge amounts of money on 30-second commercials to air their views, which removes them from the public floor of detailed debate and forces them instead onto the road of collecting money from special interests. As a result the public debate is impoverished, in each citizen's house and in the houses of Congress. The public appears to be much dumber than it is, which fosters a cynical view of the country by its leaders.

Thursday May 17, 2007

Webcards: a Mozilla microformats plugin

Last week Jiri Kopsa pointed me to Webcards, a very useful Mozilla extension for microformats. Install it, restart, and Mozilla will then pop up a discreet green bar on web pages that follow the microformats guidelines. So for example on Sean Bechhofer's page one vcard is detected. Clicking on that information brings up a slick panel with more links to other sources on the web - such as delicious, if tags are detected, or in this case linkedin - and a stylish button that will add the vcard to one's address book [1].

Microformats are a really simple way to add a little structure to a web page. As I understand it from our experimentation a year ago [2] in adding this to BlogEd [3], one reason it was successful, even before the existence of such extensions, is that it allowed web developers to exchange css style sheets more easily, and reduced the need to come up with one's own style sheet vocabulary. So people agreeing to name the classes the same way, however far apart they lived, could build on each other's work. As a result a lot of data appeared that can then be used by extensions such as Webcards.
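To give a feel for how little structure is involved: the one vcard detected on a page like Sean's amounts to a handful of property-value pairs, which could just as well be written in RDF using the W3C vCard vocabulary - roughly like this (a sketch; the URI and email here are invented):

@prefix vcard: <http://www.w3.org/2006/vcard/ns#> .

# what the hCard's class="fn" and class="email" markup boils down to
<http://example.org/people/sean#me>
    vcard:fn    "Sean Bechhofer";
    vcard:email <mailto:sean@example.org> .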

Webcards really shows how useful a little structure can be. One can add addresses to one's address book, and appointments to one's calendar, with the click of a button. The publisher gains by using these classes to improve their web site design. So everybody is happy. One downside, as far as structure goes, is that due to the lack of namespace support there is a bottleneck in extending the format: one has to go through the very friendly microformats group, and they have stated that they really only want to deal with the most common formats. So it is not a solution to any and every data need. For that one should look at the eRDF or RDFa extensions to xhtml and html. I don't have an opinion on which is best; perhaps a good starting point is this comparison chart.

The structured web is a continuum, from the least structured (plain text), on through html, to rdf. For a very good but long analysis, see Mike Bergman's article An Intrepid Guide to Ontologies, which covers this in serious depth.

As rdf formats such as foaf and sioc gain momentum, similar, but perhaps less slick, Mozilla extensions have appeared on the web. One such is the Semantic Radar extension. Perhaps Webcards will be able to detect the use of such vocabularies in RDFa- or eRDF-extended web pages too, using technologies similar to those offered by the Operator plugin, as described recently by Elias Torres.

[1] Note this does not work on OSX. I had to save the file to the hard drive and rename it with a .vcf extension before it could be added to Apple's Address Book.
[2] Thanks to the help of Antoine Moreau de Bellaing (no web page?)
[3] I know, BlogEd has not been moving a lot recently. It's just that I felt the blog editor space was pretty crowded already, and that there were perhaps more valuable things to do on the structured data front at Sun.

Thursday Apr 19, 2007

Supply Networks

The Supply Chain is dead. Long live the Supply Network!

Ok, that's my take on a very interesting presentation given by two young researchers at the Semantic Desktop Workshop in Berlin, which I attended last week (as attested by numerous photos posted on flickr). In their Amerigo presentation, Sonja Pajkovska-Goceva and Fanny Kozanecki made a very clear point: in the interconnected world we live in, speaking of supply chains is old hat. Companies have complex interrelated supply networks, where one supplier depends on another, which in turn depends on a further one, and some of which may depend on each other. The quality of the network is important in many different ways, perhaps the most topical being to do with the environment. It is no good outsourcing your pollution to third world countries; a company is responsible not only for its own pollution, but for the pollution of all the players in its supply network: that is, its direct suppliers, but also the suppliers of those suppliers, and so on. Being green would be too easy if one could just outsource the dirty part of one's business to Africa...

So in their presentation Sonja and Fanny worked on putting together an ontology for supply networks using the Protege ontology editor. Starting with very little knowledge of the Semantic Web, they were able to put together a small ontology quite quickly as a proof of concept. Protege is still rough around the edges at times. There are other editors out there, such as SWOOP or TopBraid Composer, but none of them are as slick as the tools Java programmers are used to, such as NetBeans or IntelliJ. Still, that they got this far is a good sign of how far we have come. (Btw, I still prefer to edit ontologies by hand using N3 and vim, but I only recommend that with care.)
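To give a flavour of the modelling involved (the names below are my own invention, not theirs): the key step is declaring the supply relation transitive, so a reasoner can answer responsibility questions over the whole network rather than just the first link:

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sn:   <http://example.org/supplynet#> .   # invented namespace

sn:suppliesTo a owl:ObjectProperty, owl:TransitiveProperty;
    rdfs:domain sn:Company;
    rdfs:range  sn:Company .

# if the smelter supplies the parts maker, and the parts maker supplies
# the car maker, a reasoner infers sn:AcmeSmelting sn:suppliesTo sn:CarMaker -
# the smelter's pollution is in the car maker's supply network
sn:AcmeSmelting sn:suppliesTo sn:PartsCo .
sn:PartsCo      sn:suppliesTo sn:CarMaker .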

Tools aside, having developed such an ontology, it is easy to see how in the future every product a company produces will have its own URL, describing what it is made of (further URLs), where it was shipped, and to whom. You think we are being open now? Wait until every product, and every action, has one too! What we have now is like a grimy yellowed window compared to the transparency to come.
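
Continuing with the made-up vocabulary sketched above (and adding invented :madeOf and :shippedTo properties), dereferencing such a product URL might return something like this - all urls here are fictional:

@prefix : <http://example.org/supply#> .

<http://widgets.example.com/products/1234> a :Product ;
    :madeOf <http://steel.example.org/parts/88> ,      # each component has its own url...
            <http://plastics.example.net/parts/7> ;    # ...published by its own producer
    :shippedTo <http://retail.example.com/> .          # and shipments are on record too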

The Semantic Web is clickable data.

Wednesday Mar 21, 2007

James Gosling on Web N

James Gosling had a couple of slides on Web N during his presentation on the Java Platform. Is it "a piece of jargon", as Tim Berners-Lee is quoted as saying? Well, James seems to agree in part with that assessment. It is a lot of hype for what seems to be a very simple thing: just different user interfaces on ways of storing data on servers. The one consistent similarity of these services, he points out in the next slide, is the way they build communities, using the input of millions to create services that no single organization could have provided.

But in that respect, how does that differ from projects such as Linux, which I was using as my desktop OS in the 90s? That was a huge piece of engineering developed on the internet, using the web and other tools, in a communal fashion. How does that differ from services such as imdb, the largest online database of films, which I was happily using ten years ago, and whose whole content was updated by its users? Is it that improvements in the web interface are making it easier and easier for people to contribute content? Partly so. If adding photos to a flickr account forced one to fetch a new page for every change, it would be a lot less appealing. But how much then do bandwidth improvements have to do with this? Services such as flickr would have been unbearable on the early web. Certainly YouTube would have gotten nowhere, even before taking into account the difficulty of editing videos on 400MHz machines. So is Web 2.0 a technical thing, or is it something else?

I'll agree that Web 2.0 is a social phenomenon, in more ways than one. It is a meme that also has a psychological dimension. People who thought that by 2000 they had understood all there was to the web (the .com aspect), never quite grokking the huge open source wave, were the ones who declared the Web bubble burst. As more and more amazing things continued happening after the .com bust, they needed a way to change their tune without feeling that they had gotten something wrong. Hence Web 2.0. The web just keeps evolving. It's always more than you thought it could be.

Another thought: if we can trace Web 2.0 all the way back to Open Source programming, then my feeling is that this is where one should look to sow the seeds of Web 3.0. The Open Source community is full of small island projects. True, they can all exchange code with each other, but the interaction between the groups could be a lot better, just as the interaction between Web 2.0 sites could be. If one could make the interactions between these communities a lot more fluid, one would certainly unleash a whole new wave of energy. This is why I am so enthusiastic about Baetle, the bug ontology we are developing, which should be an important element in helping open source projects work together.
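
As a very rough sketch of the kind of data a shared bug ontology makes exchangeable across trackers - the namespace and all the term names below are illustrative placeholders, not Baetle's actual terms:

@prefix bug: <http://example.org/baetle#> .        # placeholder, not the real namespace
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://bugs.project-a.org/issues/1337> a bug:Issue ;
    bug:summary "NullPointerException when saving an empty post" ;
    bug:reportedBy [ a foaf:Person ; foaf:name "Jane Hacker" ] ;
    # cross-tracker links like this one are what a shared vocabulary makes possible
    bug:duplicateOf <http://tracker.project-b.org/issues/42> .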

The next generation of the Web is not going to be obvious: how could it be? If it were obvious it would, technical issues aside, already be here. The people most apt to move those technical issues aside are of course going to be developers themselves. As they see the benefits, these will be distilled into something useful and easy to understand for everyone else.

Thursday Mar 01, 2007

OpenId and SAML

Paul Madsen illustrates the relation between OpenId and SAML

Having looked at OpenId, I got to wondering a little how it links in with other technologies such as SAML.

One nice thing is that it looks like we can have one URL identifier and use both services. Pat Patterson recently showed, in a nice video, how one can use the same id with both OpenId and SAML. His solution is simply to add a meta tag in the head of the html, like this:

<meta http-equiv="X-XRDS-Location" content="http://patlinux.red.iplanet.com/superpat/yadis.xml">
This points to a YADIS file, which lists the various types of identification services one wishes to use with one's id. [0] The YADIS file links to a SAML file with identification information and the url of the authentication server. From there on, the process looks quite similar to that of OpenID, except that the information passed to and fro consists of more complex xml documents.

So we have two more indirections than the simplest OpenId example, or only one more than Sam Ruby's nice OpenId howto[1]. So what does one gain? Well, SAML is understood to be enterprise ready and proven to work with very large installations, which are the use cases it set out to solve. This of course comes at the cost of more complexity, which may or may not be covered by open source projects such as OpenSSO.
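
Footnote [1] below suggests how rdf could remove even those indirections: instead of separate YADIS and SAML files, the resource at the identifier URL could describe all of its authentication services directly. A hedged sketch in N3, using an entirely made-up auth: vocabulary and fictional urls:

@prefix auth: <http://example.org/auth#> .   # hypothetical vocabulary, for illustration
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/people/pat> a foaf:Person ;
    auth:openidServer <http://openid.example.org/server> ;   # where OpenId consumers authenticate
    auth:samlAuthority <http://sso.example.org/saml> .       # where SAML service providers go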

Some interesting links I came across doing this research:

[0] It also shows a horrible OASIS urn. Why does OASIS always use urns instead of urls?
[1] Notice how this could have been cut down to no indirection at all with the use of rdf vocabularies. The YADIS and SAML files could have been combined, and they could in turn have been combined with the information at the openid resource...

Tuesday Feb 27, 2007

OpenId for blogs.sun.com ?

OpenId is clearly growing in importance, as the swelling volume of posts on it shows, and big players such as AOL and Microsoft are joining the party. The technical introduction for web developers on the openid wiki will help make sense of the following discussion:

Given that Web 2.0 is so very much about micro killer apps, single sign-on is an absolute necessity. As Paul Diamond notes, web 2.0 has created a huge number of services that need to be integrated. Indeed, there are services (e.g. Convinceme) I have not used recently, and blogs I have not responded to, just because I did not want to go through yet another sign-on process.

Having OpenId on blogs.sun.com would allow many nice features. Once someone had been approved to comment on a blog, they could be cleared for every other comment they make without requiring any further approval. One could generalize this to allow comments from anyone who had ever been approved by someone on blogs.sun.com, or from all of one's friends as specified in a foaf file.

Danbri points to Doxory.com (tag line: life by committee) as one such service that combines openid information with a foaf file to provide an interesting service. Danny Ayers points to videntity.org as one of the many openid identity registrars that offer you a foaf file. Open Data Spaces, which is built on Virtuoso, uses the same url for the openid and the foaf file, and furthermore that URL is editable using WebDAV!

Having read the technical introduction carefully, I think the meshing with foaf could simply be accomplished like this:
The foaf url can simply be the openid. According to the current OpenId specs, the id has to be able to return a text/html representation, so that the consumer (the blog requiring authentication, for example) can search the html for the openid.server link relation. The foaf id would then also be able to return an rdf/xml representation to clients on request. This would save the end user from having to learn two different ids, and it would be a way of authenticating the foaf file on top of it. In this scenario the html representation should have a foaf link relation pointing back to the same url.

Otherwise it would probably be useful to have a sioc property to link to an openid.
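
The foaf vocabulary does have a candidate here: the inverse functional foaf:openid property, which points from a person to their openid page. A minimal sketch of such a dual-purpose profile in N3 (all urls made up):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# the document at /henry serves html to browsers and rdf to foaf-aware clients
<http://example.org/henry#me> a foaf:Person ;
    foaf:name "Henry" ;
    foaf:openid <http://example.org/henry> ;                 # the openid page doubles as the foaf document
    foaf:knows [ foaf:openid <http://example.org/jane> ] .   # a friend one could whitelist for comments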

Thursday Feb 15, 2007

sorry for the updates!

Roller, the engine behind this weblog, does not give one the ability to mark a change to a post as minor. That is, every change, however minor - be it just adding a new tag to a post - changes the updated time stamp in the associated rss 1.0 or atom feed. For people reading this post on my official blog site this won't be noticeable at all. But those reading it with aggregators such as JNN or BlogBridge, or on web aggregators such as Planet RDF, will be forced either to never take account of updates and only order posts by created time stamps, or to suffer what may look like SPAM behavior. So this is a plea for forgiveness from my Planet RDF readers.

To solve this problem, Roller's editing window should have a small checkbox labeled "minor update". Ticking that checkbox would leave the updated time stamp as is, though the change could still be noted by an edited time stamp. Having an edited time stamp is not at all necessary, but it would make a couple of things easier, such as helping clients synchronize with minor edits on the server without disturbing run of the mill readers. The app:edited element was added to the Atom Publishing Protocol for just this reason, in fact.

Update: I have created feature enhancement request 1358 on the Roller JIRA site. Please add your comments or support for this feature there. It should require only a very small amount of coding.

Monday Feb 12, 2007

Apple Address Book lost

My Apple Address Book disappeared today. It's just empty. Gone!
Luckily I have a backup somewhere, but it will be a few weeks out of date. So that means I will need to add all the great people I met in Zürich again. I really have other things to do...
I hope this is not a sign that my hard drive is about to fail once more! This is when I wish that ZFS were already released for OSX. Perhaps I should get the developer previews...

The last few things I did before this happened: I tunneled into the Sun intranet using Cisco's VPN, and there I added a couple of vcards to my Address Book. Soon afterwards my email, Address Book and calendar froze. I disconnected from the VPN, but after force quitting them I could no longer restart them. So I tried to reboot - the first reboot in a couple of weeks...
