Friday Feb 13, 2009

I ♡ NetBeans 6.7!

picture of NetBeans 7 Daily build 200902061401

As I was developing my recently released foaf+ssl server, to demonstrate how to create distributed, secure yet open social networks, I stumbled across a daily build of NetBeans 7 (build 200902061401) that is stable, beautiful, and has really helped me get my work done. NetBeans 7 is really going to rock!

Update: What I called NetBeans 7 is now called NetBeans 6.7.

Here is a list of some of the functionality that I really appreciated.

Maven Integration

Hallelujah for Maven integration! I got going on my project by setting up a little Wicket project, which I easily adapted to include the Sesame semantic web libraries and more (view pom).
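As a sketch of what that looks like in the pom (the coordinates and versions below are illustrative, not copied from my actual pom, so check the real one):

```xml
<!-- Illustrative dependency section; the group/artifact ids and
     versions are examples, not taken from the project's actual pom -->
<dependencies>
  <dependency>
    <groupId>org.apache.wicket</groupId>
    <artifactId>wicket</artifactId>
    <version>1.3.5</version>
  </dependency>
  <dependency>
    <groupId>org.openrdf.sesame</groupId>
    <artifactId>sesame-repository-api</artifactId>
    <version>2.2.1</version>
  </dependency>
</dependencies>
```

Maven resolves the transitive dependencies from these declarations on its own.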

The nicest part of this is that it then becomes extremely easy to link the source and the javadoc together. Two commands, which really should be menu options, have finally made it possible for me to work with NetBeans.

# get the javadoc
$ mvn dependency:resolve -Dclassifier=javadoc
# get sources
$ mvn dependency:sources

This simple thing used to be a nightmare, especially as the number of jars one's project depended on increased. The Sesame group have split their jars up nicely, so that one can use a subset of them, but the way NetBeans was set up it became a truly astounding pain to link those to the source. And what is an open source IDE worth if it can't help you browse the source code and see its documentation easily?

Now I don't think Maven is in any way the final word in project deployment. My criticism, in short, is that it is not RESTful in a few ways, not least of which is that it fails to use URLs to name things and makes the cache the central element. It is as if web architecture had been turned upside down, with people naming things by the caches in which they were located rather than by their Universal Resource Locator. My guess is that as a result things are a lot less flexible than they could be. As Roy Fielding pointed out recently, REST APIs must be hypertext driven. Software is located in a global information space, so there is no good reason in my opinion not to follow this precept.

Clearly though this is a huge improvement!

A better file explorer

I have sworn a few times at the previous versions of the NB file manager! Even more so when I had to use it to tie the javadoc to the source code - at that point it became a scream. Finally we have a command line File Explorer with tab completion. This is so beautiful I have to take a picture of it: file explorer

We use the keyboard all the time, and one can get many things done much faster that way. Navigating the file system with the keyboard is just much nicer. So why oh why is it still impossible to use the up and down arrow keys in the classic view when some files are greyed out? (Writing this I noticed there seems to be no way to get back from the classic view to the new command line view - please make it possible to get back!)

GlassFish 3 Integration

Well, it is a real pleasure to work with a web server that loads a war in half a second. I use hardly any of the J2EE features, so it's a good thing those don't get loaded.

I tried the HTTP Server Monitor, and that could be useful if it were more informative. In RESTful development it is really important to know the response codes (303, etc.) so that one can follow the conversations between the client and the server. Currently that panel breaks things up too much into baby steps: just as with the File Explorer, there should be both an easy UI to a feature and an advanced mode. I'd like to see the full, pure, unadulterated content going over the wire, highlighted perhaps to make it easy to find things. (It turns out this has been filed as feature request 36706.)
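To make concrete the sort of conversation I want the monitor to show, here is a self-contained sketch (the resource paths are invented) of the 303 See Other pattern, with a client that does not hide the redirect:

```python
import threading
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical resource names, purely for the demo.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/people/henry":
            # 303 See Other: the thing asked about is described elsewhere
            self.send_response(303)
            self.send_header("Location", "/people/henry/card")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"card data")

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not follow redirects, so the raw status code is visible
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/people/henry")
resp = conn.getresponse()
location = resp.getheader("Location")
resp.read()
print(resp.status, location)       # 303 /people/henry/card

conn2 = http.client.HTTPConnection("127.0.0.1", port)
conn2.request("GET", location)
resp2 = conn2.getresponse()
body2 = resp2.read().decode()
print(resp2.status, body2)         # 200 card data
server.shutdown()
```

This is exactly the two-step conversation a monitor should display raw, rather than collapsing it into one "page loaded" event.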

GlassFish integration really helped me develop and deploy my foaf+ssl service.

User Interface

As you can see from the main picture, the NetBeans UI seems to be going through a big transformation. Gone are some of the huge fat aqua buttons. The pieces are laid out in similar ways to NB 6.5, but the result is a lot more elegant. A welcome change.

There is a very useful search bar at the top right of NB 7 now, which proved very helpful for finding documentation, Maven repositories, and many other things; it came to my rescue a couple of times in my project.

One simple thing I would like would be to have a menu on each of the windows to open a file in its default OS viewer. So when I edit HTML which is a pleasure to do in NB, I would like to be able to quickly view that code in Firefox, Safari or Opera. Other XML files may have their default viewers, and so I think this is quite generalisable. In any case it should be easy to copy the file path of an open window, as one often has to do external processing of it. For files that are located on the internet, it would be great to be able to get their URL. This would help when chatting to people about source code one is working on for example.


  • There are IntelliJ key bindings now. I really needed this a year or so ago, as I was switching between the IDEs. I have forgotten them now so it's less of a problem for me, but it will be very important for people switching between the IDEs.
  • I think this was part of NB6, but being able to browse the local history of source code is a really great feature. (I noticed that this does not diff html or xml for the moment)
  • Geertjan's Wicket integration Module partly works on this daily build. You may need to start off with NB 7 milestone 1 to get going, as the module still seemed to be fully functional there.
  • I find this daily build needs restarting every day, as it seems to slow down after a while, perhaps using up a lot of memory.

Where is this going?

Well those are the features that really stood out for me. And I am very happy to work with NB now.

I still think that the next big step, for NB 8 perhaps, should be the webification of the IDE. I think there is a huge amount to gain by applying Web Architecture principles to an IDE; then the Net in NetBeans would fully reveal its meaning.

Wednesday Jun 25, 2008

NetBeans and Semantic Wikis

The Kiwi team is meeting at the Prague Sun offices for the next few days to discuss the roadmap of this cutest of all semantic wikis. I completely empathize with Jana Herwig, when she writes:

An IT project is like herding cats, they say - in our case, we’ll be herding kiwis, and if we can enjoy it only half as much as these guys, I’ll be fine:-)

And illustrates it with this video:

(Video clip - this fallback text should only display if Shockwave does not work. Why does it not do the right thing in Firefox 3? If no video appears, try it directly on YouTube.)

I like to think of us developers as being the cats, and the kiwis as the things we want to herd. Tasty :-) That would indeed also explain why I am still in France - herding cats ain't easy.

Why am I still in France and not tasting free beer in Prague? Well, last month's conferences in California have given me a conference overdose, from which I am still recovering. Also, I have been speaking so much about the Address Book that I really need to sit down, roll up my sleeves, and just work on it. I did try to write something on topic yesterday, relating the semantic web and NetBeans - since Prague is the center of NetBeans development. I hope that excuses me somewhat.

A list of on the spot updates to this meeting can be found on kiwi planet.

Tuesday Jun 24, 2008

Webifying Integrated Development Environments

IDEs should be browsers of code on a Read Write Web. A whole revolution in how to build code editors is I believe hidden in those words. So let's imagine it. Fiction anticipates reality.

Imagine your favorite IDE, a future version of NetBeans perhaps, or IntelliJ, which would make downloading a new project as easy as dragging and dropping a project URL onto your IDE. The project home page would point to a description of the location of the code and of the dependencies of this project on other projects, themselves described via URL references and set up in a similar manner. Let's imagine further: instead of downloading all the code from CVS, think of every source code document as having a URL on the web. (Subversion is in fact designed like this, so this is not so far-fetched at all.) And let's imagine that NetBeans thinks about each software component primarily via this URL.
Since every piece of code and every library has a URL, the IDE would be able to use the RESTful architectural principles of the web. A few key advantages of this are:

  • Caching: a key feature of web architecture is the ability to cache information on the network or locally without ambiguity. This is how your web browser works (though it could work better). To illustrate: once a day Google changes its banner image. Your browser, and every browser on earth, only fetches that picture once a day, even if you do 100 searches. Does Google serve one image to each browser? No! Numerous caches (company, country, or other) cache that picture and send it to the browser without the request going all the way to the search engine, reducing the load on Google's servers very significantly.
  • Universal names: since every resource has a URL, any resource can relate in one way or another to any other resource wherever it is located. This is what enables hypertext and what is enabling hyperdata.
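The caching point can be sketched in a few lines. This toy models only the validation logic - the URL, the ETags and the "server" are all simulated - but it is the same contract a browser cache implements with If-None-Match / 304 Not Modified:

```python
# Toy validation cache; everything "remote" here is simulated.
SERVER = {"http://example.org/banner.png": ("v1", b"PNG bytes")}

def fake_fetch(url, etag):
    """Stands in for an HTTP GET with an If-None-Match header."""
    current_etag, body = SERVER[url]
    if etag == current_etag:
        return 304, current_etag, None   # Not Modified: no body sent
    return 200, current_etag, body

class UrlCache:
    def __init__(self, fetch):
        self.fetch = fetch
        self.store = {}        # url -> (etag, body)
        self.full_fetches = 0  # how many times a body crossed the "wire"

    def get(self, url):
        etag, body = self.store.get(url, (None, None))
        status, new_etag, new_body = self.fetch(url, etag)
        if status == 304:
            return body        # reuse the local copy
        self.full_fetches += 1
        self.store[url] = (new_etag, new_body)
        return new_body

cache = UrlCache(fake_fetch)
first = cache.get("http://example.org/banner.png")
second = cache.get("http://example.org/banner.png")
print(cache.full_fetches)  # 1: the second request was answered from cache
```

Because the cache key is the universal name of the resource, any number of tools on one machine could share the same store.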

Back to the IDE. So now that all code, all libraries, can be served up RESTfully in a Resource Oriented Architecture what does this mean to the IDE? Well a lot. Each may seem small, but together they pack a huge punch:
  • No need to download libraries twice: if you have been working on open source projects at all frequently you must have noticed how often the same libraries are found in each of the projects you have downloaded. Apache logging is a good example.
  • No need to download source code: it's on the web! You don't therefore need a local cache of code you have never looked at. Download what you need when you need it (and then cache it!): the Just in Time principle.
  • Describe things globally: since you have universal identifiers, you can now describe how source code relates to documentation, to people working on the code, or to anything else, in a global way that will be valid for all. Just describe the resources. There's a framework around just for that, which is very easy to use with the right introduction.
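As a sketch of such a global description (the project URLs are invented; the DOAP and FOAF vocabularies are real, though a full description would need more than this):

```turtle
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Everything is named by URL, so these statements are valid anywhere,
# not just on one developer's file system.
<http://example.org/myproject#project> a doap:Project ;
    doap:homepage <http://example.org/myproject/> ;
    doap:repository [ doap:location <http://example.org/svn/myproject/trunk/> ] ;
    doap:maintainer [ a foaf:Person ; foaf:name "Jane Hacker" ] .
```

Anyone fetching that one document can relate the code, its repository and its maintainers without any local setup.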

The above advantages may seem rather insignificant. After all, real developers are tough. They use vi. (And I do.) So why should they change? Well, notice that they also use Adobe Air or Microsoft Silverlight. So productivity considerations do in fact play a very important role in the software ecosystem.
Don't normal developers just work on a few pieces of code? Well speaking for myself here, I have 62 different projects in my /Users/hjs/Programming directory, and in each of these I often have a handful of project branches. As more and more code is open source, and owned and tested by different organizations, the number of projects available on the web will continue to explode, and due to the laziness principle the number of projects using code from other projects will grow further. Already whole operating systems consisting of many tens of thousands of different modules can be downloaded and compiled. The ones I have downloaded are just the ones I have had the patience to get. Usually this means jumping through a lot of hoops:

  1. I have to find the web site of the code, and I may only have a jar name to go by. So Google helps. But that is a whole procedure in itself that should be unnecessary. If you have an image in your browser you know where it is located by right-clicking over it and selecting the URL. Why not so with code?
  2. Then I have to browse a web page, which may not be written in my language, and find the repository of the source code.
  3. Then I have to find the command line to download the source code, or the command in the IDE and also somehow guess which version number produced the jar I am using.
  4. Once downloaded, and this can take some time, I may have to find the build procedure. There are a few out there. Luckily ant and maven are catching on. But some of these files can be very complicated to understand.
  5. Then I have to link the source code on my local file system to the jar on my local file system my project is using. In NetBeans this is exceedingly tedious - sometimes I have found it to be close to impossible even. IntelliJ has a few little tricks to automate some of this, but it can be pretty nasty too, requiring jumping around different forms. Especially if a project has created a large number of little jar files.
  6. And then all that work is only valid for me. Because all references are to files on my local file system, they cannot be published. NetBeans is a huge pain here in that it often creates absolute file URLs in its properties files. By replacing them with relative URLs one can publish some of the results, but at the cost of copying every dependency into the local repository. And working out what is local and what is remote can take up a lot of time. It will work on my system, but not on someone else's.
  7. Once that project is downloaded, one may discover that it depends on yet another project, and so we have to go back to step 1.

So doing the above is currently causing me huge headaches even for very simple projects. As a result I do it a lot less often than I could, missing valuable opportunities as a result. Each time I download a project in order to access the sources to walk through my code and find a bug, or to test out a new component I have to do all that download rigmarole described above. If you have a deadline, this can be a killer.

So why do we have to tie together all the components on our local file system? Because the IDEs are not referring to the resources with global identifiers. The owner of the junit project should say somewhere, in his doap file perhaps, that:

   @prefix java: <> . #made this up
   @prefix code: <> .

   <> a java:Jar;
         code:builtFrom <> .

   #what would be needed here needs to be worked out more carefully. The point is that we don't
   #at any point refer to any local file.

Because this future IDE we are imagining together will know that it has stored a local copy of the jar somewhere on the local file system, and because it will know where it placed the local copy of the source code, it will know how the cached jar relates to the cached source code, as illustrated in the diagram above. Just as you do not have to do any maintenance, when you click on a link in your web browser, to find out where the images and html files are cached on your hard drive and how one resource (your local copy of an image) relates to the web page, so we should not have to do any of this type of work in our Development Environment either.

From here many other things follow. A couple of years ago I showed how this could be used to link source code to bugs, to create a distributed bug database. Recently I showed how one could use this to improve build scripts. Why even download a whole project if you are stepping through code? Why not just fetch the code that you need when you need it from the web? One HTTP GET at a time. The list of functional improvements is endless. I welcome you to list some that you come up with in the comments section below.

If you want to make a big impact in the IDE space, that will be the way to go.

Thursday Apr 17, 2008

KiWi: Knowledge in a Wiki

KiWi logo

Last month I attended the European Union KiWi project startup meeting in Salzburg, to which Sun Microsystems Prague is contributing some key use cases.

KiWi is a project to build an open source semantic wiki. It is based on the IkeWiki [don't follow this link if you have Safari 3.1] Java wiki, which uses the Jena Semantic Web framework, the Dojo toolkit for the Web 2.0 functionality, and any one of the databases Jena can connect to, such as PostgreSQL. KiWi is in many ways similar to Freebase in its hefty use of JavaScript and its emphasis on structured data. But instead of being a closed source platform, KiWi is open source and builds upon the Semantic Web standards. In my opinion it currently overuses JavaScript features, to the extent that all clicks lead to dynamic page rewrites that do not change the URL of the browser page. This I feel is unRESTful, and the permalink link in the socialise toolbar to the right does not completely remove my qualms. Hopefully this can be fixed in this project. It would be great also if KiWi could participate fully in the Linked Data movement.

The meeting was very well organized by Sebastian Schaffert and his team. It was 4 long days of meetings that made sure that everyone was on the same page, understood the rules of the EU game, and most of all got to know each other (see kiwiknows tagged pictures on flickr). Many thanks also to Peter Reiser for moving and shaking the various Sun decision makers to sign the appropriate papers and dedicate the resources for us to be part of this project.

You can follow the evolution of the project on the Planet Kiwi page.

Anyway, here is a video that shows the resourceful kiwi mascot in action:

Wednesday Feb 06, 2008

replacing ant with rdf

Tim Boudreau just recently asked "What if we built Java code with...Java?". Why not replace Ant or Maven XML build documents with Java (Groovy/JRuby/Jython/...) scripts? It would be a lot easier for Java programmers to write, and much easier for them to understand too. Why go through XML when things could be done more simply in a universal language like Java? Good question. But I think it depends on what types of problem one wants to solve. Moving to Java makes the procedural aspect of a build easier to program for a certain category of people. But is that a big enough advantage to warrant a change? Probably not. If we are looking for an improvement, why not explore something really new, something that might resolve some as yet completely unresolved problems at a much higher level? Why not explore what a hyperdata build system could bring us? Let me start to sketch out some ideas here, very quickly, because I am late on a few other projects I am meant to be working on.

The answer to software becoming more complicated has been to create clear interfaces between the various pieces, and to have people specialise in building components to those interfaces. It's the "small is beautiful" philosophy of Unix. As a result, though, as software complexity builds up, every piece of software requires more and more pieces of other software, leading us from a system of independent software pieces to networked software. Let me be clear. The software industry has been speaking a lot about software containing networked components and being deployed on the network. This is not what I am pointing to here. No, I want to emphasise that the software itself is built of components on the network. That is, we increasingly need a networked build system. This should be a big clue as to why hyperdata can bring something to the table that other systems cannot: because RDF is a language whose pointer system is built on the Universal Resource Identifier (URI), it eats networked components for breakfast, lunch and dinner. (See my Jazoon presentation.)

Currently my subversion repository contains a lot of lib subdirectories full of jar files taken from other projects. Would it not be better if I referred to these libraries by URL instead - the URL they can be fetched from with an HTTP GET, of course? Here are a few advantages:

  • it would use up less space in my SubVersion repository. A pointer just takes up less space than an executable in most cases.
  • it would use up less space on the hard drives of people downloading my code. Why? Because I refer to the jar via a universal name, a clever IDE can reuse the locally cached copy already downloaded for another tool.
  • it would make setting up IDEs a lot easier. Again, because each component now has a universal name, it will be possible to link up jars to their source code once only.
  • the build process, describing as it does how the code relates to the source, can be used by IDEs to jump to the source (also identified via URLs) when debugging a library on the network. (see some work I started on a bug ontology called Baetle)
  • Doap files can then be used to tie all these pieces together, allowing people to just drag and drop projects from a web site onto their IDE, as I demonstrated with NetBeans
  • as IDEs gain knowledge of which components are successors to which other components from such DOAP files, it is easy to imagine them developing RSS-like functionality, scanning the web for updates to your software components and alerting you to those updates, which you can then test out quickly yourself.
  • The system can be completely decentralised, making it a Web 3.0 system, rather than a Web 2.0 one. It should be as easy as placing your components and your RDF file on a web server served up with the correct mime types.
  • It will be easy to link up jars or source code ( referred to as usual by URLs ) to bugs (described via something like Baetle ). Making it easy to describe how bugs in one project depend on bugs in other projects.

So here are just a few of the advantages that a hyperdata based build system could bring. They seem important enough in my opinion to justify exploring this in more detail. Ok. Well, let me try something here. When compiling files one needs the following: a classpath and a number of source files.

@prefix java: <> .

_:cp a java:ClassPath;
       java:contains ( <> <> ) .

_:outputJar a java:Jar;
       java:buildFrom <src>;
       java:classpath _:cp ;
       :pathtemplate "dist/${date}/myprog.jar";
       :fullList <outputjars.rdf> .

If the publication mechanism is done correctly the relative URLs should work on the file system just as well as they do on the http view of the repository. Making a jar would then be a matter of some program following the URLs to download all the pieces (if needed), put them in place and use that to build the code. Clearly this is just a sketch. Perhaps someone else has already had thoughts on this?
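A "program following the URLs" could start out as small as this sketch. Everything here is invented for illustration - the dependency URLs, the cache layout, the download function - but it shows the one-cache-entry-per-URL idea:

```python
import hashlib
import pathlib
import tempfile

# Hypothetical dependency URLs; a real list would come from the
# project's RDF description.
DEPENDENCIES = [
    "http://repo.example.org/org/openrdf/sesame.jar",
    "http://repo.example.org/org/apache/logging.jar",
]

def cache_path(root, url):
    # One canonical local location per URL: no per-project lib/ copies.
    return root / hashlib.sha1(url.encode()).hexdigest()

def materialise(root, urls, download):
    """Ensure every URL-named jar exists locally; return the classpath."""
    classpath = []
    for url in urls:
        path = cache_path(root, url)
        if not path.exists():        # fetch once, shared by all projects
            path.write_bytes(download(url))
        classpath.append(path)
    return classpath

root = pathlib.Path(tempfile.mkdtemp())
downloaded = []

def fake_download(url):              # stands in for an HTTP GET
    downloaded.append(url)
    return b"jar bytes"

cp_project_a = materialise(root, DEPENDENCIES, fake_download)
cp_project_b = materialise(root, DEPENDENCIES, fake_download)
print(len(downloaded))  # 2: the second project reused the cached jars
```

Hanging the cache off the URL, rather than off the project, is the whole trick: the second project gets its classpath without touching the network.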

Friday Oct 05, 2007

Doap Bean available

I have just made the NetBeans Doap Bean available on the plugin portal. Just download it onto your desktop and install it in a version of NetBeans 6 (check Tools > Plugins in the menu).

This is the module I demonstrated at James Gosling's 'fun things' presentation on NetBeans day in San Francisco. I have updated the code to make it easy to understand for people who wish to emulate and enhance it. It is easy to do that. Install the plugin, and go to the project. Then drag the blue button next to the URL

from your browser (I have checked that it works with Safari and Firefox on OSX) onto the DOAP button on the toolbar. This will fetch the information from the web page and pop up a window with a human readable representation of the RDF. This window should look like this:

window describing the so(m)mer project

Clicking on the other tabs will show you the original RDF/XML or an easier to read Turtle representation of the data. It is really important to show these tabs so that you can distinguish good from bad doap. Of course one can also go to the W3C Validator for an independent opinion.
In any case, if the source code is available via a CVS or Subversion repository, you should be able to download it with just a click on the "download" button. (Make sure that NetBeans knows where your svn command line tool is though, by going to the menu Versioning > Subversion > Checkout...)

If you want to try dropping other projects onto the button, go to DoapSpace; they have put together a large collection of doap files for all the projects on SourceForge, Freshmeat and PyPI.

As I mentioned, this is really only version 0.1 of the doap integration of NetBeans. Clearly one could do a lot more, such as:

  • Having it produce Doap for a project automatically
  • Tying it into NetBeans's Project panel
  • describing the relationships between a project and the others it depends on
  • Linking bug reports to information gleaned from the doap:bugdatabase relation
  • Perhaps see if one can set things up so that one can immediately find the javadoc online for a doap project one has information about
  • find a way to view source on a jar, by relating jars to source code repositories... (more difficult this one)
  • and a lot more...

Now you may wonder: how is one going to know that there is a doap link on some project's source page? Searching for the doap link seems a lot of work, right? Well, to get an idea of how things will integrate you can install the Firefox Semantic Radar plugin and go to the So(m)mer project again. You will then see displayed at the bottom of your browser an icon of square smiley faces, as shown on the following screenshot:

semantic radar icon in Firefox

I should probably add this icon to the Doap button come to think of it...
The Doap button is in the So(m)mer repository, which is all published under the very generous BSD licence, so you are welcome to help out and add your own features... I may have to work on a few other things next, so I won't be getting in your way :-)

Wednesday Oct 03, 2007

Turtle support for NetBeans 6

Yesterday I added NTriples support for NetBeans. Today it was the turn of Turtle, a notation for RDF that takes human writers into account, and that is carefully being looked after by Dave Beckett, who now works at Yahoo! on some project which seems to be leading to employment opportunities. Of course, making things simple for humans makes things more complicated for the computer. But not so complicated that I did not get most of it done in one day.

Turtle makes things more readable because it allows one to

  • declare namespaces, so as not to have to constantly write out the URLs in full
  • declare a base url
  • use relative urls
  • some punctuation shorthands:
    • use "," when you have sentences that have the same subject and predicate but different objects
    • use ";" when you have sentences that have the same subject but different predicates and objects
  • [] for anonymous nodes (nodes you can't be bothered to give a URL to). You can place predicate object statements into the brackets, meaning that their subject is the anonymous node.
  • ( a b c ) shorthand for lists
Here for example is a section of my foaf file in Turtle:
@prefix foaf: <>.
@prefix : <> .

:me    a foaf:Person;
       foaf:depiction <>;
       foaf:openid <> ;
       foaf:gender "male";
       foaf:birthday "07-29";
       foaf:title "Mr";
       foaf:family_name "Story";
       foaf:givenname "Henry";
       foaf:name "Henry J. Story";
       foaf:homepage <>;
       foaf:schoolHomepage <>;
       foaf:mbox <>;
       foaf:nick "bblfish".
This is clearly much easier to read and to write than NTriples, but it hides somewhat the fact that everything is named by a URL.

There are again two main sections to the NetBeans Schliemann file (view the current version)

TOKEN:space:( [" " "\t" "\n" "\r"]+ ) #unicodify
TOKEN:comment:("#" [^ "\n" "\r"]* ["\n" "\r"]+ )
TOKEN:bnode:( "_:" ["A"-"Z" "a"-"z" "0"-"9"]+ )
TOKEN:uriref:( "<" [^ "<" ">" " " "\t"]* ">" ) #unicodify
TOKEN:string:( "\"" [^ "\"" "\n" "\r"]* "\"" )
TOKEN:qname:(["A"-"Z" "a"-"z" "0"-"9"]* ":" ["A"-"Z" "a"-"z" "0"-"9" "_"]+)
TOKEN:longString:("\"\"\"" .* "\"\"\"" )
TOKEN:punct:(";" | "," | "." | "^^" )
TOKEN:integer:([ "+" "-"]? ["0"-"9"]+)
TOKEN:decimal:(["+" "-"]? ((["0"-"9"]+ "." ["0"-"9"]*) | ( "." ["0"-"9"]+) )) # I leave out the decimals that can't be distinguished from integers
TOKEN:exponent:(["e" "E"]["+" "-"]?["0"-"9"]+)
TOKEN:exists:("[" | "]")
TOKEN:list:( "(" | ")" )
TOKEN:prefix:("@base" | "@prefix")
TOKEN:prefixName:(["A"-"Z" "a"-"z"]["A"-"Z" "_" "a"-"z"]* )
TOKEN:shortrels:("a" | "=")
TOKEN:lang:("@" ["A"-"Z" "a"-"z"]["A"-"Z" "a"-"z"]) #wrong but simpler

I have not yet filled in all the unicode special cases as I wanted to first make sure the main pieces would be working.

Then there is the grammar that goes with it.

S = ( Statement )*;
Statement = ( Directive "."  ) | ( Triples  "."  ) ;
Directive = PrefixID | Base ;
PrefixID = "@prefix"  [ <prefixName> ] ":"  <uriref> ;
Base = "@base"  <uriref> ;
Triples = Subject  PredicateObjectList ;
#PredicateObjectList = ( Verb  ObjectList [";"] )+ ; #we have to force the ";" even though this is not necessary
PredicateObjectList = Verb  ObjectList More ; #Here it gets confused about whether the last ";" belongs here or below
More = (  ";"  Verb  ObjectList) *;
ObjectList = Object (  ","  Object )* ;
Verb = Predicate | "a" | "=";
Subject = Resource | Blank ;
Predicate = Resource ;
Object = Resource | Blank | Literal ;
Literal = ( QuotedString [ <lang> ] ) | DatatypeString | <integer> | Double | <decimal> | <boolean> ;
DatatypeString = QuotedString "^^" Resource ;
QuotedString = <string> | <longString> ;
Resource = <uriref> | <qname>;
Blank = <bnode> | <exists,"["><exists,"]"> | <exists,"["> PredicateObjectList <exists,"]"> | Collection ;
Double = <integer> <exponent> | <decimal> <exponent> ;
Collection = <list,"("> [ ItemList ]  <list,")"> ;
ItemList = Object (  Object )*;

The grammar is a little different from Dave's official Turtle spec. For one, I added the "=" sign as a tease. More importantly, I ignore all blank spaces and all comments. I tried not to, but the parser in NBS only looks ahead by one token I think, so the white space was confusing it a lot. Removing them does not seem to be problematic, but only time will tell. I did this with the two lines:

More problematic was that I could not get the optional ending of sentences with ";" to work. In Turtle one can have sentences like
:me foaf:knows      [ a foaf:Person;
                   foaf:name "Tim Boudreau";
                   foaf:weblog <>
                                ] .
Notice how the last line does not need a semicolon, as it is followed by a "]" which clearly closes the sentence too. But it is nice to be able to add the semicolon anyway, as it is one less thing for the programmer to worry about. However, I could not get that to work. Following the Turtle spec I tried:
Triples = Subject  PredicateObjectList ;
PredicateObjectList =  Verb  ObjectList More [";"] ;
More = (  ";"  Verb  ObjectList) *;

But this confuses the parser, which does not know if the final ";" is the one at the end of the PredicateObjectList line, or at the beginning of the More line... So for the moment I decided not to allow extra semicolons.

Again, since everything in RDF is built on URIs (and so also URLs), it is nice to add functionality so that one can click on links. Just as in yesterday's demo, I added a line to the nbs file:


And then I wrote a clearly more complicated class. It is more complicated because Turtle allows relative URLs, and the base can be redefined in different parts of the document. Hacking the AST libraries, I put something together that works well enough for me to be satisfied at having done a great day's work, and having spent a great time here in Prague.

Tuesday Oct 02, 2007

working on NTriples support in Netbeans 6

In the last couple of days I have been learning about the very powerful Schliemann project. The result of the project is that one can now very simply create language support for NetBeans, which is what has allowed the very quick recent growth of support for a huge number of languages such as Groovy, Ruby, Prolog, Erlang, and many many others...

It did not take more than a day for me to get the basics of NTriples support done. NTriples is the simplest serialization of RDF. It is extremely explicit: one line per statement.

<SubjectURL> <PredicateURL> <ObjectURL> .
This is RDF at its purest. It is not very humanly writable, but it is easiest to understand. If you download cwm you can transform any rdf/xml into this format. For example you can transform my foaf file into NTriples like this:
hjs@bblfish:0$ cwm --rdf --ntriples | head
<> <> <> .
<> <> <> . 
<> <> <> .
<> <> <> .

So I now have basic syntax highlighting and clickability: i.e. you can control-hover over a URL, then click, and it will fetch a representation of that resource and open it in a browser. There is a lot more that could be done: if the representation were RDF it would be nice if it could be translated into NTriples and opened inside of NetBeans itself...

All of the source code is available under a BSD licence on the so(m)mer project. Download it with subversion; the project is in the misc/Editor directory. Once opened, just right click on the project and choose "Install/Reload in the development IDE". For debugging it is better to do this in the Target Platform IDE. Then you can open any file with the ntriples extension and get a little syntax highlighting.

The main tokenizer and grammar code itself is in the Ntriples.nbs file. As you can see it is split into token definitions

TOKEN:space:( [" " "\t"]+ )
TOKEN:comment:( [" " "\t"]* "#" [^ "\n" "\r"]* ["\n" "\r"]+ )
TOKEN:bnode:( "_:" ["A"-"Z" "a"-"z" "0"-"9"]+ )
TOKEN:absoluteURI:( "<" [^ "<" ">" " " "\t"]+ ">" )
TOKEN:qliteral:( "\"" [^ "\"" "\n" "\r"]* "\"" )
TOKEN:eol:(["\n" "\r"]+)
and a simple grammar
S = (Triple | BlankLine )*;
BlankLine = Space <eol>;
Triple =   Space Subject <space> Predicate <space> Object Space "." Space <eol>;
Subject = UriRef | <bnode>;
Predicate = UriRef;
Object = UriRef | <bnode> | Literal ;
Literal =   <qliteral> [ <type,"^^"> UriRef ];
UriRef =  <absoluteURI>;
Space = (<space>)*;

As you can see, this is an incredibly simple grammar for the most powerful of all languages. You can express anything in NTriples, clearly and distinctly, or even fuzzily if you wish to. Of course it is not very practical for human editing as it is. But for machine consumption it is excellent, and it compresses very very well as you can imagine. :-) Making it more human friendly will be the topic of my next blogs.
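In fact the token definitions above translate almost directly into Java regular expressions. Here is a toy sketch (the class and method names are mine, and it ignores comments and language tags) that recognises a single NTriples statement line:

```java
import java.util.regex.Pattern;

public class NTriplesLine {
    // regular expressions mirroring the TOKEN definitions above
    static final String URI   = "<[^<> \t]+>";
    static final String BNODE = "_:[A-Za-z0-9]+";
    // a quoted literal, optionally followed by a ^^<datatype> marker
    static final String QLIT  = "\"[^\"\n\r]*\"(\\^\\^" + URI + ")?";

    static final Pattern TRIPLE = Pattern.compile(
            "\\s*(" + URI + "|" + BNODE + ")"                 // subject
            + "\\s+" + URI                                    // predicate
            + "\\s+(" + URI + "|" + BNODE + "|" + QLIT + ")"  // object
            + "\\s*\\.\\s*");                                 // closing "."

    static boolean isTriple(String line) {
        return TRIPLE.matcher(line).matches();
    }

    public static void main(String[] args) {
        System.out.println(isTriple(
                "<http://a.example/s> <http://a.example/p> \"hello\" ."));
        // prints true
    }
}
```

A real tokenizer keeps the tokens apart rather than matching the whole line at once, of course, but the one-statement-per-line design is what makes even this naive version workable.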

The token processing comes first, so one has to be careful there not to create ambiguity for the parser. The reference for Schliemann is currently the Schliemann NBS Language Description Wiki page.

One can then add color information (and one could easily do a lot better than this)
COLOR:bnode: {
    foreground_color: "blue";
}
COLOR:qliteral: {
    foreground_color: "green";
}
COLOR:absoluteUri: {
    foreground_color: "blue";
}
COLOR:type: {
    foreground_color: "red";
}
Then finally I added hyperlink functionality by adding one more line to the nbs file. I then wrote the very simple class with the hyperlink(Context context) method. This was easy to do by following Geertjan's excellent hints.
import java.net.MalformedURLException;
import java.net.URL;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.netbeans.api.languages.ASTPath;
import org.netbeans.api.languages.ASTToken;
import org.netbeans.api.languages.Context;
import org.netbeans.api.languages.SyntaxContext;
import org.openide.awt.HtmlBrowser;

public class HyperLink {

    public static Runnable hyperlink(Context context) {
        SyntaxContext scontext = (SyntaxContext) context;
        ASTPath path = scontext.getASTPath();
        ASTToken t = (ASTToken) path.getLeaf();
        // strip the angle brackets from the <url> token
        String url = t.getIdentifier();
        if (url.startsWith("<"))
            url = url.substring(1);
        if (url.endsWith(">"))
            url = url.substring(0, url.length() - 1);
        final String cleanUrl = url;
        return new Runnable() {

            public void run() {
                try {
                    // open the URL in the configured browser (the original body
                    // was lost here; this is the standard NetBeans call for it)
                    HtmlBrowser.URLDisplayer.getDefault().showURL(new URL(cleanUrl));
                } catch (MalformedURLException ex) {
                    Logger.getLogger(HyperLink.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        };
    }
}
You will find Geertjan's blog to be a great source of information on how to get going with Schliemann and NetBeans programming in general. One very useful thing to help with writing a grammar and a parser is, of course, a debugging tool. To get this you need to open the Tools > Plugins menu and get the Generic Languages Framework Studio plugin. Once it is installed you will find in the Windows > Others menu an AST and a Tokens submenu, which open two very helpful views to the left of your editor, as you can see in the main image. This apparently only works on the latest developer release of NetBeans at present, but should be distributed as standard with NetBeans beta 2.

I will next try to add support for Turtle and N3; then it would be nice to add a more intelligent way of displaying linked RDF files. I can think of quite a lot of different features here...

Monday Sep 10, 2007

The Church of NetBeans

If there is anyone else who is close to being as homeless as me at Sun, it is certainly Tim Boudreau, who is now on a World Tour in a truck he bought for $1000. As he is the ultimate NetBeans evangelist, he painted up his car with the NetBeans logo, and will evangelise to whomever wants to hear the word :-) Read up on his story on his blog.

Tim is also a creative guitar player and song composer so don't hesitate to ask him to play you a song.
This reminds me of Timbuk 3's song "Reverend Jack and his Roamin Cadillac Church" (iTunes).

Come hell or high water
A soul's got to find some release
Some find it in power
And some in heavenly peace
Some look to the preacher
As he speaks from his holy perch
Me, I back Rev. Jack & his Roamin Cadillac Church
So if you're stuck at the station
On the road to the Glory on High
If you need some inspiration
He's got more than your money can buy
If you're lookin for salvation
Well my friend it's the end of your search
Here comes Rev. Jack & his Roamin Cadillac Church
Ain't no use watchin the road, son
When you ride in his automobile
Cause we're all back seat drivers,
& there's nobody at the wheel
Now for the well-to-do doctor
There's a home & a summer retreat
And for the jet-settin banker
There's a place in the social elite
But for the poor & the hungry
All the lost souls left in the lurch
There's just Rev. Jack & his Roamin Cadillac Church

Monday Jul 02, 2007

refactoring xml

Refactoring is defined as "Improving a computer program by reorganising its internal structure without altering its external behaviour". This is incredibly useful in OO programming; it is part of what has led to the growth of IDEs such as NetBeans, IntelliJ and Eclipse, and is behind very powerful software development movements such as Agile and Extreme Programming. It is what helps every OO programmer get over the insidious writer's block: don't worry too much about the model or field names now, it will be easy to refactor those later!

If maintaining behavior is what defines refactoring of OO programs - change the code, but maintain the behavior - what would the equivalent be for XML? If XML is considered a syntax for declarative languages, then refactoring XML would be changing the XML whilst maintaining its meaning. So this brings us right to the question of meaning. Meaning in a procedural language is easy to define. It is closely related to behavior, and behavior is what programming languages do their best to specify very precisely. Java pushes this very far, creating very complex and detailed tests for every aspect of the language. Nothing can be called Java if it does not pass the JCP's compatibility test kit, if it does not act the way specified.
So again, what is the meaning of an XML document? XML does not define behavior. It does not even define an abstract semantics: how the symbols refer to the world. XML is purely specified at the syntactic level: how one can combine strings to form valid XML documents, or valid subsets of XML documents. If there is no general mapping of XML to one thing, then there is nothing that can be maintained to retain its meaning. There is nothing in general that can be said to be preserved by transforming one XML document into another.
So it is not really possible to define the meaning of an XML document in the abstract. One has to look at subsets of it, such as the Atom syndication format. These subsets are given more or less formal semantics; the Atom syndication format is given an English-readable one, for example. Other XML formats in the wild may have none at all, other than what an English reader will be able to deduce by looking at them. Now it is not always necessary to formally describe the semantics of a language for it to gain one. Natural languages, for example, do not have formal semantics; they evolved one. The problem with artificial languages that lack a formal semantics is that in order to reconstruct it one has to look at how they are used, and so one has to make very subtle distinctions between appropriate and inappropriate uses. This inevitably ends up being time consuming and controversial; nothing that is going to make it easy to build automatic refactoring tools.

This is where frameworks such as RDF come in very handy. The semantics of RDF are very well defined using model theory. This defines clearly what every element of an RDF document means, what it refers to. To refactor RDF is then simply to make any change that preserves the meaning of the document. If two RDF names refer to the same resource, then one can replace one name with the other: the meaning will remain the same, or at least the facts described by the one will be the same as those described by the other, which may be exactly what the person doing the refactoring wishes to preserve.
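That last point is mechanical enough to sketch in a few lines of Java. This is a toy model, not any RDF library's API (Triple, rename and the names below are all mine): treat a graph as a list of name triples, and rename one name to a co-referring one, say one declared equivalent via owl:sameAs:

```java
import java.util.List;

public class RdfRefactor {
    // a statement is just three names; a graph is a list of statements
    record Triple(String s, String p, String o) {}

    // meaning-preserving rename: replace every occurrence of one name
    // by another name known to refer to the same resource
    static List<Triple> rename(List<Triple> graph, String from, String to) {
        return graph.stream()
                .map(t -> new Triple(
                        t.s().equals(from) ? to : t.s(),
                        t.p().equals(from) ? to : t.p(),
                        t.o().equals(from) ? to : t.o()))
                .toList();
    }

    public static void main(String[] args) {
        List<Triple> g = List.of(new Triple(":tim", ":worksFor", ":sun"));
        // if :sun and :sunMicrosystems co-refer, the renamed graph
        // states exactly the same facts as the original
        System.out.println(rename(g, ":sun", ":sunMicrosystems"));
    }
}
```

The transformation is purely syntactic, yet the model-theoretic semantics guarantees it preserves meaning; that guarantee is precisely what a generic XML document cannot offer.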

In conclusion: to refactor a document is to change it at the syntactic level whilst preserving its meaning. One cannot refactor XML in general, and in particular instances it will be much easier to build refactoring tools for documents with clear semantics. XML documents that have clear RDF interpretations will be very very easy to refactor mechanically. So if you are ever asking yourself what XML format you want to use: think how useful it is to be able to refactor your Java programs. And consider that by using a format with clear semantics you will be able to make use of similar tools for your data.

Monday Mar 19, 2007

NetBeans Day, Paris

The Sun Tech Days conference has moved to Paris and is taking place under the Grande Arche de la Défense, the third triumphal arch, from which you can see, on a clear day, the second arch at Étoile, and in the far distance the first arch standing in front of the Louvre.

NetBeans is becoming more and more impressive. I just uploaded a number of photos on flickr. There you can see

  • Romain Guy (blog) elegantly bringing us up to date with Matisse and the ease of linking it to data sources.
  • Ludovic Champenois (blog) and Alexis Moussine-Pouchkine (blog) giving an overview of the latest J2EE and Web Services abilities of NetBeans.
  • Roman Strobl (blog) going into the NetBeans profiler, how to build on the NetBeans platform, and a lot more
  • Petr Suchomel (blog) showing off some really impressive elements of the mobility pack: most memorable of all a graphical programming environment to write cell phone applications, and Java powered vector graphics cell phones.
  • John Treacy (blog) directing all of this from the front bench.
  • An attentive crowd of developers come from all over France. (The other developers were at the Solaris track in another room)

Time to go to bed, for the next installment, tomorrow morning.

Friday Mar 02, 2007

Baetle: Bug And Enhancement Tracking LanguagE ( version 0.00001 )

So here is an absolute first draft version of a Bug ontology, just starting from information I can glean from bugs in the NetBeans repository. It really is not meant to be anything other than a place to start from, to see what the field looks like, and to try out a few ideas. There is a lot that needs to be changed. If possible other ontologies should be reused. But for the moment we just need to work out whether this can be of any use...

The idea is that this should end up being a solid Bug Ontology that could be used by bugzilla and other repositories to enable people to query for bugs across repositories. So if people want to join in and help out, I am currently developing it on the mailing list, just because I have access to that repository, and so that I don't need to create another space immediately.

Before sitting down and writing out the OWL it is just a lot easier to diagram these in UML. So that's what I have done here.

Here is a description of bug 18177 from the NetBeans bug tracker, using the above outline. It is in N3.

@prefix : <> .
@prefix foaf: <> .
@prefix sioc: <> .
@prefix awol: <> .

       a :Feature;
       :status "resolved";
       :resolution "fixed";
       :qa_contact [ foaf:mbox <> ];
       :priority "P1";
       :component [ :label "openide";
                    :version "3.6" ];
       :bugType "looks";
       :assigned_to <>;
       :reporter [ :nick "jtulach" ];
       :interested [ :mbox <> ];
       :interested [ :mbox <> ];
       :updated "2003-12-11T06:25:15"^^xsd:dateTime;
       :dependsOn <>;
       :blocks <>;
       :hasDuplicate <>;
       :description <>;
       :comment <>;
       :attachment <> .

<> a :Description;
     :content [ awol:body """We need a reasonable way how to organize looks so we can easily locate them for
given representation object in a way that makes it probable that correct look
will be found. We need a way for different modules to replace (or lower
priority) of some look and provide their own. We need use to be present with
available looks (which make some sence on that object) and being able to
pernamently change the order of looks available for given object.""";
                awol:type "text/plain" ];
     :author [ foaf:mbox <> ];
     :created "2001-11-29T08:17:00"^^xsd:dateTime .

I have hyperlinked a couple of the relations above, to show exactly how what we are doing is just describing the web (RDF stands for Resource Description Framework after all). Click on the links and you will get a description of what that thing is about. There is already one big improvement here over the xml format provided by the service (see the xml for bug 18177 for example): we don't have to send the content of an attachment around; we just refer to it by a URL.

The bugs have the #it anchor, to distinguish the bug from the page which describes it.

The URLs for the code should clearly be those of the http repository versions. Then we should be able to link the code to packages, which link up to libraries, which link up to products; and we can link any of those to bugs.

So what could one do with all that information available from a SPARQL endpoint?
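To give a flavour of it, here is the kind of query one might write against such an endpoint (the prefix URI is hypothetical, since the ontology has no settled namespace yet): list the resolved bugs, most recently updated first, wherever they happen to be tracked:

```
# hypothetical namespace for the draft ontology
PREFIX : <http://example.org/baetle#>

SELECT ?bug ?updated
WHERE {
    ?bug a :Feature ;
         :status "resolved" ;
         :updated ?updated .
}
ORDER BY DESC(?updated)
```

The same query would work unchanged across every repository publishing its bugs in this vocabulary, which is exactly the cross-repository querying the ontology is meant to enable.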

Thursday Mar 23, 2006

Google Video introduces the Semantic Web

A few months ago I put together a slide show to introduce the Semantic Web. In order to make the problem less abstract I try to present the Semantic Web through a very practical problem faced by software engineers, first presented by Fred Brooks in the 1970s in his very influential book The Mythical Man Month [1]. Simply put, adding more engineers to a project does not make it go faster. So how could the SemWeb affect software development in an Open Source world, where there are not only many more developers, but these are also distributed around the world with no central coordinating organisation? Having presented the problem, I then introduce RDF and Ontologies, how these mesh with the SPARQL query language, and then show how one could use these technologies to make distributed software development a lot more efficient.

Having given the presentation in November last year, I spent some time over Xmas putting together a video of it (in h.264 format). The result is not too bad for a first attempt at adding sound to a slide show, though it may at points be a little slow, I have to admit. It takes time to do this well, and I don't have time to improve it. So the video of my slide show presentation is a little long at 30 minutes, but it should be a good introduction for people with software engineering experience.

Then last week I thought it would be fun to put it online, and so I placed it on Google Video, where you can still find it. But you will notice that Google Video reduces the quality quite dramatically, so you will really need to have the pdf side by side if you wish to follow. But if you can view the latest mpeg4 format (H.264) then you will find this movie a lot clearer to watch [2].

  1. The great thing about making things public is that 10 minutes after I did this, it was pointed out to me that I had misnamed Brooks in the presentation. Ouch! It's easy to fix the slides, but fixing the video is going to be less pleasant :-(
  2. This movie is served with mime-type video/h264 but for some reason it does not open quicktime on OSX automatically when using Safari.


