The limitations of JSON

A thread on REST-discuss recently turned into a JSON vs XML fight. I had not thought too deeply about JSON before this, but now that I have, I thought I should summarize what I have learned.

JSON has a clear and simple syntax, described on json.org. As far as I could see there is no semantics associated with it directly, just as with XML. The syntax does make space for special tokens such as numbers, booleans, strings, etc., which of course one automatically presumes will be mapped to the equivalent types: i.e. things that one can add, or compare with boolean operators. Behind the scenes a semantics is of course clearly defined by the fact that it is meant to be evaluated by JavaScript. In this it differs from XML, which only assumes it will be parsed by XML-aware tools.

On the list there was quite a lot of confusion about syntax and semantics. The picture accompanying this post shows how logicians understand the distinction. Syntax starts by defining tokens and how they can be combined into well formed structures. Semantics defines how these tokens relate to things in the world, and so how one can evaluate, among other things, the truth of the well formed syntactic structure. In the picture we are using the NTriples syntax, which is very simple: a statement is three URIs, or two URIs and a string, followed by a full stop. URIs are universal names, so their role is to refer to things. In the case of the formula

<http://richard.cyganiak.de/foaf.rdf#cygri> <http://xmlns.com/foaf/0.1/knows> <http://www.anjeve.de/foaf.rdf#AnjaJentzsch> .
the first URI refers to Richard Cyganiak on the left in the picture, the second URI refers to a special knows relation defined at http://xmlns.com/foaf/0.1/, and depicted by the red arrow in the center of the picture, and the third URI refers to Anja Jentzsch, who is sitting on the right of the picture. You have to imagine the red arrow as being real - that makes things much easier to understand. So the sentence above is saying that the relation depicted is real. And it is: I took the photo this February during the Semantic Desktop workshop in Berlin.

I also noticed some confusion as to the semantics of XML. It seems that many people believe it is the same as the DOM or the Infoset. Those are in fact just objectivisations of the syntax. It would be like saying that the example above just consisted of three URIs followed by a dot. One could speak of which URI followed which one, which one was before the dot. And that would be it. One may even speak about the number of letters that appear in a URI. But that is very different from what that sentence is saying about the world, which is what really interests us in day to day life. I care that Richard knows Anja, not how many vowels appear in Richard's name.

At one point the debate between XML and JSON focused on which had the simplest syntax. I suppose XML, with its entity encoding and DTD definitions, is more complicated, but that is not really a clinching point. Because if syntactic simplicity were an overarching value, then NTriples and Lisp would have to be declared winners. NTriples is so simple I think one could use the well-known, very lightweight grep command-line tool to parse it. Try that with JSON! But that is of course not what is attractive about JSON to the people that use it, usually JavaScript developers. What is nice for them is that they can immediately turn the document into a JavaScript structure. They can do that because they assume the JSON document has the JavaScript semantics. [1]
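The grep claim can be made concrete. Here is a minimal sketch in JavaScript: a single regular expression covering only the simple URI-and-literal statements discussed here, not the full NTriples grammar (no blank nodes, datatypes or escape sequences).

```javascript
// One regular expression is enough to pull apart a simple NTriples
// statement: subject URI, predicate URI, then a URI or a quoted literal,
// followed by a full stop.
const tripleRe = /^<([^>]*)>\s+<([^>]*)>\s+(?:<([^>]*)>|"([^"]*)")\s*\.$/;

function parseTriple(line) {
  const m = tripleRe.exec(line.trim());
  if (!m) return null;
  return {
    subject: m[1],
    predicate: m[2],
    object: m[3] !== undefined ? { uri: m[3] } : { literal: m[4] },
  };
}

const t = parseTriple(
  '<http://richard.cyganiak.de/foaf.rdf#cygri> ' +
  '<http://xmlns.com/foaf/0.1/knows> ' +
  '<http://www.anjeve.de/foaf.rdf#AnjaJentzsch> .'
);
console.log(t.predicate); // "http://xmlns.com/foaf/0.1/knows"
```

The same expression works as a grep pattern: each line of an NTriples file either matches it or is a comment, which is exactly the line-oriented simplicity the argument relies on.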

But this is where JSON shows its greatest weakness. Yes, the little semantics JSON data structures have makes them easy to work with. One knows how to interpret an array, how to interpret a number and how to interpret a boolean. But this is very minimal semantics. It is very much pre-web semantics. It works as long as the client and the server, the publisher of the data and the consumer of the data, are closely tied together. Why so? Because there is no use of URIs, universal names, in JSON. JSON has a provincial semantics. Compare this to XML, which gives a place to the concept of a namespace, specified in terms of a URI. To make this clearer let me look at the JSON example from the Wikipedia page (as I found it today):

{
    "firstName": "John",
    "lastName": "Smith",
    "address": {
        "streetAddress": "21 2nd Street",
        "city": "New York",
        "state": "NY",
        "postalCode": 10021
    },
    "phoneNumbers": [
        "212 732-1234",
        "646 123-4567"
    ]
}

We know there is a map between something related to the string "firstName" and something related to the string "John". [2] But what exactly is this saying? That there is a mapping from the string "firstName" to the string "John"? And what is that to tell us? What if I find somewhere on the web another string, "prenom", written by a French person? How could I say that the "firstName" string refers to the same thing the "prenom" string refers to? This does not fall out nicely.
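A quick sketch of the problem. The French document and the mapping table below are invented for illustration; the point is precisely that nothing in JSON itself provides them, so every pair of applications has to agree on such a table out-of-band.

```javascript
// Two hypothetical documents describing the same kind of fact.  Nothing
// in JSON lets a consumer discover that "firstName" and "prenom" name
// the same relation; that knowledge lives in the code, not the data.
const english = JSON.parse('{ "firstName": "John" }');
const french  = JSON.parse('{ "prenom": "Jean" }');

// The only recourse is a hand-maintained mapping table, agreed on
// privately between producer and consumer (invented here):
const sameAs = { prenom: "firstName" };

function normalize(obj) {
  const out = {};
  for (const key of Object.keys(obj)) {
    out[sameAs[key] || key] = obj[key];
  }
  return out;
}

console.log(normalize(french)); // { firstName: "Jean" }
```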

The provincialism is similar to that which led the xmlrpc specification to forget to put time zones on their dates, among other things, as I pointed out in "The Limitations of the MetaWeblog API". To assume that sending dates around on the internet without specifying a time zone makes sense is to assume that everyone in the world lives in the same time zone as you.
The web allows us to connect things just by creating hyperlinks. So to tie the meaning of data to a particular script in a particular page is not to take on the full thrust of the web. It is a bit like the example above, which writes out phone numbers but forgets the country prefix. Is this data only going to get used by people in the US? What about the provincialism of using a number to represent a postal code? In the UK postal codes are written out mostly with letters. Now those two elements are just modelling mistakes. But if one is going to be serious about creating a data modelling language, then one should avoid making mistakes that are attributable to the idea that strings have universal meaning, as if the whole world spoke English, and as if English were not ambiguous. Yes, natural language can be disambiguated when one is aware of the exact location, time and context of the speaker. But on a web where everything should link up to everything else, that is not and cannot be the case.
That JSON is so much tied to a web page should not come as a surprise if one looks at its origin, as a serialisation of JavaScript objects. JavaScript is a scripting language designed to live inside a web page, with a few hooks to go outwards. It was certainly not designed as a universal data format.

Compare the above with the following Turtle subset of N3 which presumably expresses the same thing:

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix : <http://www.w3.org/2000/10/swap/pim/contact#> .

<http://eg.com/joe#p>
   a foaf:Person;
   foaf:firstName "John";
   foaf:family_name "Smith";
   :home [
         :address [
              :city "New York";
              :country "USA";
              :postalCode "10021";
              :street "21 2nd Street";
         ]
   ];
   foaf:phone <tel:+1-212-732-1234>, <tel:+1-646-123-4567>;
.

Now this may require a little learning curve - but frankly not that much - to understand. In fact to make it even simpler I have drawn out the relations specified above in the following graph:

(I have added some of the inferred types)

The RDF version has the following advantages:

  • you can find out what any of the terms mean by clicking on them (append the name to the prefix) and doing an HTTP GET
  • you can make statements of equality between relations and things, such as
    foaf:firstName = frenchfoaf:prenom .
  • you can infer things from the above, such as that
    <http://eg.com/joe#p> a foaf:Agent .
  • you can mix vocabularies from different namespaces as above, just as in Java you can mix classes developed by different organisations. There does not even seem to be the notion of a namespace in JSON, so how would you reuse the work of others?
  • you can split the data about something in pieces. So you can put your information about <http://eg.com/joe#p> at the "http://eg.com/joe" URL, in a RESTful way, and other people can talk about him by using that URL. I could for example add the following to my foaf file:
    <http://bblfish.net/people/henry/card#me> foaf:knows <http://eg.com/joe#p> .
    You can't do that in a standard way in JSON because it does not have a URI as a base type (weird for a language that wants to be a web language, to miss the core element of the web, and yet put so much energy into all these other features such as booleans and numbers!)

Now that does not mean JSON can't be made to work this way, as the SPARQL JSON result set serialisation shows. But it does not do the right thing by default. A bit like languages before Java that did not have Unicode support by default. The few who were aware of the problems would do the right thing; all the rest would just discover the reality of their mistakes by painful experience.
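As a rough sketch of what that serialisation looks like (the field names below follow my reading of the "Serializing SPARQL Query Results in JSON" document; treat the details as illustrative), each value is tagged with its type, so URIs become distinguishable from plain strings:

```javascript
// A sketch of a SPARQL SELECT result serialized as JSON: URIs carry an
// explicit "type" tag, so a client can tell a name from a string.
const result = {
  head: { vars: ["person"] },
  results: {
    bindings: [
      { person: { type: "uri", value: "http://eg.com/joe#p" } },
      { person: { type: "literal", value: "John Smith" } }
    ]
  }
};

// Because each value carries its type, a client can treat URIs as links:
const uris = result.results.bindings
  .map(b => b.person)
  .filter(v => v.type === "uri")
  .map(v => v.value);

console.log(uris); // [ "http://eg.com/joe#p" ]
```

This is the "made to work this way" case: the discipline of tagging URIs is imposed by the spec, not by JSON itself.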

This does not take away from the major advantage JSON has of being much easier to integrate with JavaScript, which is a real benefit to web developers. It should be possible to get the same effect with a few good libraries. The Tabulator project provides a JavaScript library to parse RDF, but it would probably require something like a so(m)mer mapping from relations to JavaScript objects for it to be as transparent to those developers as JSON is.

Notes

[1]
Now procedural languages such as JavaScript don't have the same notion of semantics as the one I spoke of previously. The notion of semantics defined there is a procedural one: namely two documents can be said to have the same semantics if they behave the same way.
[2]
The spec says that an "object is an unordered set of name-value pairs", which would mean that a person could have another "firstName", I presume. But I have also heard people speak of those as hash maps, which only allow unique keys. Not sure which is the correct interpretation...
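In practice most JavaScript parsers behave like hash maps here: when a name is repeated, the last value silently wins. A sketch, using the modern JSON.parse (at the time eval was the common route):

```javascript
// Duplicate names are legal per the "unordered set of name-value pairs"
// reading, but JavaScript's parser keeps only the last one.
const person = JSON.parse('{ "firstName": "John", "firstName": "Johann" }');
console.log(person.firstName); // "Johann"
```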


Comments:

Hi Henry,

You continue to provide a very useful service by explaining semantic Web concepts in ways that mere mortals (me!) can understand.

Thanks, and please continue to take the time to provide these helpful pieces.

Mike

Posted by Mike Bergman on July 13, 2007 at 01:42 PM CEST #

I think JSON is perfect for small amounts of data with simple structure, used only as an internal implementation detail of an AJAX application. It is not appropriate as a stable interchange format between multiple applications.

Posted by Nico on July 13, 2007 at 04:54 PM CEST #

I find it ironic that you berate JSON for its lack of semantics, and as an exemplar of semantics you provide an RDF document encoded using FOAF, specifically foaf:Person. A foaf:Person is defined as "Something is a foaf:Person if it is a person". And this is the flagship RDF ontology! Give me JSON any day. At least it solves a syntax problem and has no pretensions to semantics. RDF is a massive cock-up with respect to syntax and semantics.

Posted by guest on July 14, 2007 at 01:34 AM CEST #

Nico: I agree JSON is very good for simple client-server communication, which is the way it is used by AJAX applications. Like everything it has its limits, and I was interested in exposing the limitations here. Let me repeat that JSON has many good sides: the SPARQL serialization format I mention at the end of the article is a very good example of how this Web 2.0 language can come in very useful as a bridge to Web 3.0. It is also true that one can do a lot of very good stuff in client-server mode, as gmail, flickr and other such apps are constantly demonstrating.

anonymous coward: The english definition of foaf:Person as being the set of what we would consider a person is a perfectly good explanation of the concept. Concepts can be vague, as long as they are useful. And I think we all have very good intuitions as to what a person is. What the semantic web allows us to do is to give this concept a URL, so that if later someone wants to come up with another concept of a person, perhaps a more ethically oriented one, or one that would allow them to say that companies are persons, then they can do that too, by giving their concept a URL. It is then possible to distinguish the two uses, and relate them in various ways. The proof that it is useful is in the pudding. It is widely used and well understood.
I don't agree that JSON has no pretensions to semantics. In fact my argument is that it has the semantics imparted to it by JavaScript, which is exactly what makes it easy to read and understand. These semantics are good for client-server communication, when the person writing the client is mostly also the person writing the server, or if not, when the code in the client is very closely tied to one particular server. To break out of this one needs something more general, which is what RDF provides. (And by the way, RDF is not RDF/XML. RDF is a syntax-free semantics, bizarrely enough. Learn N3, it is much easier to understand.)

Posted by guest on July 14, 2007 at 03:05 PM CEST #

By the way, the picture illustrating the syntax/semantics distinction on this blog is one I took in Berlin mid April this year, at a Semantic Desktop meeting. It is available on my flickr account, with more Web 2.0 annotations, using a service that presumably uses a lot of JSON. Now here is one very interesting example of what one could do with RDF that one cannot do with JSON. To annotate flickr, one could just drag and drop one's foaf URL onto the picture; flickr could fetch the information at that URL, and display some key information on the screen. As that information gets updated, so in due course would the annotations on the picture....

Posted by Henry Story on July 14, 2007 at 03:22 PM CEST #

Hey look, that's me, with the subject lasso around my neck!

I think the great thing about JSON is that it's very close to the data structures we use in our programs all the time -- lists, maps, scalars. Getting some data from my software out into JSON is trivial, and getting it back into a data structure is just as easy. This isn't about Javascript really, it's about the impedance mismatch. Going from objects to JSON is simpler than going from objects to XML or RDF, no matter what programming language you use. JSON is great for client/server style data exchange, I use it extensively, and it has pretty much replaced XML for me.

So, what are the strengths of RDF over JSON? I don't think the explicit semantics and inferencing in RDF are such a big deal; they are just icing on the cake.

RDF's strength is the hyperlink. You can build a Web of Data with RDF. You can't do that with JSON, because JSON doesn't have hyperlinks. There's no way to say, “and more data about that thing is found over there”. You have to ship everything in one JSON file, or specify out-of-band how a client can find other pieces of data. Imagine HTML without hyperlinks (but with the ability to embed Javascript that updates document.location in response to user actions). That's what JSON looks like to RDF people.

But if you don't need Web of Data features, e.g. when you're building client/server-style apps, then JSON is quite fine.

Posted by Richard Cyganiak on July 15, 2007 at 07:28 AM CEST #

Richard, I don't agree that you can't do hyperlinks in JSON. I don't see this as any different than plain-old-XML, where you can embed a URI as an XML attribute (or even as text); in JSON, you can embed a URI as a string, anywhere a string could be placed. It's all about who can make sense of that string. Who knows it's there, that it's a link, and if it's relative, what it's relative to, etc. Using JSON in this way implies that "the other side" (client or server) has some understanding of the data, that it's not completely self-descriptive. I'm fine with that for now.

Posted by Patrick Mueller on July 15, 2007 at 12:07 PM CEST #

Patrick:
JSON is different from XML. XML comes with namespaces, so you can tell from the namespace of an XML element how it is to be interpreted. Namespaces are URIs, so you have a unique identifier for that. The xhtml namespace is http://www.w3.org/1999/xhtml for example. When you then find the xhtml:a anchor element you know to interpret its href attribute as a URL.
In JSON you don't have anything like this. You would only be able to determine the meaning of a particular JSON document by using mime types, and if you did that you would have a non-extensible format, like HTML, since there would be no way to distinguish between different extensions for the meaning of the JSON elements. So when you say:
Using JSON in this way implies that "the other side" (client or server) has some understanding of the data, that it's not completely self descriptive. I'm fine with that for now.
you are right. The point is that either
  • "the other side" has to be a particular server, and the client has to know the particular intention of that server for this to work. And we all agree that, at least in that case, JSON makes more sense than XML, because it is a lot easier to write and parse, and in those circumstances it has a clear semantics: the one imparted on it by the server and the JavaScript interpreters of JSON.
  • or the JSON has to be served up with a special mime type, and then it is no longer extensible (a bit like RSS 2.0) without going through a centralized agreement process
In both cases you have a format that is extensible only through a centralized process. With RDF you can decentralize meaning. You can mix and match different vocabularies written out around the web, without risking name clashes and whilst keeping very clear semantic rules, so much so that an agent can work confidently even when it only understands part of the words in the document. The Semantic Web also allows robots to automatically discover the meaning of unknown "words", by fetching documents describing them (also known as ontologies) at the location of the word itself (since words are URIs).
These are some quite stupendous advantages, derived from some very, very simple elements. It is definitely worth learning, because once you know it, some very new horizons will start opening up. If you do learn RDF, go for N3; section A is really all you need to get going. cwm will help you convert most rdf/xml documents into this easier-to-read format.
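To make the "roll your own" point concrete, here is a sketch of what a home-grown namespace convention in JSON might look like. The "@prefix" key and the expandKey helper are invented for illustration; they are not part of JSON, and no generic JSON tool would interpret them without custom code on both sides.

```javascript
// An invented, ad-hoc namespace convention layered on top of JSON.
const doc = {
  "@prefix": { foaf: "http://xmlns.com/foaf/0.1/" },
  "foaf:firstName": "John"
};

// Every consumer must re-implement the expansion by hand; nothing in
// JSON itself, or in any standard JSON parser, does this for you.
function expandKey(doc, key) {
  const parts = key.split(":");
  const ns = doc["@prefix"] && doc["@prefix"][parts[0]];
  return ns && parts.length === 2 ? ns + parts[1] : key;
}

console.log(expandKey(doc, "foaf:firstName"));
// "http://xmlns.com/foaf/0.1/firstName"
```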

Posted by Henry Story on July 15, 2007 at 10:56 PM CEST #

Carmen has a short post "JSON: best RDF format for real world usage", that highlights two advantages of JSON over RDF:
  • the speed of using eval() on JSON over using a JavaScript RDF parser
  • the same-origin limitation of browsers: JavaScript can only fetch pages that come from the same site, a hacky security limitation that is really showing its age.

Both of those will disappear in due course. It would of course be great if browsers built in better RDF support. But don't forget, the browser is not the only user agent possible. As every application becomes web enabled, publishing your data in RDF will open up the possibility of many other types of applications.

Posted by Henry Story on July 15, 2007 at 11:34 PM CEST #

Patrick: No, JSON can't do hyperlinks. Of course you can send around URIs as JSON strings, but that doesn't make it a hyperlink, any more than embedding a URI into a plain text document makes it a hyperlink. JSON relies on your custom application logic to turn the URI into a hyperlink. As I said, that isn't a problem for client/server-style applications, but it's a showstopper for web-of-data applications. You need formats with built-in hyperlinks for that -- like RDF, XLink, microformats, OPML -- though of course they all have disadvantages compared to JSON in other areas.

Posted by Richard Cyganiak on July 16, 2007 at 01:55 AM CEST #

Henry, I strongly disagree. First, you correctly mention that JSON is a syntax, but then you complain that this "syntax" is not "semantic" enough. Some commentators compare RDF (a model) with JSON (a syntax). This does not make any sense! Your main argument "Because there is no use of URIs" is wrong, because you can easily use URIs in JSON, there is absolutely no problem with that. Also a lack of namespaces is totally misleading -- one can define namespaces semantics and store it using JSON syntax (same way it is done in XML). You take some specific JSON example and prove it wrong, instead of showing that it is possible to encode any semantics in JSON. I agree that JSON is not optimal for describing semantics, because JSON defines only binary relations e.g. "a" : "b", and all higher-level relations must be composed from these binary relations (which is possible). But this is not a "limitation" of a JSON syntax. On the same note we could argue that N3 (Ntriples, Turtle) is limited too, because it uses triples, while real world demands still higher-level relations! Which are, of course, constructible from triples. There's only one true syntax -- LISP. There's nothing simpler and there's nothing more powerful.

Posted by Alexander Pohoyda on July 16, 2007 at 03:04 AM CEST #

Also, in your "good" N3 example you have :city "New York" and foaf:firstName "John". So both "New York" and "John" are strings instead of objects referenced by URIs. This is as wrong as using "firstName" in the JSON example. So N3 can also be used in a wrong way.

Posted by Alexander Pohoyda on July 16, 2007 at 04:18 AM CEST #

Alexander said: “one can define namespaces semantics and store it using JSON syntax (same way it is done in XML)”

That's misleading. With XML, the namespace semantics are standardized and universally supported by the tools. With JSON, you have to roll your own, and everybody who wants to use your namespace mechanism has to code from scratch. That makes all the difference.

“Some commentators compare RDF (a model) with JSON (a syntax). This does not make any sense!”

JSON and RDF are both formats for data exchange. Why shouldn't we compare them?

Posted by Richard Cyganiak on July 16, 2007 at 04:29 AM CEST #

Alexander Pohoyda: you say
First, you correctly mention that JSON is a syntax, but then you complain that this "syntax" is not "semantic" enough.
My claim with respect to JSON syntax is that it gains its semantics from the way it is interpreted by what is usually a JavaScript process. This makes JSON easy to understand for people developing in JavaScript. But it also ties the semantics to a particular process, which results in the client-server lock-in. The advantages of being easy to parse and develop for with JavaScript are serious advantages, which explain its success. But they also come with a limitation: JSON cannot be the basis for a web of data.
Some commentators compare RDF (a model) with JSON (a syntax). This does not make any sense! Your main argument "Because there is no use of URIs" is wrong, because you can easily use URIs in JSON, there is absolutely no problem with that. Also a lack of namespaces is totally misleading -- one can define namespaces semantics and store it using JSON syntax (same way it is done in XML).
Well, I do point out that using URIs in JSON is possible, just like writing C programs that use unicode strings. It is just not something that comes naturally. In fact I give as an example the "Serializing SPARQL Query Results with JSON" spec as one that does use URIs and has a clear mapping to an XML format. It is worth noticing the difference between the XML and the JSON versions, though. Notice again that in order to make sense of the resulting JSON document described in the above spec, the client has to know that it is receiving a SPARQL result back in JSON format. So the message is still very closely tied to the server, because there is no standard way in JSON to give the words "head", "vars", "results", "binding", "type", "value", etc... a unique name. All these strings could appear in many different contexts with different meanings, even though all of them may be perfectly legitimate JSON documents. Compare that to the XML documents it is derived from. There all the elements and attributes are relative to the http://www.w3.org/2005/sparql-results# namespace. XML provides a standard way to namespace elements and attributes; JSON does not. JSON probably could, but you'd think that a syntax that defines booleans and numbers as primary types would have done something special for URIs, if it had really wanted to distinguish itself as a web language! And for a language so closely derived from Java, why no package naming, which would have been a step in the right direction?
I agree that JSON is not optimal for describing semantics, because JSON defines only binary relations e.g. "a" : "b", and all higher-level relations must be composed from these binary relations (which is possible). But this is not a "limitation" of a JSON syntax. On the same note we could argue that N3 (Ntriples, Turtle) is limited too, because it uses triples, while real world demands still higher-level relations! Which are, of course, constructible from triples. There's only one true syntax -- LISP. There's nothing simpler and there's nothing more powerful
I would never criticise a language for being able to describe only relational data, now that I understand the semantic web. Every data structure can be built out of relations: lists, trees, tables, graphs. Lisp is not the simplest language: lists can be specified using two relations, :first and :rest, and the :null final element. The simplest data structure is the relation:
subject ----relation---->object
or in other terminology
subject ----property---->value
.

Posted by Henry Story on July 16, 2007 at 04:48 AM CEST #

Hi Henry: Worth pointing out that URIs are *not* "universal names". URIs are merely "uniform resource identifiers", i.e. they are resource identifiers (or "names" if you will) that have a uniform syntactic structure and semantic payload. (And here I am talking about the semantics of the identifier components, e.g. scheme, name authority, path, querystring, etc. - and not about the ultimate referent of the URI.) There is nothing universal about URIs. But, having said that, they are an extremely potent device and possibly the most important invention of the three pillars of the web architecture - identification, interaction, representation. Tony

Posted by Tony Hammond on July 16, 2007 at 05:09 AM CEST #

Alexander said: “one can define namespaces semantics and store it using JSON syntax (same way it is done in XML)”. Richard said: "That's misleading. With XML, the namespace semantics are standardized and universally supported by the tools. With JSON, you have to roll your own, and everybody who wants to use your namespace mechanism has to code from scratch. That makes all the difference." So what's the problem with using the same standardized namespace semantics and writing it down using JSON syntax? Do you argue that this is impossible? There's nothing in XML syntax special for namespaces. How you interpret some "reserved" attributes and prefixes is pure semantics. Even if some XML parser has no idea of namespaces, it will parse a well-formed XML document. Alexander said: “Some commentators compare RDF (a model) with JSON (a syntax). This does not make any sense!” Richard said: "JSON and RDF are both formats for data exchange. Why shouldn't we compare them?" RDF is a model! Model! Model! Resource-property model! Subject-Predicate-Object model. JSON is a syntax! This has been debated to death so many times!

Posted by Alexander Pohoyda on July 16, 2007 at 05:10 AM CEST #

subject ----relation---->object
Right, and this can be written in JSON syntax as:
"subject" : { "relation" : "object" }
so there's no semantics which couldn't be written using JSON syntax. That's my primary point.

Posted by Alexander Pohoyda on July 16, 2007 at 05:21 AM CEST #

Alexander Pohoyda: you write
Also, in your "good" N3 example you have:
[] :city "New York"
and
[] foaf:firstName "John"
. So both "New York" and "John" are strings instead of objects referenced by URIs. This is as wrong as using "firstName" in the JSON example. So N3 can also be used is a wrong way.
Heh! An interesting point.
One answer is that strings in RDF can be thought of as URIs that refer to themselves. URIs usually refer to things other than themselves, so it is better not to say that, but rather to say that literals uniquely identify themselves.

Semantically, what you have to imagine is an arrow going from the person John (wherever he happens to be) to the string "John". Imagine further that the arrow has a little tag, "foaf:firstName", attached to it, which you can click on to find out what it means. "John" does not refer to John (the real person) directly, but indirectly, via the unique foaf:firstName relationship. Note that many other people can have a similar relationship to the string "John", which is why on the web you cannot uniquely identify a person by their first name. Imagine that you are looking at the string "John" and you look at all the foaf:firstName relations pointing to it. You will find a lot of other people at the other end of those arrows. It is also true that things other than people can have relationships other than foaf:firstName to the string "John". For example the class of Christian names has the :contains relationship to that string. The nameplate John wears at his JavaScript conference has the xxx:inscribedOn relation to it too, etc... So again, looking at this universal string "John", you will find a lot of other types of arrows pointing to it too.

Another point you could be making is that foaf:firstName is an (unstable) foaf term which relates a person to his name. Since people can have different names in different languages, it is true that it may be better to write
[] foaf:firstName "John"@en
.

The same arguments hold for the contact:city relation. But here your point that it should perhaps be a URL is even more powerful. The contact:city relation does not say what its range should be. Since cities are individuals, I could very well understand someone who argued that it should be an ObjectProperty. contact:city has clearly had less work done on it than the foaf vocabulary. A quick:

 cwm http://www.w3.org/2000/10/swap/pim/contact | less
will show that it is very underspecified. Underspecified is not a problem: things can be specified later... But yes, the code to parse this may have to be very open about the type of thing it relates to... Mind you, there are really only two choices I think: an object defined by a URI, or a string. Given those two options it is quite easy to see how to interpret them.

Posted by Henry Story on July 16, 2007 at 05:30 AM CEST #

I would rather propose this: [] foaf:firstName [ rdfs:label "John"@en ]. Or define name:john rdfs:label "John", "Johannes"@de, "Иван"@ru. and use it like this: [] foaf:firstName name:john. There may be many cities with the name "New York", so using it as a string is not wise. Same for the country, the street and so on. We should use URIs almost everywhere. Or, look at the dc:creator property. The reference says: Typically, the name of a Creator should be used to indicate the entity. But in my eyes, [] dc:creator "Alexander Pohoyda" is absolutely wrong! A string cannot be a creator! That string may be my name, but I am not a string. Saying that "Alexander Pohoyda" rdf:type foaf:Person is also wrong.

Posted by Alexander Pohoyda on July 16, 2007 at 06:22 AM CEST #

Alexander Pohoyda, wrote:
I would rather propose this: [] foaf:firstName [ rdfs:label "John"@en ].
Names are strings, and not things, so I think when a relation is clearly to a literal, it is perfectly OK to have relations from things to strings.
Or define name:john rdfs:label "John", "Johannes"@de, "Иван"@ru. and use it like this: [] foaf:firstName name:john.
This indirection seems a little unnecessary. You might as well skip the name:john middleman and write it like this:
[] foaf:firstName "John", "Johannes"@de, "Иван"@ru .
On the other hand I understand your point better here:
There may be many cities with the name "New York", so using it as a string is not wise. Same about the country, the street and so on. We should use URIs almost everywhere.
The nice thing about using URIs as relations is that it is easy to create another relation that better suits your needs. And better vocabularies are certainly being developed... I would have found it easier to understand had the contact:city relation had more information about it or had it been called contact:cityName.
Or, look at the dc:creator property. The reference says: Typically, the name of a Creator should be used to indicate the entity. But in my eyes, [] dc:creator "Alexander Pohoyda" is absolutely wrong! A string cannot be a creator! That string may be my name, but I am not a string. Saying that "Alexander Pohoyda" rdf:type foaf:Person is also wrong.
I kind of agree with you here. Dublin Core is a very old ontology. I think it was one of the first. dc:creator is not very precisely defined. It seems to me that its range covers both strings and objects. It would be wise to create a dcowl:creator subproperty of dc:creator with the restriction that its range is foaf:Agent.

Posted by Henry Story on July 16, 2007 at 07:13 AM CEST #

Names are strings, and not things
I disagree. FirstName is a concept. It can be spoken, written, may be reasoned about. Names may have different forms and may even have a history. In Germany, for example, a person may have up to 7 first names. So it's clearly not a simple string.

Posted by Alexander Pohoyda on July 16, 2007 at 07:25 AM CEST #

Alexander Pohoyda said:
FirstName is a concept. It can be spoken, written, may be reasoned about. Names may have different forms and may even have a history. In Germany, for example, a person may have up to 7 first names. So it's clearly not a simple string.
Why is this a problem? Why can't a person have a number of foaf:firstName relations to a number of different strings, as I showed above:
<http://john.name/me.rdf#j> 
            a foaf:Person;
            foaf:firstName "John", "Johannes"@de, "Иван"@ru .
This is perfectly valid rdf.
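The shape of these arrows is easy to model even without an RDF library. Here is a minimal sketch in plain Python, treating each triple as a tuple and each literal as a (lexical form, language tag) pair; the subject URI is the illustrative one from the Turtle above:

```python
# A triple as (subject, predicate, object), where a literal object is a
# (lexical form, language tag) pair. URIs and names are illustrative only.
triples = [
    ("http://john.name/me.rdf#j", "foaf:firstName", ("John", None)),
    ("http://john.name/me.rdf#j", "foaf:firstName", ("Johannes", "de")),
    ("http://john.name/me.rdf#j", "foaf:firstName", ("Иван", "ru")),
]

# Collect all foaf:firstName arrows leaving the same subject:
names = [obj for s, p, obj in triples
         if s == "http://john.name/me.rdf#j" and p == "foaf:firstName"]
print(names)  # three distinct literals, one per language tag
```

Nothing in the data model stops one subject from having several arrows of the same kind, which is exactly the point.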

Now let me also agree with you that one could indeed create an object that tracks the name over time, such as

@prefix name: <http://names.eg/ns/names#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<http://names.com/nm/john> 
          a name:Name;
          rdfs:label "John"@en;
          name:firstUsed "800-10-10"^^xsd:date .
This may be very useful in some areas of research. The foaf ontology does not specify such a precise ontology because its use cases are more mundane: connecting people in an easy-to-understand way. But the two are not exclusive. The name vocabulary could even provide some extra relations to relate foaf:Person-s to name:Name-s:
@prefix name: <http://names.eg/ns/names#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

name:firstName a owl:ObjectProperty;
           rdfs:label "first name"@en;
           rdfs:domain foaf:Person;
           rdfs:range name:Name .
Now one could write your foaf file out using both ways of doing things:
@prefix name: <http://names.eg/ns/names#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://john.name/me.rdf#j> 
            a foaf:Person;
            foaf:firstName "John", "Johannes"@de, "Иван"@ru ;
            name:firstName <http://names.com/nm/john> .
And one could probably write rules in the form of SPARQL queries to infer the one relation from the other. Perhaps something like:
PREFIX name: <http://names.eg/ns/names#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/> 
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
CONSTRUCT {
  ?p foaf:name ?name . 
} WHERE {
   ?p name:firstName ?realName .
   ?realName rdfs:label ?name .
}
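The effect of such a rule can be sketched without a SPARQL engine. This hypothetical snippet applies the same pattern over a toy set of triples, with URIs abbreviated to keep it readable:

```python
# Triples as (subject, predicate, object) tuples; terms abbreviated for brevity.
data = {
    ("#j", "name:firstName", "nm:john"),
    ("nm:john", "rdfs:label", "John"),
}

# For every ?p name:firstName ?realName . ?realName rdfs:label ?name,
# construct ?p foaf:name ?name -- a hand-rolled version of the CONSTRUCT above.
inferred = {
    (p, "foaf:name", label)
    for (p, rel, real) in data if rel == "name:firstName"
    for (s, rel2, label) in data if s == real and rel2 == "rdfs:label"
}
print(inferred)  # {('#j', 'foaf:name', 'John')}
```

A real SPARQL engine does the same join, just over graphs rather than Python sets.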
Alexander, I think this discussion has really helped reveal the power of the semantic web. Just try to imagine the endless fights that people discussing such an issue would have had if they were working with a language that did not allow them to make fine distinctions of meaning, such as that between name:firstName and foaf:name, or to speak about the implications of statements in a clear and precise way. I have seen this so often. Everyone would be out there discussing the "real" meaning of the string "name". Each side would have understood something different, there would have been no way to clearly distinguish the meanings, and so the conversation would have lasted forever as people kept misunderstanding each other. There would be huge fights on newsgroups and in the blogosphere. All quite entertaining for a while, especially during a bust no doubt, when time is cheap and people need to let off steam. But certainly a huge waste of time. All these relations exist! You just have to distinguish them. And the best way to do that is to use URIs.

Posted by Henry Story on July 16, 2007 at 08:37 AM CEST #

The picture accompanying this post shows how logicians understand the distinction. Syntax starts by defining tokens and how they can be combined into well formed structures. Semantics defines how these tokens relate to things in the world, and so how one can evaluate the truth, among other things of the well formed syntactic structure.
What you are describing is in fact the philosophical / linguistic notion of semantics - the relations between symbols and the real world. Logic and logicians are concerned with drawing valid inferences from statements - something that can be done without recourse to the real world. That's why an inference process can inform you about the validity of a conclusion, but in general can NOT evaluate the truth of a statement (with respect to the real world). A logician can tell you how the truth conditions for complex sentences can be reduced to those of their constituents - but that's a bit different. When you hear a logician talk about the semantics of X, she's not talking about the relation between statements and the real world but about a characterization of the inferences that can be drawn.

Posted by Valentin on July 16, 2007 at 04:10 PM CEST #

Why is this a problem? Why can't a person have a number of foaf:firstName relations to a number of different strings, as I showed above:
So how are you going to differentiate whether "Johannes"@de and "Иван"@ru are translations of one name or distinct names given to one person? How would you define an order, possibly assigned to names? If you just use strings, you cannot assign semantics to those strings. Well, you can, but it will be ambiguous and thus not very useful. A solution is to use objects everywhere. Even a blank (unnamed) object is better than a string.

Posted by Alexander Pohoyda on July 17, 2007 at 03:51 AM CEST #

Alexander: You are right to point out that the name:Name class allows you to express certain types of things that you can't do with the foaf:firstName relation to Strings. Or at least not directly. But I disagree with you when you say:
Well, you can, but it will be ambiguous and thus not very useful. A solution is to use objects everywhere. Even a blank (unnamed) object is better than a string.

Which relation is useful cannot be decided in the abstract. It depends on what the person consuming the information is doing, or needs the information for. The foaf ontology is clearly very useful. I have not yet been in a position where I wondered if someone had two first names or if they were translations of each other. And I doubt that many people have yet been in that position. But who knows. If it turns out to be very useful, it will be easy to add new relations to foaf for those cases, or for people to add relations such as the name:firstName relation to the name:Name class.

Let natural selection do its work here too.

Posted by Henry Story on July 17, 2007 at 04:45 AM CEST #

Rereading the above commentaries, I came across Alexander's reply to Richard:
Richard said: "That's misleading. With XML, the namespace semantics are standardized and universally supported by the tools. With JSON, you have to roll your own, and everybody who wants to use your namespace mechanism has to code from scratch. That makes all the difference."

So what's the problem with using the same standardized namespace semantics and writing it down using JSON syntax? Do you argue that this is impossible? There's nothing in XML syntax special for namespaces. How you interpret some "reserved" attributes and prefixes is pure semantics. Even if some XML parser has no idea of namespaces, it will parse a well-formed XML document.

A very good point. XML does not come with namespaces built in by default. But it does come with a very well accepted standard for how to do this, one that does not feel alien to XML. XML at its most basic is really just syntax. (I am not sure how much one can say that XML namespaces are a semantic construct though...)

JSON on the other hand comes with syntactic structures for things called numbers and booleans, but without namespaces. And there is no well established way of adding namespaces to it. Will it be able to grow such a thing in the clean way XML did?

If you look at N3 on the other hand you will see that it is built on URIs. This is clearest by looking at the NTriples subset of N3, which is just lines of three URIs, or two URIs and a string. In fact you can think of N3 (or rather the Turtle subset of N3) as just syntactic sugar for NTriples. This makes me think that N3 may be the best language long term for data interchange on the web.
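To see how little machinery NTriples needs, here is a rough sketch of a parser for just the simple case of three URIs or two URIs and a plain literal (it ignores blank nodes, typed literals and escaping, so it is nowhere near a conforming parser):

```python
import re

# Matches: <uri> <uri> (<uri> | "literal") .  -- a simplification of NTriples.
LINE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(?:<([^>]*)>|"([^"]*)")\s*\.')

line = ('<http://richard.cyganiak.de/foaf.rdf#cygri> '
        '<http://xmlns.com/foaf/0.1/knows> '
        '<http://www.anjeve.de/foaf.rdf#AnjaJentzsch> .')

m = LINE.match(line)
subject, predicate, obj_uri, obj_literal = m.groups()
print(subject, predicate, obj_uri or obj_literal)
```

One regular expression per line is essentially the whole grammar for this subset; everything interesting lives in the URIs.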

Posted by Henry Story on July 17, 2007 at 05:24 AM CEST #

YAML (http://www.yaml.org/, http://en.wikipedia.org/wiki/YAML) is a superset of JSON. YAML has much the same purpose as JSON, exchanging or storing state information, with a bit more emphasis on human readability:

"YAML ... is a straightforward machine parsable data serialization format designed for human readability and interaction with scripting languages such as Perl and Python. YAML is optimized for data serialization, configuration settings, log files, Internet messaging and filtering."

YAML has a feature, tags, that can be used much as you would use XML namespace names to disambiguate your vocabulary (set of type names). And yes, you can use a URI as a tag (the 'tag' URI scheme is preferred).

Now, if the state that you want to serialize looks like a bunch of RDF statements, you will obviously be better off using some convenient RDF serialization format. But quite a lot of state information has no natural fit to triples and no intrinsic value in being Web-enabled.

Here is an interesting comparison of textual (serialization) formats for the Web:

<XML/> without the X - the return of {{Textual}} markup

Posted by Peter Ring on July 17, 2007 at 06:10 PM CEST #

Peter Ring wrote: Wikipedia says:
"YAML ... is a straightforward machine parsable data serialization format designed for human readability and interaction with scripting languages such as Perl and Python. YAML is optimized for data serialization, configuration settings, log files, Internet messaging and filtering."
Thanks for reminding me of YAML. I noticed that the YAML spec is pretty long compared to the Turtle spec. Though Wikipedia says it is a data format, my problem is that it does not even come with a semantics: there is no general way to tell whether two documents are saying the same thing or something different (unless they are character-for-character equivalent), or whether one document says a subset of what another says, and no way to define equivalences between terms in different vocabularies. All that work, and still it delivers so little!

Using whitespace as a bracketing mechanism is really not a good idea. It makes the format impossible to use in emails, for example, or when chatting in newsgroups. Yahoo Groups munges whitespace for breakfast, as you can see in the mail I wrote that led to this blog post. I had written out the Turtle example there with nice indentation to make it easy to read, but Yahoo decided to remove it all. With Turtle the result is still a document that makes sense, if perhaps one more difficult to read. Do that to a YAML document and you have killed it.

YAML has a feature, tags, that can be used much as you would use XML namespace names to disambiguate your vocabulary (set of type names). And yes, you can use a URI as a tag (the 'tag' URI scheme is preferred).
Thanks for pointing that out. It seems tagged on though, so to speak. The simplest form of Turtle, NTriples, is just URIs. There it is the core of the language.
Now, if the state that you want to serialize looks like a bunch of RDF statements, you will obviously be better off using some convenient RDF serialization format. But quite a lot of state information has no natural fit to triples and no intrinsic value in being Web-enabled.
Could you give me an example of some state information that does not fit well into triples? People don't realise that this is the basic data format. Even objects are really just a bunch of relations with methods attached, as so(m)mer demonstrates.

Thanks for the pointer to the presentation on textual markup languages. That is really good.

Posted by Henry Story on July 18, 2007 at 03:50 AM CEST #

I just came across a cool Turtle to JSON translator. I placed the Turtle from the article above into the top box, pressed the "Turtle to JSON" button and got the following:

{
  "@prefix:foaf": "<http://xmlns.com/foaf/0.1/>",
  "@prefix": "<http://www.w3.org/2000/10/swap/pim/contact#>",
  "@about": "http://eg.com/joe#p",
  "a": "foaf:Person",
  "foaf:firstName": "John",
  "foaf:family_name": "Smith",
  "home": {
    "address": {
      "city": "New York",
      "country": "New York",
      "postalCode": "10021",
      "street": "21 2nd Street",
      "]": [
        "foaf:phone",
        "<tel:+1-646-123-4567>"
      ]
    }
  }
}
Cute! Well, apart from the bug that it misreads the closing "]" of the address object as a new relation... Anyway, if the user agent parsing the fixed translation could guess the origin of the JSON, it would be able to deal with it as just another encoding of RDF.
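For what it is worth, the stray "]" entry could be repaired in a post-processing step before handing the object to an RDF-aware consumer. A sketch, assuming output shaped like the fragment above (the repair convention is mine, not the translator's):

```python
import json

# A cut-down version of the translator's buggy output.
raw = """{
  "address": {
    "city": "New York",
    "]": ["foaf:phone", "<tel:+1-646-123-4567>"]
  }
}"""

obj = json.loads(raw)

def fix(node):
    """Recursively turn a misparsed "]" entry back into a key/value pair."""
    if isinstance(node, dict):
        if "]" in node:
            key, value = node.pop("]")
            node[key] = value
        for child in node.values():
            fix(child)

fix(obj)
print(obj["address"]["foaf:phone"])  # <tel:+1-646-123-4567>
```

Of course the right fix is in the translator itself; this only patches up its output after the fact.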

That site also provides a link to Jim Ley's Javascript parser. I wonder if one could do for Javascript what so(m)mer is setting out to do for Java...

Posted by Henry Story on July 18, 2007 at 04:27 PM CEST #

Using whitespace for bracketing works fine for Python. Just try it. Anyway, in YAML, there is also flow-style bracketing, much like in most computer languages.

Should the fact that any data structure could be expressed in terms of s-expressions (or triples, with a bit of effort) imply that you would want to transmit it using an RDF serialization? The simple answer is, it depends! It depends on the nature of the information and why you want to transmit it.

I am somewhat confused about what the subject of the discussion is: information modeling or serialization formats? Anyway, I'll try to describe a few examples from my own experience in which XML and/or RDF would do more harm than good.

I used to work as a consultant in the metalworking industry, implementing data transmission between CAD/CAM systems and CNC mills and lathes. A product model in a CAD system comprises a geometrical description of surfaces and volumes. This geometrical description is transformed into a CNC program, a sequence of tool changes and movements for the CNC mill. CNC programs are usually encoded in simple formats, similar to HPGL. The geometry model can usually be exchanged in an XML-based form, but you wouldn't gain anything from expressing the geometry model in terms of triples -- except starting the discipline of geometry modeling all over again. And you wouldn't gain anything from expressing the CNC tool's trajectory along a double curvature, i.e. a loooong list of X,Y,Z coordinates, as triples -- except a larger CNC program.

I used to work in telecom instrumentation as a tech writer. To set up connections and otherwise do the business of a telecom system, a number of signaling protocols are employed. These protocols are usually described and encoded in terms of ASN.1. Using XML and/or RDF would only add enormous cost. Of course, you occasionally want to "see source", but you can't do that real-time at 140 Mbit/s, so you need recording and decoding instrumentation anyway.

Oh, but this is not really exchange of state info as I think of it; this is nothing like the exchange of documents and forms on the Web, you might say. Well, there are worlds of exchange that barely touch the Web.

It's not that space-time, signals, programs, event logs etc. can't be described and serialized in useful ways using triples. But if you just want to consume the information, and you don't want to compose it with other information resources on the Web, why bother?

Oh, but someone might someday want to do interesting and fantastic things. Well, let's cross that bridge when we get to it.

I should add that I now work in legal publishing and do find RDF and SPARQL extremely useful ;)

Posted by Peter Ring on July 18, 2007 at 05:13 PM CEST #

IMHO, the binary syntax/semantics distinction is too blunt an instrument for the current discussion. The three-level stack [syntax / data structure / semantics] is better: JSON is a serialization of a particular variety of mathematical object (aka data structure): a tree with labeled edges that has literals as leaves, an array being a node with integers as edge labels. Labeled trees can in turn be used to represent all kinds of things, the semantics being defined by the application (and I don't mean to restrict attention to the semantic web here: any useful data carries a meaning defined by the application that manipulates it).

The data model underlying RDF is a graph whose edges and non-literal nodes are labeled by URIs. It is an easy exercise to encode graphs as trees. JDIL (Json Data Integration Layer) defines a particular way of doing this - one that includes XML-style namespaces for concise expression of URIs. This yields the four-level stack: [JSON syntax / Labeled Tree / URI Graph / Semantics].
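I have not checked JDIL's exact rules, but the general idea of expanding prefixed keys against namespace declarations can be sketched as follows (the "@prefix:" convention here is made up for illustration and is not JDIL's actual syntax):

```python
# Hypothetical convention: a key "@prefix:foaf" declares a namespace, and
# other keys of the form "foaf:name" are expanded against it.
def expand(tree, prefixes=None):
    prefixes = dict(prefixes or {})
    out = {}
    # First collect the prefix declarations at this level.
    for key, value in tree.items():
        if key.startswith("@prefix:"):
            prefixes[key[len("@prefix:"):]] = value
    # Then rewrite the remaining keys, recursing into nested objects.
    for key, value in tree.items():
        if key.startswith("@prefix:"):
            continue
        prefix, _, local = key.partition(":")
        if local and prefix in prefixes:
            key = prefixes[prefix] + local
        if isinstance(value, dict):
            value = expand(value, prefixes)
        out[key] = value
    return out

doc = {"@prefix:foaf": "http://xmlns.com/foaf/0.1/",
       "foaf:firstName": "John"}
print(expand(doc))  # {'http://xmlns.com/foaf/0.1/firstName': 'John'}
```

After expansion every edge label is a full URI, which is the "short hop from tree to graph" described above.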

So, JSON can easily be used to encode RDF - all that's needed is the short hop from tree to graph. What are the merits of this encoding relative to the alternatives? This is a practical matter unrelated to the philosophical debates at the semantic level: I have a well-defined data structure (a URI graph) that I want to store or transport - what's the best way of doing this? The criteria are: compactness, computational efficiency of generation and parsing, availability of tools for parsing and generation, and legibility. JSON is compact compared to any of the XML representations (though not compared to Turtle - it is about the same). JSON has excellent parser support not just in the JavaScript environment, but in all of the major Web programming environments: PHP, Python, Perl, Ruby, ActionScript... JSON is all over the web now - it's not just a client-side technology. Finally, the legibility of JSON is good.

Granting the feasibility of the tree-to-graph hop, JSON is a practical candidate for serializing RDF. And, to repeat: the relevant issues are not philosophical but of the ordinary software engineering variety - what's fast, what's easy, what's supported.

Posted by Chris Goad on July 27, 2007 at 09:42 AM CEST #

Hi Chris,

I am not sure you get much out of your syntax / data structure / semantics distinction. JSON, like XML, is defined primarily syntactically. In fact all well-defined artificial languages require that. N3 has it too. But JSON does not come with a default semantics, other than that it is easily parsed by a JavaScript process, and, as I explained, is therefore usually seen as having a behavioral semantics. Usually pieces of JSON have the semantics of the program that is meant to interpret them, which means the two are very closely tied together, which as I pointed out keeps the technology in the client-server realm.

But nothing stops one from doing as JDIL does and adding namespaces to JSON. This would, if adopted, help JSON break out of the client-server realm. If this is going to be obvious to interpreting programs, then it had better be served with a special MIME type though, perhaps "application/json+rdf" or "application/jdil"; otherwise there would be no reliable way to know that it was meant to be interpreted that way rather than any other way. JDIL therefore looks like an interesting attempt to give an open semantics to JSON, by tying it to RDF via URIs. My guess is that there is still a lot of work to be done there to make sure it ties in cleanly, and to define the semantics of JDIL precisely. Anyway, it will be interesting to see how it catches on. Is the namespace piece going to help it be correctly interpreted by JSON evals? Or will an intermediate layer need to be set up to make sure the namespaces get interpreted correctly? Will the JSON VM force certain interpretations on the code that are counterproductive otherwise? Lots of questions... Thanks for pointing this out.

You may also want to follow the thread on another attempt at a JSONesque view of RDF: RDFON.

I myself currently am very happy with N3. It is built as a stack on simpler languages such as NTriples and Turtle, and shows the way forward to a rule based language. The syntax is simple and readable, and it has a lot of work behind it, and it interacts very well with SPARQL.

Posted by Henry Story on July 27, 2007 at 11:11 AM CEST #

Hello Henry,

>But JSON does not come with a default semantics, other than that it is easily parsed by a JavaScript process, and, as I explained, is therefore usually seen as having a behavioral semantics.

I don't quite agree. JSON has its origins in JavaScript, and has an operational definition in the JavaScript parser. But the JavaScript/ECMA 262 spec is clear enough on the data structure denoted by the object syntax and in this sense already provides a denotational semantics. Then, Doug Crockford performed the very valuable service of separating out the object notation from its original JavaScript context, giving it a name, and writing down its denotational semantics clearly and concisely at JSON.ORG. There are compatible implementations of parsers in a dozen languages, showing that the structure denoted by JSON is clear in practice as well as in theory. By now, JSON's special connection to JavaScript is only historical.

>Is the namespace piece going to help it be correctly interpreted by JSON evals? Or will an intermediate layer need to be set up to make sure the namespaces get interpreted correctly?

The latter: to go from the JSON tree data structure with namespaces to the JDIL URI graph, an algorithm is needed. I have implementations in the languages that are involved in our own projects: JavaScript and PHP. I need to spiff these up and publish them on the JDIL site. Also, although I think that the tree-to-graph mapping is adequately specified on the JDIL page, I agree with you that the section about RDF needs elaboration and additional formality. I also agree with your point about mime types - and will think about the best course in this regard.

Meanwhile, Alistair Miles has implemented RDFOO, of which he says: "RDFOO is (more or less) an implementation of JDIL, using Jena and the Java classes for JSON from json.org."

I like Turtle too, but JSON is becoming the new XML, with very widespread support - giving it practical advantages for slinging object graphs around.

Posted by guest on July 30, 2007 at 11:21 AM CEST #

Then, Doug Crockford performed the very valuable service of separating out the object notation from its original JavaScript context, giving it a name, and writing down its denotational semantics clearly and concisely at JSON.ORG.
I don't see what I am thinking of as a denotational semantics on JSON.ORG. I just see a syntax: how to put strings together to form other strings. There is nothing there that will help me know when and how I can say of two objects A and B that they are the same, which is the core of what semantics gives you. It is worth reading the RDF Semantics document on the W3C web site to see what this is about.

This aside, check out the response Tim Berners Lee just wrote to the RDFON proposal on the Semantic Web mailing list. There are some very interesting points he brings up there.

Posted by Henry Story on July 31, 2007 at 01:25 PM CEST #

Hello Henry,

I don't see what I am thinking of as a denotational semantics on JSON.ORG.

I didn't mean anything fancy by "denotational semantics" - nothing as complex as the model theories of various logics (such as OWL's description logic). I only meant "denotational" in the sense that JSON syntax encodes or denotes a mathematical object/data structure rather than an "operational" computation. (In the doc: "JSON is built on two structures: a collection of name/value pairs ... and an ordered list of values" - in other words, a labeled tree with an option for consecutive integer labels.) Call this an "encoding" rather than a "semantics" if you like, but either way JSON strings denote something quite definite; JSON is not only a collection of rules defining the notion of a well-formed string.

In the email about RDFON that you reference, TBL expresses the same view:

And in fact the JSON code is not program, it is JS object. JSON's { x: "3", y: 5} is not assignment but a data structure.

The tree data structure that JSON encodes can in turn encode a URI graph - the same data structure encoded by RDF/XML, Turtle, triples, RDFON, and various other syntaxes. Building a model theory/semantics for the URI graph is where things get interesting. The encoding levels below are just plumbing. But good plumbing is nice to have! My original point: JSON works as plumbing, and has a few practical things going for it.

Posted by Chris Goad on August 05, 2007 at 05:28 PM CEST #

hi all, i may be late to the show but MAYBE this gives me an edge in getting the last word.

ok, to put it simply: i think most of the above post and commentaries are fundamentally wrong wherever they miss the point that both JSON and XML/RDF are, fundamentally, nothing but series of octets that we visualize as strings of characters. they are absolutely identical in this respect when they go over the wire. this is their common ‘extensional’ ontology.

second, JSON and XML are very similar in that they are intended to represent possibly nested data structures of name/value pairs—both are extremely similar with respect to their ‘intensional’ ontology.

observe i am not talking here yet about what you can express with JSON or XML; i am talking about the foundations.

i guess that any structure that can be molded as XML is also expressible with JSON, and vice versa. this is a bit similar to programming languages: once a language is turing-complete, you can already express ‘anything’ in it, which makes all languages similar in that whatever facilities or syntactics one language boasts over another is, in the end, just a matter of surface appearance. that appearance *does* matter a lot, as anyone who has ever compared say python to cobol will agree.

it is funny how many people lose sight of the simple fact that namespaces in XML and the structure of RDF are nothing but a convention how to structure names and values. that’s it. no meaning here.

much recent ontological discussion and some heavy but almost empty books suffer from people not realizing that putting another metalayer onto their data and labeling it semantic doesn’t mean they have ultimately attained meaning. no, they just did this: they hung a label next to the road sign "HOBOKEN 4 MILES" saying {type:"roadsign", placename:"HOBOKEN", distance: { value:4, unit: "us-miles" }}. it’s nothing but more letters for fewer letters, period.

then, some people claim that something fundamentally changes when you say ‘http://en.wikipedia.org/wiki/Hoboken,_New_Jersey’ for ‘HOBOKEN’. but nothing has changed fundamentally. the data has just gained a little more fitness to be passed around within a larger audience. we still need a lot of external knowledge to get to the meaning.

over the years, i have gained the impression that XML has some pretty lousy characteristics that make it less than optimally suited for delivering structured data, and that some people are a little benumbed by all the hype that surrounds it.

one of the reasons you can do more with less in JSON as compared to XML is that at least you have fundamental datatypes such as numbers and booleans clearly expressed in the language, and also that a data structure is always a hash or a list (i wish there were more, like sets and so on, but well).

the next reason JSON is superior to XML is that it painlessly switches between being a data object inside your virtual machine and a string that goes into a text file or over the wire. and this while almost completely eliminating the considerable overhead that XML processing entails.
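for what it's worth, that painless switch is exactly what the stock json modules give you in most languages; a sketch in python, reusing the road sign example from above:

```python
import json

# The road sign data from earlier in this comment thread.
sign = {"type": "roadsign", "placename": "HOBOKEN",
        "distance": {"value": 4, "unit": "us-miles"}}

wire = json.dumps(sign)   # object -> string, ready for a file or the wire
back = json.loads(wire)   # string -> object inside the VM again

print(back["distance"]["value"] + 1)  # numbers come back as numbers: 5
```

no schema, no parser configuration: the round trip preserves the hashes, lists, numbers and strings, which is the whole claim being made here.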

Posted by loveEncounterFlow on February 14, 2008 at 11:00 AM CET #

This is something that is completely wrong, since Douglas was not the creator of JSON; in fact JSON is originally described in the ECMA-262 specification as literal JavaScript. The other fact is that Douglas's JSON spec is incomplete in the way he specifies it. The ECMA-262 specification for literal JavaScript is the way all browsers' JavaScript engines work. So it is not based on a subset of the JavaScript programming language; it was stolen from JavaScript itself. Why do we need to clarify this? Because if we already have the standardization problem between browser JavaScript implementations, we are going to have more incompatibilities in the future with a new pretended stolen standard.
We need a better understanding of ECMA-262 and its literal scripting concept, because in fact the JavaScript programming language creator (a real genius) designed JavaScript not only as a programming language for streaming; otherwise you will always be confused by JSON. The same thing happens these days with closures, which nobody really understands, because the closures disclosure is that in fact they are enclosures.

Posted by jose gomez on March 21, 2008 at 03:06 AM CET #

I need to correct some of Jose's comments. I have never claimed to have invented JSON. I only claim to have discovered it. I do not claim to be the first to have discovered it. I gave JSON a name and a description and a little web site. Jose intends that credit go to someone else, but he chose not to name that person. That is an odd way of giving credit. The man behind JavaScript is Brendan Eich. Jose did get one thing right: Brendan is in fact a really smart guy.

Posted by Douglas Crockford on July 09, 2008 at 06:43 PM CEST #

Ha, what a hot place :-)
It seems you mix syntax and semantics.
I am facing this problem now while handling a new ON.
I think Henry Story (http://blogs.sun.com/bblfish/entry/the_limitations_of_json#comment-1184794039000) has proved this point.
JSON itself has no namespaces, BUT a JSON post-processing tool can do namespace things.
Yes, post-processing can do much more than pre-processing.
So one can say:
JSON does not need namespaces, but a JSON namespaces tool is needed. :)

Posted by qinxian on March 27, 2009 at 11:23 PM CET #
