Monday Jan 02, 2006

Prophet warning


Nick Cohen, a columnist for the Observer, writes some good pieces in his "Without prejudice" column, and the one for the New Year is one of the best.

This being the new year and all, prophets are making themselves heard. So... a prophet should be judged:
  1. by the accuracy of their previous forecasts;
  2. how well their beliefs reflect observable reality; and
  3. if they update their beliefs in response to new evidence.
I predict that most prophets will fall quite short of all three of the above. However, I do not usually make predictions, so I am willing to change my mind if found wrong :-)

Thursday Dec 08, 2005

Rich Turner on Binary XML in WCF

Rich Turner, a product manager in Microsoft's Web Services Strategy team, writes:

"In our own work, we see WCF's text-XML encoder significantly outperform our current ASMX/WSE text-XML encoder engine. We also see the WCF BinaryXML serializer delivering hige performance and throughput improvements vs. serializing raw text-XML. Right now, the WCF BinaryXML format is proprietary because of the absence of a standard BinaryXML format. However, once the world agrees on one (or maybe more) BinaryXML formats, you can be sure that we or someone else will ship a compliant encoder for WCF.

Roll on, that day!"

Well, I think that day is here and now. Fast Infoset is a standard binary encoding of the XML Information Set that is jointly standardized by ITU-T and ISO.

Maybe someone will implement Fast Infoset for WCF? Then JWSDP 1.6 and JAX-WS 2.0 clients/services could interoperate with WCF clients/services using a standard binary encoding of the XML information set.

As a start one could look at the open source Java-based implementation here.
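For example, converting between XML documents and fast infoset documents with that implementation is just a matter of plugging its SAX serializer and parser into JAXP. Roughly (the file names are made up and error handling is omitted):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.sax.SAXResult;
    import javax.xml.transform.sax.SAXSource;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    import org.xml.sax.InputSource;

    import com.sun.xml.fastinfoset.sax.SAXDocumentParser;
    import com.sun.xml.fastinfoset.sax.SAXDocumentSerializer;

    public class FastInfosetRoundTrip {
        public static void main(String[] args) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();

            // XML text document -> fast infoset document
            SAXDocumentSerializer serializer = new SAXDocumentSerializer();
            FileOutputStream fiOut = new FileOutputStream("message.finf");
            serializer.setOutputStream(fiOut);
            t.transform(new StreamSource("message.xml"), new SAXResult(serializer));
            fiOut.close();

            // fast infoset document -> XML text document
            SAXDocumentParser parser = new SAXDocumentParser();
            FileInputStream fiIn = new FileInputStream("message.finf");
            t.transform(new SAXSource(parser, new InputSource(fiIn)),
                        new StreamResult("roundtrip.xml"));
            fiIn.close();
        }
    }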

Tuesday Nov 29, 2005

Fast Infoset and WCF's binary XML encoding

When browsing Microsoft's WCF APIs I came across some interesting information on the WCF "binary XML" format.

XmlReader is a pull-based API for processing an XML infoset. The same API can be used for processing XML documents or "binary XML" documents. How?

The static XmlReader.Create method can take as input a Stream of octets that is an XML document or a "binary XML" document. The documentation of this method states:

"The first two bytes of the stream is checked first to determine if it is using tokenized binary XML format. If the first two bytes contain the 0xDF and 0xFF value the binary XML reader is returned."

Here is the bit I find interesting. Fast Infoset specifies that the first two octets of a fast infoset document are 0xE0 and 0x00. There is only a difference of 1 between the two (comparing the first octets as 8-bit integers, or each pair as a 16-bit integer with the most significant byte first)!

Given that the two binary formats use different "magic numbers" it should be possible to integrate Fast Infoset into WCF without any conflicts :-)

The first two octets of a fast infoset document were chosen because they are different from the first two octets that can occur for an XML document encoded using a well-known character encoding scheme (see Appendix F of XML 1.0).

We have used the same type of factory mechanism based on the first couple of octets when prototyping solutions for the Java Web Services Developer Pack. For the final integration into JWSDP 1.6 we chose to rely on the MIME type instead.
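The sniffing logic itself is trivial. A hypothetical factory along those lines (the class and method names are made up, but the FI SAX parser class is the real one from the open source implementation):

    import java.io.BufferedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.xml.sax.ContentHandler;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;
    import org.xml.sax.XMLReader;
    import org.xml.sax.helpers.XMLReaderFactory;

    import com.sun.xml.fastinfoset.sax.SAXDocumentParser;

    public class InfosetSource {
        // Sniff the first two octets of the stream and dispatch to the
        // appropriate parser.
        public static void parse(InputStream in, ContentHandler handler)
                throws IOException, SAXException {
            BufferedInputStream bis = new BufferedInputStream(in);
            bis.mark(2);
            int b1 = bis.read();
            int b2 = bis.read();
            bis.reset();

            XMLReader reader;
            if (b1 == 0xE0 && b2 == 0x00) {
                // A fast infoset document always starts with 0xE0 0x00.
                reader = new SAXDocumentParser();
            } else {
                // Otherwise assume a text XML document (see Appendix F of
                // XML 1.0 for the possible initial octets).
                reader = XMLReaderFactory.createXMLReader();
            }
            reader.setContentHandler(handler);
            reader.parse(new InputSource(bis));
        }
    }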


Of further note is that XmlReader and XmlWriter have specific methods to read/write binary data, such as octets or integers. When using the text-based implementations the data will be converted from/to characters in accordance with the lexical representations of data types specified by W3C XML Schema. But when using a binary-based implementation such data could be encoded much more efficiently.

Having such methods on the XmlReader and XmlWriter is rather useful IMHO. Not only is it a very useful aid for developers, it also makes integration of specially optimized binary encodings of data quite easy while hiding the implementation details from the developer.

Fast Web Services Tools by OSS Nokalva

OSS Nokalva has announced its Fast Web Services Tools, which provide a C/C++ implementation of ITU-T X.892 | ISO/IEC 24824-2 Fast Web Services. From the more detailed technical description it looks like a thorough piece of work.

Fast Web Services specifies a very efficient encoding of SOAP message infosets using:

  • the ASN.1 Schema for SOAP;
  • X.694, for the mapping of W3C XML Schema to ASN.1 schema embedded in WSDL service descriptions; and
  • the ASN.1 Packed Encoding Rules.

This enables the transmission of very compact messages that can be efficiently produced and processed. An important property of the Fast Web Services work is that it requires no modification to WSDL 1.1 descriptions that conform to the WS-I Basic Profile 1.0. I think there are at least two interesting domains for Fast Web Services:

  • mobile and wireless, where there is limited bandwidth and devices with limited processing power; and
  • XML appliances, where Fast Web Services could provide an interesting and useful feature: converting between XML SOAP messages and binary-encoded SOAP messages for efficient server-side and/or client-side processing and production within a particular domain, while still enabling wider interoperability over the Web.

 

Monday Nov 07, 2005

Scatter plots in Japex

I wrote here that:
Rather than present the individual results in all their gory detail it is easier in this forum to show the average of all the results and provide a link to the zip file containing detailed results. Ideally I would have liked to show a log-log scale scatter plot of all the results (size/throughput vs parser time) but Japex does not have this functionality (not yet, but I want to add this capability!)

Well, before I could say 'scatter plot functionality', Santiago added such scatter plot functionality to Japex. Excellent, many thanks Santiago!

Now I have to update the Fast Infoset performance results with such plots.

Thursday Oct 27, 2005

Fast Infoset and bnux performance results: "red delicious"-to-"golden delicious"

In a previous blog I discussed some Fast Infoset and bnux performance results and how the measurement of Fast Infoset was very different from the measurement of bnux ('oranges-to-apples').

For the first iteration, to get close to an 'apples-to-apples' comparison, I have developed an optimal FI XOM parser that integrates into XOM in such a manner that it assumes the hallowed status of a trusted parser performing correct well-formed checking (see the previous blog for the configuration). This was done for reasons of expediency, as it is easy to modify the existing FI DOM parser (which already performs well-formed checking). Such a solution is not practical given that it avails of the package-private methods for XOM node construction. However, the results will give a good indication of Fast Infoset performance and whether it is worth pursuing a more practical implementation along the lines of how bnux is integrated.

The differences between the FI and bnux measurements have been reduced from different layers of integration and two sets of well-formed checks to the same layer of integration and one set of well-formed checks. The only difference that remains is where the well-formed checks are being performed. The FI XOM parser performs the well-formed checks and the bnux XOM parser defers such checks to XOM (which is probably one of the, if not the, most rigorous well-formed checkers there is).

This is not an exact 'apples-to-apples' comparison, but I would classify it as comparing different types of apple, say 'red delicious' to 'golden delicious'. Perhaps I should have originally compared two fruits that have an old common ancestor (whatever that might be; I briefly looked at the tree of life but could not find apple or orange trees present in the database!) and then chosen a fruit with a closer common ancestor that is not an apple.

A 'red delicious' is great for snacking but not good for baking, kind of like the FI XOM implementation since it is not designed to be baked into XOM for practical use. A 'golden delicious' is great for snacking, cooking and baking, kind of like the bnux XOM implementation. I digress somewhat but I hope the point is clear: the measurement process is a lot closer to 'apples-to-apples' than before. (I may have swung slightly in favour of FI over bnux because the FI well-formed checking is likely to be more efficient than the XOM well-formed checking. I will return to this point later on).

The data chosen to measure was the set of 56 XML documents supplied with the nux distribution. I made sure the FI XOM parser passed tests for round-trip consistency on all the XML documents to ensure that all information was reported correctly and the parser was not cheating and gaining an unfair performance advantage.

To measure performance I created Japex drivers to:

  • report the size of bnux documents; and

  • report the average time to parse documents using the FI XOM and bnux XOM parsers.

(I reused the FI size Japex driver from the Japex XML driver library to report the size of FI documents.)
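A Japex driver is just a class that extends JapexDriverBase. A minimal size driver, much simplified from the ones in the library, looks something like this (it reports the size in KB of an already-encoded input file via the japex.resultValue parameter):

    import java.io.File;

    import com.sun.japex.JapexDriverBase;
    import com.sun.japex.TestCase;

    public class DocumentSizeDriver extends JapexDriverBase {
        private long size;

        public void prepare(TestCase testCase) {
            // The input document is supplied per test case via japex.inputFile.
            size = new File(testCase.getParam("japex.inputFile")).length();
        }

        public void run(TestCase testCase) {
            // Nothing to measure per iteration; this driver only reports sizes.
        }

        public void finish(TestCase testCase) {
            // Report the result explicitly rather than letting Japex time run().
            testCase.setDoubleParam("japex.resultValue", size / 1024.0);
        }
    }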

Then, I created Japex configuration files to describe the measurement process to be performed by the Japex runtime (one for size and one for parsing). Running both configuration files produced some nice pretty charts :-)
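A configuration file just lists the drivers to compare and the test case documents, plus a few global parameters. In outline (the driver class names, file names and parameter values here are illustrative):

    <testSuite name="FastInfosetVsBnuxParsing"
               xmlns="http://www.sun.com/japex/testSuite">
        <param name="japex.classPath" value="dist/lib/*.jar"/>
        <param name="japex.warmupTime" value="10"/>
        <param name="japex.runTime" value="30"/>
        <param name="japex.resultUnit" value="ms"/>

        <driver name="FastInfosetXOMParserDriver">
            <param name="japex.driverClass" value="FastInfosetXOMParserDriver"/>
        </driver>
        <driver name="BnuxXOMParserDriver">
            <param name="japex.driverClass" value="BnuxXOMParserDriver"/>
        </driver>

        <testCase name="periodic.xml">
            <param name="japex.inputFile" value="data/periodic.xml"/>
        </testCase>
        <!-- ... one testCase per document in the set ... -->
    </testSuite>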

Rather than present the individual results in all their gory detail it is easier in this forum to show the average of all the results and provide a link to the zip file containing detailed results. Ideally I would have liked to show a log-log scale scatter plot of all the results (size/throughput vs parser time) but Japex does not have this functionality (not yet, but I want to add this capability!). So average results it is, with a 'government health warning' that these averages are taken over a wide range of sizes and parse times, so please download the detailed results if you want to analyze further.

The size results are presented in the following chart (the detailed report may be found here).




The average results are presented as a percentage of the XML document size, so the smaller the better. The amount of compression achieved by FI can depend on the degree to which repeating text content and attribute values are replaced with small integer values (commonly referred to as indexing). The FI serializers can control this degree using a simple heuristic based on the length of the character strings for text content and attribute values: if the length is less than 'n' then the character string is indexed, otherwise it is not. The red bar labeled 'FastInfoset7SizeDriver' presents results for the fast infoset documents with the indexing of character strings less than 7 in length (the default value for the implementation of the Fast Infoset serializers), and the green bar labeled 'FastInfoset64SizeDriver' presents results for the indexing of character strings less than 64 in length. It is my understanding that bnux indexes all character strings up front at the head of the document.
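To make the heuristic concrete, here is an illustrative sketch of it (this is not the actual serializer code, and the Output interface is a made-up stand-in for the encoder's low-level write methods):

    import java.util.HashMap;
    import java.util.Map;

    public class StringIndexer {
        interface Output {
            void writeIndex(int index);                       // emit a small integer reference
            void writeLiteral(String s, boolean addToIndex);  // emit the literal characters
        }

        private final Map<String, Integer> table = new HashMap<String, Integer>();
        private final int lengthLimit; // the 'n' above, e.g. 7 or 64

        public StringIndexer(int lengthLimit) {
            this.lengthLimit = lengthLimit;
        }

        public void encode(String s, Output out) {
            if (s.length() < lengthLimit) {
                Integer index = table.get(s);
                if (index != null) {
                    out.writeIndex(index.intValue());          // repeated value: emit its index
                } else {
                    table.put(s, Integer.valueOf(table.size()));
                    out.writeLiteral(s, true);                 // first occurrence: emit literal and add to table
                }
            } else {
                out.writeLiteral(s, false);                    // long strings are never indexed
            }
        }
    }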

When an appropriate degree of indexing is chosen FI is in most cases slightly more efficient than bnux. In three cases bnux is more efficient than FI. My initial guess as to why this is the case is that FI, for the most part, is slightly more efficient at 'packing' information, but in some cases the length-prefixed encoding of character strings for FI is more costly than the zero-terminated encoding of character strings for bnux.

The parse results are presented in the following chart (the detailed report may be found here).




The average results are presented as a percentage of the bnux parse time, so the smaller the better. These results show that FI is slightly faster than bnux. In three cases bnux is slightly faster. Although the average results do not show a difference between the degrees of indexing (64 and 4096) there is a difference in individual results when correlated to differences in size. These results are encouraging!

As I said earlier FI is likely to have an unfair advantage over bnux because of where the well-formed checking is performed. To ascertain how unfair this might be I hacked together an FI SAX parser that performed no well-formed checking, then measured this against the well-formed checking SAX parser and the bnux parser using a XOM null node factory (which performs no creation of nodes and therefore no well-formed checking). The detailed report may be found here.

From these and the previous results it is possible to obtain the approximate cost of a FI XOM parser with XOM well-formed checking by doing the following:

  1. subtract the null node factory parse time from the bnux parse time;

  2. add the value obtained in 1 to the parse time of the hacked FI SAX parser that performs no well-formed checking;

  3. compare the result of 2 against the bnux parse time.
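Written out as a small helper (the variable names are mine; all times are in the same unit):

    // Estimate the parse time of a hypothetical FI XOM parser that, like the
    // bnux XOM parser, defers well-formed checking to XOM, relative to bnux.
    static double estimatedFiXomRelativeToBnux(double bnuxXomParseTime,
                                               double bnuxNullNodeFactoryTime,
                                               double fiSaxNoChecksTime) {
        double xomCheckAndBuildCost = bnuxXomParseTime - bnuxNullNodeFactoryTime; // step 1
        double estimatedFiXomTime = fiSaxNoChecksTime + xomCheckAndBuildCost;     // step 2
        return estimatedFiXomTime / bnuxXomParseTime;                             // step 3: > 1.0 means slower than bnux
    }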

The data indicates that such a FI XOM parser would be anywhere between 5 and 20% slower (depending on the document), with an average of 5%. This is also encouraging! Intuitively this is to be expected, as bnux is a simpler format than Fast Infoset, so I would expect it to be more efficient in this respect. (But note that the differences between the Fast Infoset encoding and the bnux encoding are not that great so, intuitively, I would not expect the differences to be that large.)

Overall the results using the FI XOM parser (with well-formed checking performed by the parser) and the possible results using a theoretical XOM parser (with well-formed checking performed by XOM) indicate that it is worth implementing an FI XOM parser in the same manner as bnux for practical use with XOM and also nux. Development of such a parser, in addition to an efficient serializer, is likely to take longer than the development of the prototype parser. I may need to rethink the design of vocabulary tables and external vocabularies such that implementations can optimize the vocabulary table representation and convert to and from a canonical vocabulary table representation.

Fast Infoset and SOA

Following hot on the heels of the Java Web Services Performance Analysis and Benefits of Fast Infoset paper, here is another one, discussing Fast Infoset and SOA!

Wednesday Oct 26, 2005

Java Web Services Performance Analysis and Benefits of Fast Infoset

A paper on the benefits of Fast Infoset in JWSDP 1.6 has been published. The Web Services performance test kit from PushToTest (the SOA Kit) was used to measure the PushToTest Web services deployed using JWSDP 1.6, and the results are encouraging!

Friday Oct 21, 2005

Japex drivers for XML-based performance measurement

I have created a library of Japex drivers for measuring the performance of XML-based processing. There are drivers for:

  • Parsing of XML and fast infoset documents using various APIs (SAX, StAX and DOM) and implementations. There are separate drivers for comparing the default XML parser included with the JDK and the Xerces XML parser.

  • Serializing DOM Document instances to XML or fast infoset documents.

  • JAXB marshalling and unmarshalling to and from XML and fast infoset documents.

We are using this library internally to measure and track performance. Using Japex and this library means that developers do not need to create their own performance test suites to measure any performance improvements made to code, and can also track performance over builds to find any regressions. So hopefully the end result is that developers are more productive and performance improves, or at least does not regress.
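For example, the JAXB drivers essentially boil down to marshalling to, and unmarshalling from, fast infoset documents via the FI StAX classes. A rough sketch (the PurchaseOrder class and file name are made up):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.bind.annotation.XmlRootElement;

    import com.sun.xml.fastinfoset.stax.StAXDocumentParser;
    import com.sun.xml.fastinfoset.stax.StAXDocumentSerializer;

    public class JAXBFastInfosetExample {
        @XmlRootElement
        public static class PurchaseOrder {
            public String item = "widget";
            public int quantity = 2;
        }

        public static void main(String[] args) throws Exception {
            JAXBContext context = JAXBContext.newInstance(PurchaseOrder.class);

            // Marshal a JAXB object to a fast infoset document.
            Marshaller m = context.createMarshaller();
            FileOutputStream out = new FileOutputStream("po.finf");
            m.marshal(new PurchaseOrder(), new StAXDocumentSerializer(out));
            out.close();

            // Unmarshal the fast infoset document back into a JAXB object.
            Unmarshaller u = context.createUnmarshaller();
            FileInputStream in = new FileInputStream("po.finf");
            PurchaseOrder po = (PurchaseOrder) u.unmarshal(new StAXDocumentParser(in));
            in.close();
            System.out.println(po.item + " x " + po.quantity);
        }
    }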

The library, JapexXMLDriverLibrary, is currently part of the Fast Infoset project (although we would like to modify the project to place Japex and the library into a new project or projects). There is a README that explains how to build the library.

I should really get around to adding some JavaDoc to explain the drivers in more detail. For the moment the best way to understand how it can be used is to look in the sub-directories of the FastInfosetPerformance directory.

What is Microsoft's experience with Efficient XML Interchange solutions?

Mike Champion states on his blog:

“For example, Microsoft's opposition to Binary XML standardization is based on actual experience wrestling with the different ways in which XML is inefficient and in analyzing how to improve that inside various XML-based products in ways that wouldn't impact interoperability. I suppose that to many on the outside it looks like stasis -- we recognize the XML is inefficient but can't endorse an industry-wide attempt to improve efficiency; from the inside I see the analysis -- the cost of XML's inefficiency seldom outweighs the benefit of data interoperability, and when it does, the optimizations for one product domain haven't been useful in another.”

I asked before (IIRC this will be the third time) what this experience is but have yet to receive a satisfactory answer. It would be great if some people from Microsoft could explain what difficulties they have encountered. I am in no way implying that Microsoft has not encountered very real issues. Stating that there are problems is fine, but such statements need to be backed up with reasons for those problems, otherwise it is difficult to move the debate forward.

Two such proprietary binary formats developed by Microsoft are:

  1. WCF's binary message encoding, for the efficient encoding of SOAP message infosets; and

  2. the BAML binary encoding (see here and here), for the efficient encoding of XAML infosets.

What are the differences between these encodings that make it difficult for them to share a common encoding framework?

Thursday Oct 20, 2005

Fast Infoset and bnux performance results: oranges-to-apples?

Wolfgang Hoschek presented performance results at GridWorld/GGF15 comparing the XOM-based parsing and serializing of XOM documents encoded using Fast Infoset (FI) and encoded using the bnux format. These results do not show FI performing very well!

Before I delve into the performance comparisons, a bit of background on bnux follows.

Bnux is a simple binary encoding of the XML Information Set that performs similar tricks to FI. I would classify bnux as supporting a subset of the functionality that FI supports: bnux is simpler than FI but supports fewer features. For example, the bnux encoding restricts streaming support to deserialization. When serializing, bnux requires that all string-based information of an XML infoset be known (e.g. from a tree representation such as a XOM document), since this string-based information is encoded at the head of the encoding. The advantage of this approach is that it is possible to 'pack' information based on the frequency of such string-based information. With FI it is possible to support the encoding of string-based information up front or in-line, which makes the encoding slightly more complex than bnux.

Bnux is part of nux, a framework for highly optimized XML processing. I am impressed with bnux and nux. A lot of good work has gone into developing this highly optimized toolkit.

Now back to the performance comparisons.

Although a lot of effort has been made to compare 'apples-to-apples', my view is that it is more 'oranges-to-apples'. There are lots of additional costs, due to the manner in which FI is being measured, that when summed together result in very poor results. I will try and explain why this might be so for the case of parsing.

Two forms of parsing are measured:

  1. Parsing to a XOM document. This measures the cost of parsing and creation of the Java objects associated with the XOM document. (Such forms are called 'bnux?' and 'fi?' models in the presentation for bnux and FI respectively; see slide 6.)

  2. Parsing using a 'Null' Node Factory. This measures parsing without the creation of the Java objects associated with the XOM document. (Such forms are called 'bnux?-NNF' and 'fi?-NNF' models in the presentation for bnux and FI respectively; see slide 6.)

For bnux measurements a very efficient parser is used that performs no well-formed checking and is optimally integrated with XOM such that features of the bnux encoding are taken advantage of to boost performance. The bnux parser relies on XOM to perform well-formed checks.

For FI measurements the FI SAX parser is used in conjunction with the well-formed checking XOM SAX handler. The FI SAX parser performs well-formed checking (for example, in-scope namespaces, duplicate attributes, incorrect use of the XML namespace and prefix, and NCName and PCDATA character checks). So for 1) well-formed checking is performed twice, and once for 2), whereas for bnux it is performed only once, for 1). The well-formed checking XOM SAX handler also sets up the FI SAX parser to perform string interning (a very expensive operation for small documents) and to report namespace attributes as part of the array of attributes reported by the start element event (start/end prefix mapping events will still occur but are ignored by the XOM handler). Because the SAX API is used it is not possible to take advantage of just the FI-based well-formed checking and the FI encoding features (in the way that bnux takes advantage of the bnux encoding features). All this puts FI at a big disadvantage.
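Concretely, the FI path being measured is roughly the following: XOM's Builder wraps the FI SAX parser with its own verifying SAX handler and node factory (the file name is made up):

    import java.io.FileInputStream;

    import nu.xom.Builder;
    import nu.xom.Document;

    import com.sun.xml.fastinfoset.sax.SAXDocumentParser;

    public class FIXOMParse {
        public static void main(String[] args) throws Exception {
            // XOM installs its own well-formed checking ContentHandler on top
            // of the FI SAX parser, so checks happen in both layers.
            Builder builder = new Builder(new SAXDocumentParser());
            Document doc = builder.build(new FileInputStream("test.finf"));
            System.out.println(doc.getRootElement().getLocalName());
        }
    }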

The only way to effectively get closer to an 'apples-to-apples' comparison is to compare using the same level of optimizations/tricks and the same API. That means developing an optimal FI XOM parser or an optimal bnux SAX parser that performs all the well-formed checks. I am currently in the process of developing the former (using the SAX DOM parser as a template). For development I am using XOM-1.1b4 (patched with Wolfgang's performance patches) and an early access of nux 1.4 that Wolfgang has kindly sent me.

Part of this development requires that I can measure things. I have used Japex to develop specific drivers for forms 1 and 2, in addition to drivers for the FI SAX parser and the soon-to-be-fully-implemented optimal FI XOM parser. Preliminary results produced from measuring these drivers (minus the FI XOM parser) on a small set of documents show that:

  • the FI SAX parser (performing well-formed checks) results are not that much slower than bnux form 2) results (using a null node factory, with no well-formed checking);

  • the FI SAX parser form 2) results are twice as slow as the plain FI SAX parser results, which indicates that the use of the well-formed checking XOM SAX handler with a null node factory is costly; and

  • the difference between the results of FI SAX parser form 1) and FI SAX parser form 2) is much larger than the difference between the results of bnux form 1) and bnux form 2). This indicates that the well-formed checking XOM SAX handler, in addition to Java object creation, is very expensive (some of the well-formed checking is performed at object creation).

Rather than present detailed results of the above and potentially compare 'apples-to-oranges', namely the FI SAX parser with the bnux parser using the null node factory, it would be far more convincing to wait until the optimal FI XOM parser is complete and present results comparing this to bnux form 1) to give an 'apples-to-apples' comparison. The preliminary results indicate that an optimal FI XOM parser could perform a lot better than using the FI SAX parser and could close the gap between FI and bnux. I hope this will be the case. Stay tuned!

Monday Oct 17, 2005

Update of the Fast Infoset web site

The Java.Net Fast Infoset web site has been given a little makeover.

The "battle for attention"

Blogs, I presume, start at the very, very end of a long tail with a likely readership of 1 (perhaps in the interest of proofreading what has been written, or just for vanity).

Bob Wyman points out, using data from PubSub link ranks, that:

"The harsh reality is that those with the lowest ranks tend to stay that way."
"Those with the highest rank tend to keep their high rank."
"It is in the middle ranks that the real "battle for attention" is taking place."
So I start out at the very, very end of the long tail where the signal-to-noise ratio is very, very high. Let's see if I can occasionally pop my head up into the lower echelons of the middle ranks!