Tuesday Apr 07, 2009

Faster new String(bytes, cs/csn) and String.getBytes(cs/csn)

The String(byte[] bytes, String csn) and String.getBytes(String csn) pair (and their variants) have been widely used as the convenient methods for char[]/String <-> byte[] encoding/decoding when

a) you don't want to get your hands "dirty" with the trivial decoding and/or encoding process (via java.nio.charset), and
b) the default error handling when there are unmappable/malformed bytes/chars in the input array is acceptable for your situation (though the spec of some of these methods says the "behavior is unspecified", the Sun JDK/JRE runtime always performs REPLACE in this case; the snippet below illustrates this).
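For example (a tiny sketch; the byte values are only illustrative):

byte[] ba = "ABC".getBytes("ISO-8859-1");        // encode via a charset name
String s  = new String(ba, "ISO-8859-1");        // decode via a charset name

byte[] bad = { (byte)0x41, (byte)0xff };         // 0xff is always malformed in UTF-8
System.out.println(new String(bad, "UTF-8"));    // prints "A\ufffd": the runtime REPLACEs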

These methods are convenient, especially when dealing with "small" byte[] and char[]. However, we have been hearing complaints for years (including from our EE folks) that these methods are slow, especially when a SecurityManager is installed, and that they consume much more memory than "expected".

The complaints are somewhat accurate. A simple new String(byte[] bb, String csn) takes a "long" journey down to the CharsetDecoder to do the real conversion, as listed below:


         ->at java.lang.String.<init>(String.java:523)
          ->at java.lang.String.<init>(String.java:451)
            ->at java.lang.StringCoding.decode(StringCoding.java:193)
              ->at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:160)
                ->at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:561)
                  ->at sun.nio.cs.SingleByte$Decoder.decodeLoop(SingleByte.java:106)


During the process, ByteBuffer and CharBuffer objects are created to wrap the source byte[] and the destination char[], the decoder moves the byte[] and char[] in and out of these wrappers during decoding, and a long list of sanity checks and defensive copies has to be performed... despite all the performance tuning done already, this path is truly slow and bloated, a sad reality.
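For the curious, here is a minimal sketch (not the actual StringCoding source) of roughly what that old path amounts to, including the two wrapper objects and the final trimming copy that show up in the memory profiles:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.util.Arrays;

class DecodeSketch {
    static char[] decodeViaBuffers(byte[] src, Charset cs) {
        CharsetDecoder cd = cs.newDecoder()
            .onMalformedInput(CodingErrorAction.REPLACE)
            .onUnmappableCharacter(CodingErrorAction.REPLACE);
        char[] dst = new char[(int)(src.length * cd.maxCharsPerByte())]; // worst-case size
        ByteBuffer bb = ByteBuffer.wrap(src);      // wrapper object #1
        CharBuffer cb = CharBuffer.wrap(dst);      // wrapper object #2
        cd.decode(bb, cb, true);                   // the real conversion
        cd.flush(cb);
        return Arrays.copyOf(dst, cb.position()); // one more copy, to trim
    }
}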


The good news is we finally found ourselves some time to look into the issue recently. With lots of small but effective tuning here and there, including a "fast path" ("green lane") for the "built-in" charsets (the charsets included in the JDK/JRE's charset repository), we were able to cut the "bloat" memory use during de/encoding and boost the coding speed quite a lot for "small" byte[]/char[]/String objects. So far the results look pretty good (listed below). The change has been integrated into JDK7 b53.


(1) Memory: NetBeans memory profiles of running StrCodingBenchmark against JDK7 b52 and b53 show that the "bloat" java.nio.HeapByteBuffer/HeapCharBuffer objects (36%+ of the allocated objects) are completely gone.


b52 result: [NetBeans memory profiler snapshot]

b53 result: [NetBeans memory profiler snapshot]


(2) Speed: Our micro performance benchmark StrCodingBenchmark shows a huge speed boost for the "small" byte[]/String scenarios. Below are the results of using the ISO-8859-1 charset for different de/encoding operations, with different input sizes, under different runtime configs.


[-server] server vm   [-client]  client vm   [+sm] with a SecurityManager installed
[String decode: csn] new String(byte[], "iso-8859-1")
[String decode: cs ] new String(byte[], Charset.forName("iso-8859-1"))
[String encode: csn] String.getBytes("iso-8859-1")
[String encode: cs ] String.getBytes(Charset.forName("iso-8859-1"))
*The first 4 columns are JDK7 b53, the last 4 are b52; the smaller the number, the faster.

                   ------------b53/new------------   ------------b52/old------------
                   -server -client -server+sm -client+sm   -server -client -server+sm -client+sm
[len=4]
String decode: csn 29 52 27 51 61 140 117 215
String decode: cs 38 73 93 147 90 160 139 237
String encode: csn 23 43 24 43 64 116 135 188
String encode: cs 42 77 98 134 170 341 232 418
[len=8]
String decode: csn 31 60 32 59 65 148 118 225
String decode: cs 40 81 97 155 95 170 143 247
String encode: csn 27 51 27 49 65 125 136 197
String encode: cs 46 84 101 139 174 356 238 451
[len=16]
String decode: csn 36 72 35 71 68 164 122 241
String decode: cs 45 94 101 169 108 189 147 263
String encode: csn 31 63 31 63 68 142 141 214
String encode: cs 50 96 103 152 179 379 242 453
[len=32]
String decode: csn 47 101 46 101 76 197 131 274
String decode: cs 55 119 113 194 115 228 157 303
String encode: csn 40 89 40 89 78 173 149 248
String encode: cs 58 121 112 178 194 423 258 497
[len=64]
String decode: csn 68 152 66 159 97 262 149 339
String decode: cs 74 171 133 246 139 302 179 379
String encode: csn 56 140 56 139 94 243 165 316
String encode: cs 73 171 128 227 223 521 285 582
[len=128]
String decode: csn 108 256 104 254 130 389 189 465
String decode: cs 110 272 173 345 184 449 222 522
String encode: csn 91 238 88 237 126 371 195 458
String encode: cs 106 267 162 324 283 672 342 745
[len=256]
String decode: csn 187 462 179 459 197 645 242 721
String decode: cs 183 474 249 548 277 739 305 811
String encode: csn 171 437 150 435 190 642 257 712
String encode: cs 171 462 224 519 395 1021 447 1093
[len=512]
String decode: csn 349 877 332 874 335 1156 368 1229
String decode: cs 334 879 405 961 463 1349 476 1408
String encode: csn 284 830 277 828 317 1168 379 1240
String encode: cs 298 865 350 910 636 1698 673 1769
[len=1024]
String decode: csn 782 1708 742 1701 660 2176 665 2262
String decode: cs 741 1689 834 1785 865 2507 846 2568
String encode: csn 571 1631 555 1616 577 2228 631 2299
String encode: cs 559 1635 601 1691 1121 3051 1128 3118


Some more results are at benchmark.txt.


Two facts worth mentioning:
(1) The best gains are achieved when de/encoding "small" byte[]/char[]/String objects; when de/encoding a "big" chunk of data, the overhead we are cutting off becomes marginal.
(2) Using a Charset as the parameter does not bring any performance benefit (in fact it is slower and more bloated) in most scenarios, so use it with caution (see the snippet below).
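In other words, given the numbers above, it is the charset-name overloads that hit the new fast path (str here is just an arbitrary String):

byte[] viaName    = str.getBytes("ISO-8859-1");                   // csn: fast path in b53
byte[] viaCharset = str.getBytes(Charset.forName("ISO-8859-1"));  // cs: slower in most scenarios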


For now, all the SingleByte charsets in our repository (except JIS0201) benefit from this change. I hope I can find some time to work on the MultiByte charsets later, especially UTF-8; yes, it's on my to-do list.



Tuesday Mar 10, 2009

The Overhaul of Java UTF-8 Charset

The UTF-8 charset implementation (in all JDK/JRE releases from Sun) has been updated recently to reject non-shortest-form UTF-8 byte sequences, since the old implementation might be leveraged in security attacks. Since then I have been asked many times about what this "non-shortest-form" issue is and what the possible impact might be, so here are the answers.

The first question usually goes: "what is the non-shortest-form issue?" The detailed and official answer can be found in Unicode Corrigendum #1: UTF-8 Shortest Form. Put simply, the problem is that a Unicode character could be represented in more than one way (form) in what many people think/believe is the "UTF-8 encoding". When asked what the UTF-8 encoding looks like, the easy/simple/brief explanation would be the bit pattern shown below.


#  Bits  Bit pattern
1    7   0xxxxxxx
2   11   110xxxxx 10xxxxxx
3   16   1110xxxx 10xxxxxx 10xxxxxx
4   21   11110xxx 10xxxxxx 10xxxxxx 10xxxxxx

It's close, but it's actually WRONG, based on the latest definition/spec of UTF-8. The pattern above has a loophole: you can actually have more than one form to represent a Unicode character. For example, for the ASCII characters from U+0000 to U+007F, the UTF-8 encoding form maintains transparency for all of them, so they keep their ASCII code values 0x00..0x7f (in one-byte form) in UTF-8. However, based on the above pattern, these characters could also be represented in a two-byte form as [c0, 80]..[c1, bf], the "non-shortest form". The code below shows all of these non-shortest two-byte forms for the ASCII characters, if you run it against an "OLD" version of the JDK/JRE.


byte[] bb = new byte[2];
for (int b1 = 0xc0; b1 < 0xc2; b1++) {
    for (int b2 = 0x80; b2 < 0xc0; b2++) {
        bb[0] = (byte)b1;
        bb[1] = (byte)b2;
        String cstr = new String(bb, "UTF8");
        char c = cstr.toCharArray()[0];
        System.out.printf("[%02x, %02x] -> U+%04x [%s]%n",
                          b1, b2, c & 0xffff, (c >= 0x20) ? cstr : "ctrl");
    }
}

The output would be


...
[c0, a0] -> U+0020 [ ]
[c0, a1] -> U+0021 [!]
...
[c0, b6] -> U+0036 [6]
[c0, b7] -> U+0037 [7]
[c0, b8] -> U+0038 [8]
[c0, b9] -> U+0039 [9]
...
[c1, 80] -> U+0040 [@]
[c1, 81] -> U+0041 [A]
[c1, 82] -> U+0042 [B]
[c1, 83] -> U+0043 [C]
[c1, 84] -> U+0044 [D]
...

So for a string like "ABC" you would have two forms of UTF-8 sequences:

"0x41 0x42 0x43" and "0xc1 0x81 0xc1 0x82 0xc1 0x83"

The Unicode Corrigendum #1: UTF-8 Shortest Form specifies explicitly that

"The definition of each UTF specifies the illegal code unit sequences in that UTF. For example, the definition of UTF-8 (D36) specifies that code unit sequences such as [C0, AF] are ILLEGAL."

Our old implementation accepted those non-shortest forms (while never generating them when encoding). The new UTF_8 charset now rejects non-shortest-form byte sequences for all BMP characters; only the "legal byte sequences" listed below are accepted.


/* Legal UTF-8 Byte Sequences
 *
 * #  Code Points          Bits  Bit/Byte pattern
 * 1                        7    0xxxxxxx
 *    U+0000..U+007F             00..7F
 * 2                       11    110xxxxx 10xxxxxx
 *    U+0080..U+07FF             C2..DF   80..BF
 * 3                       16    1110xxxx 10xxxxxx 10xxxxxx
 *    U+0800..U+0FFF             E0       A0..BF   80..BF
 *    U+1000..U+FFFF             E1..EF   80..BF   80..BF
 * 4                       21    11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
 *    U+10000..U+3FFFF           F0       90..BF   80..BF   80..BF
 *    U+40000..U+FFFFF           F1..F3   80..BF   80..BF   80..BF
 *    U+100000..U+10FFFF         F4       80..8F   80..BF   80..BF
 */
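You can verify the new behavior yourself with the public CharsetDecoder API; here is a small sketch (using REPORT so that rejection surfaces as an exception instead of a silent REPLACE):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

byte[] nonShortest = { (byte)0xc1, (byte)0x81 };  // 'A' in the illegal two-byte form
try {
    Charset.forName("UTF-8").newDecoder()
           .onMalformedInput(CodingErrorAction.REPORT)
           .onUnmappableCharacter(CodingErrorAction.REPORT)
           .decode(ByteBuffer.wrap(nonShortest));
    System.out.println("accepted ('A'): old behavior");
} catch (CharacterCodingException x) {
    System.out.println("rejected: new behavior: " + x);  // MalformedInputException
}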

The next question would be "What would be the issue/problem if we keep using the old version of JDK/JRE?".

First, I'm not a lawyer... oops, I meant I'm not a security expert :-) so my word does not count; we consulted our security experts instead. Their conclusion is that "it is NOT a security vulnerability in Java SE per se, but it may be leveraged to attack systems running software that relies on the UTF-8 charset to reject these non-shortest forms of UTF-8 sequences".

A simple scenario that might give you an idea of what "may be leveraged to attack..." really means:

(1) A Java application wants to filter an incoming UTF-8 input stream to reject certain keywords, for example "ABC".

(2) Instead of decoding the input UTF-8 byte sequences into Java chars and then filtering out the keyword string "ABC" at the Java "char" level, for example,


String utfStr = new String(bytes, "UTF-8");
if ("ABC".equals(utfStr)) { ... }

the application might choose to filter only the raw UTF-8 byte sequence "0x41 0x42 0x43" directly against the byte input stream, relying on (assuming) the Java UTF-8 charset to reject any other non-shortest form of the target keyword, if there is any.

(3) The consequence: the non-shortest-form input "0xc1 0x81 0xc1 0x82 0xc1 0x83" penetrates the filter and triggers a possible security vulnerability, if the underlying JDK/JRE runtime is an OLD version.
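To make step (3) concrete, here is a hypothetical byte-level filter of the kind described in step (2) (containsABC is an invented helper name, and the last line assumes an OLD, pre-fix runtime):

static boolean containsABC(byte[] in) {            // scans for the shortest form only
    for (int i = 0; i + 2 < in.length; i++) {
        if (in[i] == 0x41 && in[i + 1] == 0x42 && in[i + 2] == 0x43)
            return true;
    }
    return false;
}

byte[] evil = { (byte)0xc1, (byte)0x81, (byte)0xc1, (byte)0x82,
                (byte)0xc1, (byte)0x83 };
System.out.println(containsABC(evil));             // false: the filter is bypassed
System.out.println(new String(evil, "UTF-8"));     // "ABC" on an OLD JDK/JRE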


So the recommendation is to update to the latest JDK/JRE releases to avoid the potential risk.

Wait, there is also a big bonus for updating. The UTF-8 charset implementation had not been updated/touched for years; given that UTF-8 is so widely used (it is the default encoding for XML, and more and more websites use UTF-8 as their page encoding), we had been taking the "defensive" position of "don't change it if it works". So Martin and I decided to take this as an opportunity to give it a speed boost as well. The data below is taken from one of my benchmark runs (this is NOT an official benchmark; it is provided only to give a rough idea of the performance boost) comparing the decoding/encoding operations of the new and old implementations under the -server vm. The new implementation is much faster, especially when de/encoding single bytes (the ASCIIs). The new decoding and encoding are faster under the -client vm as well, but the gap is not as big as with -server; I wanted to show you the best :-)


Method                        Millis(new)  Millis(old)
Decoding 1b UTF-8         :        1786        12689
Decoding 2b UTF-8         :       21061        30769
Decoding 3b UTF-8         :       23412        44256
Decoding 4b UTF-8         :       30732        35909
Decoding 1b (direct)UTF-8 :       16015        22352
Decoding 2b (direct)UTF-8 :       63813        82686
Decoding 3b (direct)UTF-8 :       89999       111579
Decoding 4b (direct)UTF-8 :       73126        60366
Encoding 1b UTF-8         :        2528        12713
Encoding 2b UTF-8         :       14372        33246
Encoding 3b UTF-8         :       25734        26000
Encoding 4b UTF-8         :       23293        31629
Encoding 1b (direct)UTF-8 :       18776        19883
Encoding 2b (direct)UTF-8 :       50309        59327
Encoding 3b (direct)UTF-8 :       77006        74286
Encoding 4b (direct)UTF-8 :       61626        66517

The new UTF-8 charset implementation has been integrated in
JDK7, JDK6-open, JDK6-u11 and later, JDK5.0u17 and 1.4.2_19.

And if you are interested in what the change looks like, you can take a peek at the webrev of the new UTF_8.java for OpenJDK7.
