Friday May 11, 2007

JavaOne ECC BOF - Thanks for the interest!

Thanks to everyone who took time out of the JavaOne After Dark party to stop by my talk on ECC and TLS last night. It was great to see all of you there and to see so much interest in ECC.

Here is a link to the BOF-6958 presentation.


Thursday May 03, 2007

Elliptic Curve Cryptography - The Future of SSL

"Elliptic Curve Cryptography - The Future of SSL", that's the title of my BOF session at JavaOne 2007 exactly a week from today (look up BOF-6958 for location & time).

I encourage you to attend if:

  • You are using ECC in production today - tell us about your experiences
  • You are planning on deploying ECC in your web servers - anything we can do to help?
  • You use SSL/TLS but don't know about ECC - come and hear about it!

See you there...


Tuesday May 01, 2007

Server Watch on Sun Java Web Server 7.0

I already quietly linked to this article about our Web Server 7.0 on http://www.serverwatch.com/ in my previous blog entry but I didn't really say anything about it. By now most of you have probably seen it but if you haven't, check it out, particularly if you believe that performance matters.


Tuesday Apr 17, 2007

PKCS#11 and SSL Performance

Unless you have written consumers or providers of crypto functionality you probably don't have much interaction with PKCS#11. What is it? Quoting from RSA Labs, "this standard specifies an API, called Cryptoki, to devices which hold cryptographic information and perform cryptographic functions".

As a web server administrator you generally don't have to deal with it either, even though you probably know that the JES Web Server supports PKCS#11 compatible devices for the crypto functionality used by SSL/TLS.

Recently I added a feature, which we plan on including in an upcoming update of JES Web Server 7.0, that allows bypassing the PKCS#11 layer for SSL operations. What does that mean? Well, basically it means your server will be able to handle SSL requests faster ;-)

Here is a coarse high level diagram of the relationship between the various building blocks which comprise the SSL support in the web server:

As you can see, NSS always interacts with the crypto device via the PKCS#11 API, whether you're using a crypto accelerator, the Solaris 10 crypto framework, or even the built-in software token (softoken).
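
If you're curious which PKCS#11 modules and tokens your server instance can see, NSS's modutil tool will list them. A quick sketch - the directory below is a placeholder for your instance's config directory, where the NSS certificate/key databases live:

modutil -dbdir /path/to/https-yourserver/config -list

The built-in softoken always appears in that list; external devices such as the Solaris crypto framework or a hardware accelerator show up only after their PKCS#11 module has been registered with modutil -add.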

Of course, this adds some overhead. NSS must construct PKCS#11 structures and call the right APIs, and the softoken must decode these structures and pass the data on to the underlying crypto primitives. Since the softoken is part of NSS, one might argue that NSS could simply call the crypto functions directly. In a nutshell, that is what the PKCS#11 bypass does. This is conceptually represented in the diagram above by the dotted red arrow.

For most use cases this is a no-brainer - once this feature is released SSL will perform better. The bypass will be on by default so you don't even need to do anything other than upgrade your server. It'll just be faster (I should say even faster!).

There are some details to be aware of, however.

First, NSS's interaction with the crypto device is "sticky". If the server key (associated with the certificate given in the server-cert-nickname element of server.xml) lives in Token-A, operations related to SSL sessions initiated with this key will also be performed by Token-A. Note that this applies to all subsequent operations (as long as Token-A is capable of performing them), not only the handshake operations which directly use the private key material. (Those private key operations are always performed by the token in which the private key is stored, regardless of bypass!)
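
For reference, this is roughly what the relevant piece of server.xml looks like - an illustrative fragment only, with other listener settings omitted and the names ("http-listener-1", "Server-Cert") being just examples; a key kept in an external token would normally use a token-prefixed nickname such as "mytoken:Server-Cert":

<http-listener>
  <name>http-listener-1</name>
  <ssl>
    <enabled>true</enabled>
    <server-cert-nickname>Server-Cert</server-cert-nickname>
  </ssl>
</http-listener>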

Bypass removes this "stickiness". When bypass is active, NSS will use its softoken directly for all subsequent operations, bypassing not just the PKCS#11 overhead but also the external device. In order for this to occur, some of the session-related key material must be extracted from the device and made available to the softoken. It is possible some devices might not support this operation, in which case bypass cannot occur. The good news is that the server will test for this condition - if the device cannot support this requirement, bypass is automatically disabled.

What if the external crypto device is faster than the softoken? Realistically, this is not very likely. It would have to be so much faster that it could overcome not only the overhead of the PKCS#11 calls but also the overhead of transporting the data to and from the device. But as with all performance-related tunables, you should test both settings in an environment which matches your production setup and loads.

The final detail is related to certification compliance rather than actual functionality. NSS's FIPS-140 testing has been done using the PKCS#11 layer, which means the certification covers that configuration. Using the bypass results in a different code path. Even though ultimately it is the same crypto implementation being used, technically it is not a validated setup. So, if your environment requires running in precisely the FIPS-140 certified configuration, you should not use the bypass.


Monday Mar 12, 2007

ECC at JavaOne 2007

Previously I've written up a few entries on ECC (Elliptic Curve Cryptography) particularly from the performance angle.

If you find this topic interesting, please join me at JavaOne 2007 for my talk on ECC, "Elliptic Curve Cryptography - The Future of SSL".


Friday Mar 02, 2007

Using Software Interfaces

In the past I briefly touched on the topic of public interfaces and interface stability.

One of the unfortunate side effects of the power of web search engines (e.g. Google) today is that nobody looks for product details in the product documentation anymore. After all, it is so much easier to google for the relevant info. I do it all the time; it works great.

So what is unfortunate about it?

Most Sun products go to great lengths to document their public interfaces and to make sure they don't break over time - that's why most Sun products have such an excellent track record of backward compatibility. But when I say "document their public interfaces" I mean, of course, document them in the official product documentation which can be found at http://docs.sun.com.

When you find details of interface behavior elsewhere - not on docs.sun.com - then you need to check the corresponding manuals on docs.sun.com to validate the info. If it's not there or there is any difference, then you cannot depend on that behavior!

Sadly, I see this too often. We'll get a customer query on something that "broke". When we get the details, it turns out they are doing something that never should have worked, and when we check the documentation there is nothing there that suggests it should work. When we ask how they ended up with that usage, the answer is "We googled for it and found some website by some guy who tried it and it seemed to work".

It's a terrible position to be in because we never want to tell a customer they have to undo their work. Sometimes we can accommodate the unexpected usage. But other times, there really is no other answer than "It can't work, you'll have to fix your code".

So my message on this Friday evening is... "some website by some guy who tried it" is not official Sun documentation. You really must go to http://docs.sun.com for that!

(I won't categorically state that depending on private interfaces is always wrong. It can be useful at times! As long as everyone involved understands that this is the case and understands that the behavior may change at any time for any reason (or even no reason at all) and is happy to accept the inherent instability of doing so... then, maybe it can be ok, in limited, non-production, cases.)

(P.S. Yes, I realize I just wrote 9 paragraphs on what is usually expressed by a single four-letter word ("RTFM!")... but I think it is important enough to expand on the topic a bit ;-)


Tuesday Feb 27, 2007

More on Request Limiting

Continuing with my coverage of check-request-limits (see request limiting and concurrency limiting), today I'll give a few examples using the <If> tags introduced in Web Server 7.0.

As I showed in the previous articles, using server variables with the monitor parameter can be quite flexible. However, there are times when you may want to apply check-request-limits in ways which cannot be expressed using monitor parameter values. Check out the <If> expressions also introduced in Web Server 7.0.

Let's look at an example. Say you want to apply limits only to request paths which end in *.jsp. You could write:

<If $path = "*.jsp">
  PathCheck fn="check-request-limits" max-rps="10"
</If>

Simple enough! There are some pitfalls to watch out for, though. Take a look at this example:

<If $path = "*.pl">
  PathCheck fn="check-request-limits" max-rps="100"
</If>
<If $path = "*.jsp">
  PathCheck fn="check-request-limits" max-rps="10"
</If>

At first glance one might think this limits all "*.pl" paths to 100rps and all "*.jsp" paths to 10rps. Not so! Recall that request counts and averages are tracked separately for each value of the monitor parameter (thus, for example, when monitor="$ip" counts are kept separately for every client IP). In the above two invocations of check-request-limits there are no monitor parameters. So where are the counts kept? When the monitor parameter is not given, counts are kept in a default unnamed slot.

By now you can probably see the issue... the counts for the two calls above are kept in the same counter. So, whenever a "*.pl" request is processed, the counter increases. But this same counter is the one used for "*.jsp" requests! If the server is processing an average of 20rps worth of "*.pl" requests, no request for "*.jsp" will ever be serviced... Not quite what I wanted!

Fortunately, this is easy to correct by simply tracking each type of request separately, using the monitor parameter:

<If $path = "*.pl">
  PathCheck fn="check-request-limits" max-rps="100" monitor="pl"
</If>
<If $path = "*.jsp">
  PathCheck fn="check-request-limits" max-rps="10" monitor="jsp"
</If>


Friday Feb 23, 2007

Throw More Boxes At It?

As commodity hardware became cheaper, a common approach to scaling up has been to just throw more boxes at it. Individual efficiency (hardware or software) and reliability don't really matter too much, and it is not worth the time to optimize the setup when it is cheaper to order a few hundred more machines whenever a bit more capacity is needed. Repeat as needed.

More recently this trend has started to hit some limits. The individual boxes are cheap but acquiring thousands is still an expense. More importantly, the cost of electricity to run them all and to run the A/C to cool the buildings is rapidly escalating. The real estate footprint costs are also a concern.

None of this is news if you follow the trade press even casually, of course. Articles about the concern over electrical and space costs have been all over for the last few years.

We've seen the hardware adapt, with CPUs which deliver more throughput per watt. The most exciting story on this front is certainly the Sun Niagara server line. If you haven't already, go try and buy one and see how good it can be. Practices are also changing, as seen from the renewed interest in combining a number of under-utilized servers into one hardware box (see virtualization).

Fewer boxes, less electricity, less A/C, less space, fewer purchase orders, less administration. What's not to like?

Curiously, there doesn't (yet??) seem to be quite the same buzz on the software side on this issue. It's cool to reduce machine (and watt) counts by upgrading to efficient hardware like the T2000 but how about even further reducing the boxes needed by upgrading to more scalable software?

I was wondering about this while reading the SAMP announcement. Why pay for higher Watts/HTTP-request running Apache if you could run JES Web Server 7.0 instead (unlike the T2000 try & buy, the web server is try and ... deploy, being free for production as well)?

How do you feel about it? Is software efficiency & scalability going to become increasingly important as the electrical costs of running massive server farms continues to increase?

Thursday Feb 22, 2007

Web Server 7.0 Concurrency Limiting

Last week I talked about the new request limiting feature in Web Server 7.0. I'll expand on this topic today by covering the concurrency limiting feature of check-request-limits.

Once again let's start with the simplest possible usage:

PathCheck fn="check-request-limits" max-connections="2"

The max-connections option tracks instantaneous connections, as opposed to the averages over time that max-rps tracks. The above tells check-request-limits that I want to allow at most 2 requests to be processed at any one time. If two requests are being serviced and a third one arrives, the third one will be rejected with the desired error code (as before, the default is 503).

As before, this minimal example isn't really useful. For one thing, I probably never want to limit the entire server to just two simultaneous requests. As with max-rps, the use cases become more interesting when the monitor parameter is introduced:

PathCheck fn="check-request-limits" max-connections="2" monitor="$ip"

Now I'm limiting the server to only process two simultaneous requests from any one client (by IP) but there is no limit to the number of distinct clients which can be serviced at the same time (well, there are other limits which apply such as server capacity and worker threads ;-)

I won't repeat the various examples I gave last time, but suffice it to say that all of them can be used with max-connections just as well as with max-rps. You can use any server variable, or combinations of multiple variables, to establish the domain over which the limits apply.
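
For example, combining variables works here just as it did with max-rps; this sketch limits each client to two concurrent requests per URI:

PathCheck fn="check-request-limits" max-connections="2" monitor="$ip:$uri"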

Note that if a request exceeds the limit it is rejected right away. If you wanted to limit the number of active requests being processed but allow additional ones to queue up, you can do that by setting the maximum worker threads instead.
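
As a sketch, that would be a server.xml change along these lines (assuming the thread-pool element with max-threads and queue-size settings; the values shown are purely illustrative):

<thread-pool>
  <max-threads>128</max-threads>
  <queue-size>4096</queue-size>
</thread-pool>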

Let me know of any interesting and useful use cases for check-request-limits that you come up with... I'll add some more examples and notes about it in a future entry. Have fun with it!


Wednesday Feb 14, 2007

Web Server 7.0 Request Limiting

One of the cool new features in Web Server 7.0 is the check-request-limits SAF. In a nutshell, it can be used to selectively track request rates and refuse requests if limits are exceeded. While its primary purpose is to help protect against denial of service attacks consisting of high request rates, it is also useful in other scenarios where limiting request counts is of interest.

Here's the simplest possible invocation:

PathCheck fn="check-request-limits" max-rps="10"

This instructs the SAF to track the average requests per second (rps) and refuse requests if the average exceeds 10 rps. The average rate is recomputed every 30 seconds (the default interval) based on the number of requests received in those past 30 seconds. If the average rps exceeds 10, all subsequent requests are rejected with HTTP response code 503 (Service Unavailable), the default rejection response. Requests will continue to be rejected until the average rps falls below the threshold of 10. Since the average is only recomputed once per interval, this means it'll be at least one interval before normal service resumes. Naturally, these defaults can all be changed. The following invocation is equivalent to the prior one but shows the default values explicitly:

PathCheck fn="check-request-limits" max-rps="10" interval="30"
                                    continue="threshhold" error="503"

The other possibility for the continue option is "silence". If set, the incoming request count must fall to zero (for an entire interval) before normal service resumes. You can use this if you want to force the offending requests to truly "go away" before allowing any more to be serviced.
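
For example:

PathCheck fn="check-request-limits" max-rps="10" continue="silence"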

Now, in most cases one would not use a line such as the above in a real server, because it tracks all requests globally (for that web server process) and that is overly heavy-handed unless you really want to limit the entire server to such a low average request rate. Perhaps if it is a home server, but not in most cases.

You may have read about the server variables and <If> tag also introduced in Web Server 7.0. Let's use some of those capabilities to make the request limiting more interesting:

PathCheck fn="check-request-limits" max-rps="10" monitor="$ip"

The "monitor" parameter is optional but in nearly every case you will want to give it a value. It instructs check-request-limits to track request statistics using separate counters for each monitored value. In the example above, separate stats will be kept for every client IP ($ip expands to the client's IP) making a request to the server. That's more like it! Now, any client which exceeds my set limit (10rps) will be refused service but all other clients continue to experience normal operation.

You can use any of the supported server variables as the value of "monitor", of course. Another interesting one might be $uri:

PathCheck fn="check-request-limits" max-rps="10" monitor="$uri"

Here, instead of setting limits for each client, we set the limit for each URI on the server. Perhaps some areas of your server are harder hit and you wish to limit use of those while allowing normal servicing of other areas? The above directive will accomplish that.

In fact you can combine variables as well. This is also legal:

PathCheck fn="check-request-limits" max-rps="10" monitor="$ip:$uri"

Here the SAF will limit only specific clients which request the same URI(s) too frequently (over 10rps, that is) but all other URIs for those clients and all other clients continue to be serviced normally. Cool!

This functionality is fairly flexible so experiment with it for a bit. While these examples will get you started, I'll describe a few other scenarios later on.

Here are links to relevant parts of the documentation:


Wednesday Feb 07, 2007

My Filesystem Is Broken?

In my previous humorous take on password encryption I inserted a few phrases meant to highlight some of the issues. Last week I commented on one of them, this time I'll take a look at another.

I mentioned that "An attacker who manages to crack or bypass the file protections will be able to obtain the cleartext password, which is a problem". True enough, as far as the statement goes.

Imagine for a moment that this attacker has indeed managed to bypass the operating system's file system permissions, so he can read the supposedly protected file and extract the password. The requested solution is to avoid storing the password (or, somehow, store it encrypted) so that when the attacker breaks the file permissions he will not get the [cleartext] password. Attack foiled!

Or was it?

If faced with this situation, what would the attacker do next?

We know the web server still ultimately needs the passwords, so we know the data will exist within the process at some point even if it is not on disk (or is on disk only in encrypted form). Since the attacker bypassed the file system protections, how about modifying libns-httpd40.so (the core implementation of the server) to insert malicious code which emails him the password once the server process obtains it? That's just one example; there would be nearly endless injection points where the attacker could insert similar trickery if we assert that the file system permissions have been bypassed.

Fortunately the file system permissions aren't quite so weak. While it is true that an attacker who breaks them could indeed then read passwords out of a file, it is also true that if the attacker has this power then the entire system is compromised beyond help.


Monday Feb 05, 2007

Sun, ECC and Web Server 7.0

There was a press release about Sun's ECC support earlier today.

And as I mentioned earlier, Web Server 7.0 (containing the ECC support) shipped two weeks ago, so our ECC support is ready for production use!

 

Wednesday Jan 31, 2007

Tea and No Tea

If you had the opportunity to play that classic game you probably eventually succeeded in having tea and no tea at the same time. Of course, those events took place in a universe which had the benefit of the Infinite Improbability Drive.

Some time ago I wrote a humorous take on encrypting passwords. Hopefully it was clear that I was poking fun at a few nonsensical implementations. However, every so often I get requests to implement something along the lines of what I described in that article.

There are two types of passwords handled by the web server. First, there are passwords which (one way or another) will be sent by a client to the server, and the server needs to (by various mechanisms) validate whether they match the stored password. Second, there are passwords which the web server process will itself need to know during its lifetime, because it will interact with some other entity using a protocol which requires the web server to possess the password in the clear.

The first kind is used, for example, when authenticating web clients using HTTP Basic or Digest auth or Servlet FORM authentication. The server doesn't need to know the actual password of that user. It only needs to know a one-way hash of the password in a suitable form. The suitable form can vary depending on the protocol but we can ignore the details for the moment - the high level point being that the server only needs a hash of the password, therefore it doesn't need to store the clear text password nor does it need to be able to compute the clear text password (say, by decrypting) at any time.

The second kind is different. Above I defined these to be those which the server will need to know in the clear at some point. So the question is raised, how should these be stored? Storing one-way hashes of the password is clearly out, since the one-way-ness handily breaks the stated requirement of recovering the original password.

We can encrypt these passwords with a suitably strong reversible algorithm! In the previous article I wrote "encryption is really hard to break, so that will certainly improve the security even further". I chose that wording to highlight the common misconception that encryption is a magic bullet that makes problems go away.

Unfortunately we also need to handle the key management issues if encryption is introduced. It is certainly possible to encrypt those passwords with a strong cipher such as AES and have the web server store only the encrypted data on disk. But we have already asserted that the web server process needs to obtain the clear text of those passwords at some point during the life of the process. How will the process do that? As long as it has the encryption key it can decrypt the data to obtain the passwords.

So where is that key coming from?

There are two possibilities:

  1. The key is not stored anywhere on disk; a human must enter it into the console when the server process requests it.
  2. The key is kept on disk in a form which allows the server process to obtain it programmatically, without human interaction.

The first choice has some real benefits. The key is never stored anywhere and the passwords are only stored on disk securely encrypted. Of course, there is a major drawback to this option - the server cannot start without the help of the human who needs to enter the key. The scenarios where this is practical are limited.

The second choice is the one I covered in the previous article.


Monday Jan 22, 2007

Sun Java System Web Server 7.0

It's Here!

The official release of Sun Java System Web Server 7.0 is now available.

If you've been following my blog you probably remember that functionally complete preview releases have been available for download since last summer, so most of you already have experimented with 7.0. Over the last several months in this blog I have been covering several of the new security features in this release, so check out my older blog entries if you are new to 7.0. With the official release now available, you can start deploying Web Server 7.0 in production. I know many of you have been eagerly awaiting this... Enjoy!

If you wish to discuss any features or have any questions, please head over to the web server forum.

Friday Jan 12, 2007

More On Web Server ECC Performance

Last summer I talked a bit about ECC performance in Web Server 7.0 while comparing different ECC and RSA key sizes.

In the previous article I had a table which showed the approximate equivalency in strength of RSA vs. ECC key sizes. This time, I'll pick one row from that table and compare the performance of several cipher suites at those key sizes. I decided to use 3072 bit RSA keys - roughly equivalent to 256 bit ECC keys. For my JES Web Server 7.0 instance, I generated a 3072 bit RSA keypair and an ECC keypair on the NIST P-256 curve (a 256 bit key).
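
For reference, one way to generate such keypairs is with NSS's certutil, run against the instance's certificate database - the directory, subject and output file names below are placeholders:

certutil -R -d /path/to/https-yourserver/config -k ec -q nistp256 -s "CN=www.example.com" -a -o ecc-req.pem
certutil -R -d /path/to/https-yourserver/config -k rsa -g 3072 -s "CN=www.example.com" -a -o rsa-req.pem

The -R option produces a certificate request for the newly generated keypair, which you then have signed by your CA.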

Using the various cipher suites shown below, I ran a fixed number of requests against the web server; the Y axis shows the time taken to complete them. As in the previous article, I ran each scenario at various percentages of SSL session reuse; these are shown on the X axis.

As before, when the SSL session is always reused (far left of the X axis on the graph) the cipher suite and server key size hardly matter, since there is only 1 full handshake in the entire run and its cost is lost in the noise. As the percentage of new handshakes increases, the computational load on the server increases and the differences between key sizes and cipher suites become increasingly visible.

Now, remember, in this graph every line is using equivalent server key sizes (3072 bit for RSA, 256 bit for ECC), so we're focusing on the differences between the cipher suites themselves.

The red line shows the traditional RSA server keypair (3072 bit).

The black line is interesting as it is a good bit slower than all the others. The TLS_ECDHE_RSA_* cipher suites deserve some comment. When using these suites, the web server actually has an RSA keypair. These suites can be used by your existing web server without generating any new keys or having to obtain new server certificates from your CA. Instant ECC adoption! The tradeoff, however, can be seen from the graph... performance is not the best.

The TLS_ECDHE_ECDSA_* (blue line) came out the same (within experimental error, I measured these numbers on my desktop and not a dedicated server) as the RSA cipher suite (at this particular key size - at larger key sizes ECC would have an advantage since the cost of RSA computation increases faster as key sizes grow).

Finally, the green line shows the TLS_ECDH_ECDSA_* runs, which were significantly faster than RSA.
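
To make the suite families concrete, representative members (as registered for TLS - not necessarily the exact suites used in this run) are:

  • TLS_RSA_WITH_AES_128_CBC_SHA (traditional RSA)
  • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ephemeral ECDH key exchange, RSA certificate)
  • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (ephemeral ECDH key exchange, ECC certificate)
  • TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA (static ECDH key exchange, ECC certificate)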

Hopefully this small experiment sheds some light on selecting appropriate cipher suites for your server installation, as there are a number of tradeoffs in performance, convenience and flexibility. ECC offers significant performance advantages, but as with any technology it is important to understand the details. For example, if performance is very important for your server, you should look into generating an ECC keypair instead of attempting to use TLS_ECDHE_RSA_* suites with your existing RSA-based server keypair/certificate.

I should probably also point out that here I looked only at the server side of the performance coin. But if you have small devices (mobile phones, etc) using ECC, the benefit to those clients from ECC over RSA can be substantial.

