Taking the pain out of PKI, a modern approach to encryption
By Simon Thorpe on Oct 05, 2009
I was recently approached with a problem where an organization wanted to tie encryption directly to a smart card that the user carries with them. The requirement was that the device store the cryptographic keys used to decrypt any information that the owner of the device is authorized to access. This led to a set of discussions which I've edited into the following article. Note that a basic understanding of cryptography is helpful before reading further, as some of the concepts are quite technical.
The idea

The requirement asks for all sensitive data to be encrypted so that only authorized users have access to the keys that decrypt the information. Because the information may come from numerous sources, such as different database systems, applications, documents, and emails, there is a desire to centralize the management and application of the cryptography.
In this particular case the idea was to use random symmetric keys (session keys) to encrypt the data. Each session key is then encrypted to the public key of every smart card to which the data is being sent, so that it can only be recovered with a smart card's private key.
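The flow above can be sketched as a toy in Python. XOR with a random pad stands in for both the symmetric cipher and the public-key wrapping, so this is purely an illustration of the key flow, not secure cryptography:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for both encryption and key wrapping; NOT secure.
    return bytes(a ^ b for a, b in zip(data, key))

# 1. A random symmetric session key encrypts the data.
session_key = os.urandom(32)                  # 256-bit session key
plaintext = b"sensitive record".ljust(32)     # pad to key length for the toy
ciphertext = xor(plaintext, session_key)

# 2. The session key is wrapped once per recipient smart card.
#    (Each card's key pair is reduced here to a single random pad.)
card_keys = {name: os.urandom(32) for name in ("alice", "bob")}
wrapped = {name: xor(session_key, k) for name, k in card_keys.items()}

# 3. A recipient's card unwraps the session key, which decrypts the data.
recovered_key = xor(wrapped["alice"], card_keys["alice"])
assert xor(ciphertext, recovered_key) == plaintext
```

Note that step 2 is where the trouble described below begins: the wrapping must be repeated for every recipient of every data item.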
Database encryption

One of the first problems with this approach is that it cannot really be applied to database encryption, where all the encryption/decryption is done on the server side by the database server (or server plugins or network interceptors). With the Oracle database, only valid clients of the database can decrypt the content, and those clients have no access to the keys on the smart card.
The problem

The idea described above has long been used within first-generation PKI products such as PGP. While cryptographically very sound, it has serious usability flaws. The most important flaw is that if a thousand end users are to access the same piece of data, then the session key for that data must be re-encrypted a thousand different times (once to the public key of each user) and sent to each of the thousand users. If there are a thousand data items, then a million encrypted session keys must be distributed among the thousand end users. I'm sure you can see where this is going.
Before server-based second-generation PKI solutions, products would simply bundle the encrypted session keys in with the encrypted data. The encrypted data swelled in size (keys are not small), and different versions of the same data proliferated as per-user encrypted session keys were added and removed. Well before the user population reached a few hundred, the system became unmanageable and required significant infrastructure. Returning to the thousand-user, thousand-item example: assuming 256-bit session keys, the best possible overhead per data item is 32KB (in practice a lot more), but the killer problem is the proliferation of new versions of the same data with different key sets as recipients are added and removed. The result is people having two copies of the same document on their computers and being able to open one but not the other, because different keys encrypt each copy even though the content is identical. This quickly becomes very confusing for the end user.
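The arithmetic behind these numbers is simple enough to spell out, using the figures from the example above:

```python
users = 1_000
items = 1_000
key_bytes = 32                       # a 256-bit session key is 32 bytes

wrapped_keys = users * items         # one wrapped session key per user per item
per_item_overhead = users * key_bytes

assert wrapped_keys == 1_000_000     # a million encrypted session keys
assert per_item_overhead == 32_000   # 32KB of key material added to every item
```

And this is before counting the rights metadata and format overhead that real products bundle alongside each wrapped key.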
To address these shortcomings, second-generation encryption solutions moved the key management to servers (instead of including the encrypted session keys in the encrypted data), but all this has done is move the combinatorial complexity to the server.
One example of this move is the implementation in Microsoft RMS. Every document is encrypted using its own random symmetric session key. This key is then encrypted to the public key of the RMS server, and is only re-encrypted to a Windows-generated public key of the end user when the user obtains a first-use license. The per-file, per-user session key must then be stored on the desktop if offline access is desired. There are several usability problems with this. First, the per-file, per-user session key can only be recovered while online, so all first uses of encrypted documents must happen online (not good when an important business executive gets on a plane, loads a DVD and cannot access its encrypted content). Then, if a user has potential access to many thousands of documents (generally the case in large organizations), the volume of the per-file, per-user licenses (which include not only keys but rights) precludes the repeated synchronization of those licenses from the server to the end user's desktop. For Microsoft RMS this means administrators are forced to choose between offline use (cache use licenses on the desktop in perpetuity) and possible future centralized revocation; they cannot have both. Yet offline use and revocation are both critical capabilities of RMS.
The solution

The problem above shows that tightly tying the cryptography used to protect the information to the user results in an unmanageable, unusable system where keys are tightly coupled with each copy of the data they have encrypted. The solution is to separate the keys from both the content and the user, and to provide a logical model which applies them at the right time with an intelligent offline caching system.
Oracle IRM is an example of such a solution and offers a third generation of PKI. It generates a symmetric key for each classification of information. These keys are then securely shared with the users authorized for those classifications and stored in encrypted offline caches tied to their Windows login. Documents are secured against these classifications and encrypted with these keys.
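A minimal sketch of this model shows why it scales: keys hang off classifications, and users are simply granted or denied classifications on the server. The names here are illustrative, not Oracle IRM's actual API:

```python
import os

# One symmetric key per classification, held by the server.
class_keys = {"confidential": os.urandom(32), "restricted": os.urandom(32)}

# Server-side policy: which users are authorized for which classifications.
access = {
    "alice": {"confidential", "restricted"},
    "bob": {"confidential"},
}

def keys_for(user: str) -> dict:
    """The keys synced to a user's encrypted offline cache."""
    return {c: class_keys[c] for c in access.get(user, set())}

# A document carries a classification label, not per-user key material.
document = {"label": "restricted", "ciphertext": b"..."}

assert document["label"] in keys_for("alice")    # alice can open it
assert document["label"] not in keys_for("bob")  # bob cannot

# Revocation is one server-side change; no document copy is touched.
access["alice"].discard("restricted")
assert document["label"] not in keys_for("alice")
```

The key observation is that the volume of key material scales with the number of classifications, not with the number of documents or users.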
Cryptographically this may at first glance appear less secure than per-file keys, but consider that all RMS files are encrypted using random symmetric session keys that are all protected by a single RMS server key pair which can never be rotated. Oracle IRM files are encrypted with one of a set of per-classification keys that are cryptographically completely separate from each other and from any other IRM server keys. These keys can be rotated and even destroyed without decommissioning the entire IRM server (and all the content managed by it).
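Rotation in such a model can be sketched as keeping a version list per classification, so old content stays decryptable while new content is encrypted under a fresh key. Again, this is purely illustrative, not Oracle IRM's real key store:

```python
import os

# Per-classification key versions; the last entry encrypts new content.
key_versions = {"confidential": [os.urandom(32)]}

def rotate(classification: str) -> None:
    """Add a fresh key; older versions remain available for decryption."""
    key_versions[classification].append(os.urandom(32))

def current_key(classification: str) -> bytes:
    return key_versions[classification][-1]

old = current_key("confidential")
rotate("confidential")
assert current_key("confidential") != old     # new content uses the fresh key
assert old in key_versions["confidential"]    # old content still decryptable
```

Destroying a classification outright would simply mean deleting its entry, which renders that classification's content unreadable without touching any other classification or the server itself.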
While you can argue about the cryptography, the usability benefits are profound. A typical end user may only need to synchronize a handful of classifications and their per-classification keys, which is how Oracle IRM can provide hands-free offline working and timely revocation (because repeated synchronization is so cheap). Combined with per-classification role-based access control (as opposed to Microsoft's 'ad hoc' per-file, per-user rights), this makes Oracle IRM usable at volume, in both users and files. Security without usability is no security at all.
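The difference in sync volume is easy to quantify with back-of-envelope numbers (the figures below are illustrative, not taken from either product):

```python
documents = 10_000        # documents a user can potentially access offline
classifications = 5       # classifications those documents fall under

per_file_licenses = documents      # per-file, per-user model: one license per file
per_class_keys = classifications   # per-classification model: one key per label

# Two thousand times less material to sync, independent of document count.
assert per_file_licenses // per_class_keys == 2_000
```

Because the per-classification sync payload is tiny and does not grow with the document count, it can be refreshed on every connection, which is exactly what makes timely revocation compatible with offline use.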
Getting back to the smart card issue: Oracle IRM is about centralized key management, so the smart card private key is unlikely to be directly involved in the data encryption/decryption. The smart card is best placed to provide strong authentication to the end user's IRM desktop agent, which then requests access to that end user's set of per-classification keys. With such a solution you could protect millions of documents and share them with a million people, and with one simple change on the server revoke access to every single copy of that information ever made, for all million users.
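That final flow can also be sketched: the card only proves who the user is, and the server then releases that user's per-classification keys to the desktop agent's cache. Function names and the PIN check are hypothetical stand-ins for a real smart-card challenge/response:

```python
import os

class_keys = {"confidential": os.urandom(32)}
authorized = {"card-123": {"confidential"}}   # card id -> classifications

def authenticate(card_id: str, pin: str) -> bool:
    # Stand-in for a real smart-card challenge/response protocol.
    return card_id in authorized and pin == "0000"

def sync_keys(card_id: str, pin: str) -> dict:
    """Keys released to the desktop agent's encrypted offline cache."""
    if not authenticate(card_id, pin):
        return {}
    return {c: class_keys[c] for c in authorized[card_id]}

cache = sync_keys("card-123", "0000")
assert "confidential" in cache

# One server-side change revokes every copy for this user on next sync.
authorized["card-123"].discard("confidential")
assert sync_keys("card-123", "0000") == {}
```

The card never touches the content keys at all; it gates the sync, which is what keeps the key management centralized and the revocation instant.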