Monday Apr 18, 2011

Back to Blogging

Over the past two years, a few new features have been designed for JDK 7. I will try to demonstrate some of them in this blog, especially weak cryptographic algorithm control and TLS 1.1/1.2.

Sunday Jul 19, 2009

Document published: SunJSSE FIPS 140 Compliant Mode

If you have reviewed the online JSSE Reference Guide recently, you may have found that in the Related Documentation section there is a new link to the just-published document, FIPS 140 Compliant Mode for SunJSSE.

Saturday Jul 18, 2009

Dump PKCS11 Slot Info

Recently, I needed a tool to show detailed PKCS11 slot information. Cryptoadm is a good utility for displaying the cryptographic provider information of a system, but it does not show the "ulMaxSessionCount" field, which was important to me at the time: I was eager to know the maximum number of sessions that a single application can open with the token at one time. Google did not help this time, so I had to write a simple tool myself.

I'll paste the code here; maybe one day it will save me a lot of time when I need such detailed slot info again.

Compile the code with:

$gcc cryinfo.c -o slotinfo -lpkcs11

Copy (or download), save, and compile the source code below:

#include <stdio.h>
#include <stdlib.h>
#include <security/cryptoki.h>
#include <security/pkcs11.h>

extern void dump_info(void);

int main(int argc, char **argv) {
    CK_RV               rv;

    // initialize the crypto library
    rv = C_Initialize(NULL_PTR);
    if (rv != CKR_OK) {
        fprintf(stderr, "C_Initialize: Error = 0x%.8lX\n", rv);
        return -1;
    }

    dump_info();

    rv = C_Finalize(NULL_PTR);
    if (rv != CKR_OK) {
        fprintf(stderr, "C_Finalize: Error = 0x%.8lX\n", rv);
        return -1;
    }

    return 0;
}

void dump_info(void) {
    CK_RV               rv;
    CK_SLOT_INFO        slotInfo;
    CK_TOKEN_INFO       tokenInfo;
    CK_ULONG            ulSlotCount = 0;
    CK_SLOT_ID_PTR      pSlotList = NULL_PTR;
    CK_ULONG            i = 0;

    rv = C_GetSlotList(0, NULL_PTR, &ulSlotCount);
    if (rv != CKR_OK) {
        fprintf(stderr, "C_GetSlotList: Error = 0x%.8lX\n", rv);
        return;
    }

    fprintf(stdout, "slotCount = %lu\n", ulSlotCount);
    pSlotList = malloc(ulSlotCount * sizeof(CK_SLOT_ID));
    if (pSlotList == NULL) {
        fprintf(stderr, "System error: unable to allocate memory\n");
        return;
    }

    rv = C_GetSlotList(0, pSlotList, &ulSlotCount);
    if (rv != CKR_OK) {
        fprintf(stderr, "C_GetSlotList: Error = 0x%.8lX\n", rv);
        free(pSlotList);
        return;
    }

    for (i = 0; i < ulSlotCount; i++) {
        fprintf(stdout, "slot found: %lu ----\n", pSlotList[i]);
        rv = C_GetSlotInfo(pSlotList[i], &slotInfo);
        if (rv != CKR_OK) {
            fprintf(stderr, "C_GetSlotInfo: Error = 0x%.8lX\n", rv);
            free(pSlotList);
            return;
        }

        // PKCS#11 strings are blank-padded, not null-terminated,
        // so print them with explicit field widths
        fprintf(stdout, "slot description: %.64s\n", slotInfo.slotDescription);
        fprintf(stdout, "slot manufacturer: %.32s\n", slotInfo.manufacturerID);
        fprintf(stdout, "slot flags: 0x%.8lX\n", slotInfo.flags);
        fprintf(stdout, "slot hardwareVersion: %d.%d\n",
            slotInfo.hardwareVersion.major, slotInfo.hardwareVersion.minor);
        fprintf(stdout, "slot firmwareVersion: %d.%d\n",
            slotInfo.firmwareVersion.major, slotInfo.firmwareVersion.minor);

        rv = C_GetTokenInfo(pSlotList[i], &tokenInfo);
        if (rv != CKR_OK) {
            fprintf(stderr, "C_GetTokenInfo: Error = 0x%.8lX\n", rv);
            free(pSlotList);
            return;
        }

        fprintf(stdout, "Token label: %.32s\n", tokenInfo.label);
        fprintf(stdout, "Token manufacturer: %.32s\n", tokenInfo.manufacturerID);
        fprintf(stdout, "Token model: %.16s\n", tokenInfo.model);
        fprintf(stdout, "Token serial: %.16s\n", tokenInfo.serialNumber);
        fprintf(stdout, "Token flags: 0x%.8lX\n", tokenInfo.flags);
        fprintf(stdout, "Token ulMaxSessionCount: %lu\n",
                                tokenInfo.ulMaxSessionCount);
        fprintf(stdout, "Token ulSessionCount: %lu\n",
                                tokenInfo.ulSessionCount);
        fprintf(stdout, "Token ulMaxRwSessionCount: %lu\n",
                                tokenInfo.ulMaxRwSessionCount);
        fprintf(stdout, "Token ulRwSessionCount: %lu\n",
                                tokenInfo.ulRwSessionCount);
        fprintf(stdout, "Token ulMaxPinLen: %lu\n", tokenInfo.ulMaxPinLen);
        fprintf(stdout, "Token ulMinPinLen: %lu\n", tokenInfo.ulMinPinLen);
        fprintf(stdout, "Token ulTotalPublicMemory: %lu\n",
                                tokenInfo.ulTotalPublicMemory);
        fprintf(stdout, "Token ulFreePublicMemory: %lu\n",
                                tokenInfo.ulFreePublicMemory);
        fprintf(stdout, "Token ulTotalPrivateMemory: %lu\n",
                                tokenInfo.ulTotalPrivateMemory);
        fprintf(stdout, "Token ulFreePrivateMemory: %lu\n",
                                tokenInfo.ulFreePrivateMemory);
        fprintf(stdout, "Token hardwareVersion: %d.%d\n",
            tokenInfo.hardwareVersion.major, tokenInfo.hardwareVersion.minor);
        fprintf(stdout, "Token firmwareVersion: %d.%d\n",
            tokenInfo.firmwareVersion.major, tokenInfo.firmwareVersion.minor);
        fprintf(stdout, "Token utcTime: %.16s\n", tokenInfo.utcTime);
        fprintf(stdout, "\n");
    }

    free(pSlotList);
}

Monday Jul 13, 2009

An Aggregate of Feeds: Top Influencers on IT Security

An aggregate of feeds, http://feeds.feedburner.com/influenceronsec, from Bruce Schneier, Alan Shimel, and more.

Thursday Jul 02, 2009

Enable OCSP checking

If a certificate is issued with an authority information access extension that indicates the OCSP access method and location, one can enable the default implementation of the OCSP checker when building or validating a certification path.

You may need to check your certificate first, to make sure it includes an OCSP authority information access extension:

#${JAVA_HOME}/bin/keytool -printcert -v -file target.cert

You are expected to see similar lines in the output:

#3: ObjectId: 1.3.6.1.5.5.7.1.1 Criticality=false
AuthorityInfoAccess [
[accessMethod: 1.3.6.1.5.5.7.48.1
accessLocation: URIName: http://onsite-ocsp.verisign.com]
]

In the above output, "http://onsite-ocsp.verisign.com" indicates the location of the OCSP service.

If you find a similar authority information access extension in your certificate path, you can enable the OCSP checker.

In the Sun PKIX implementation, OCSP checking is not enabled by default, for compatibility reasons. Note that enabling OCSP checking only has an effect if revocation checking has also been enabled. So, in order to enable the OCSP checker, first activate certificate revocation checking, then activate OCSP checking. It is simple and straightforward, needing only a few lines:

PKIXParameters params = new PKIXParameters(anchors);

// Activate certificate revocation checking
params.setRevocationEnabled(true);

// Activate OCSP
Security.setProperty("ocsp.enable", "true");

After the above two configurations, the default Sun PKIX implementation will try to get the certificate status from the OCSP service indicated in the authority information access extension. For the above example, "http://onsite-ocsp.verisign.com" is the OCSP service. The enabled Sun OCSP checker will send a certificate status request to the service, get the response, and analyze the status from the response; if the status is revoked or unknown, the target certificate will be rejected.

Here is some sample code I wrote to help you test your certificates and OCSP service; hope it helps.

/**
 * @author Xuelei Fan
 */
import java.io.*;
import java.net.SocketException;
import java.util.*;
import java.security.Security;
import java.security.cert.*;

public class AuthorizedResponderNoCheck {

    static String selfSignedCertStr =
        "-----BEGIN CERTIFICATE-----\n" +
        // copy your trust anchor certificate here, in PEM format.
        "-----END CERTIFICATE-----";

    static String trustedCertStr =
        "-----BEGIN CERTIFICATE-----\n" +
        // copy your trusted enterprise certificate here, in PEM format.
        "-----END CERTIFICATE-----";

    static String issuerCertStr =
        "-----BEGIN CERTIFICATE-----\n" +
        // copy the intermediate CA certificate here, in PEM format.
        "-----END CERTIFICATE-----";

    static String targetCertStr =
        "-----BEGIN CERTIFICATE-----\n" +
        // copy the target certificate here, in PEM format.
        "-----END CERTIFICATE-----";


    private static CertPath generateCertificatePath()
            throws CertificateException, IOException {
        // generate certificates from the cert strings
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        ByteArrayInputStream is =
            new ByteArrayInputStream(issuerCertStr.getBytes());
        Certificate issuerCert = cf.generateCertificate(is);

        is = new ByteArrayInputStream(targetCertStr.getBytes());
        Certificate targetCert = cf.generateCertificate(is);

        is = new ByteArrayInputStream(trustedCertStr.getBytes());
        Certificate trustedCert = cf.generateCertificate(is);

        is.close();

        // generate the certification path
        List<Certificate> list = Arrays.asList(new Certificate[] {
                        targetCert, issuerCert, trustedCert});

        return cf.generateCertPath(list);
    }

    private static Set<TrustAnchor> generateTrustAnchors()
            throws CertificateException, IOException {
        // generate a certificate from the cert string
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        ByteArrayInputStream is =
                    new ByteArrayInputStream(selfSignedCertStr.getBytes());
        Certificate selfSignedCert = cf.generateCertificate(is);

        is.close();

        // generate a trust anchor
        TrustAnchor anchor =
            new TrustAnchor((X509Certificate)selfSignedCert, null);

        return Collections.singleton(anchor);
    }

    public static void main(String args[]) throws Exception {

        // if you work behind proxy, configure the proxy.
        System.setProperty("http.proxyHost", "proxyhost");
        System.setProperty("http.proxyPort", "proxyport");

        CertPath path = generateCertificatePath();
        Set anchors = generateTrustAnchors();

        PKIXParameters params = new PKIXParameters(anchors);

        // Activate certificate revocation checking
        params.setRevocationEnabled(true);

        // Activate OCSP
        Security.setProperty("ocsp.enable", "true");

        // Activate CRLDP
        System.setProperty("com.sun.security.enableCRLDP", "true");

        // Ensure that the ocsp.responderURL property is not set.
        if (Security.getProperty("ocsp.responderURL") != null) {
            throw new
                Exception("The ocsp.responderURL property must not be set");
        }

        CertPathValidator validator = CertPathValidator.getInstance("PKIX");

        validator.validate(path, params);
    }
}

Thursday Jun 25, 2009

An Aggregate of Feeds on Java Security and Networking

To make it easier to keep track of blogs on Java security and networking, I just created an aggregate of feeds, http://feeds.feedburner.com/javasec, and subscribed to it in my feed reader, Thunderbird. If you are blogging on Java security or networking, please let me know; I would like to subscribe to your feed and add it to the aggregator.

Of course, you are welcome to subscribe to the aggregated feed, http://feeds.feedburner.com/javasec.

Thursday Jun 18, 2009

TLS and NIST's Policy on Hash Functions

NIST's Policy on Hash Functions

March 15, 2006: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms. Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010. After 2010, Federal agencies may use SHA-1 only for the following applications: hash-based message authentication codes (HMACs); key derivation functions (KDFs); and random number generators (RNGs). Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.

TLS specifics of hash functions

  1. MAC constructions

    A number of operations in the TLS record and handshake layer require a keyed Message Authentication Code (MAC) to protect message integrity or to construct key derivation functions.

    For TLS 1.0 and 1.1, the construction used is known as HMAC; TLS 1.2 still uses HMAC, but it also declares that "Other cipher suites MAY define their own MAC constructions, if needed."

  2. HMAC at handshaking

    HMAC can be used with a variety of different hash algorithms. However, TLS 1.0 and TLS 1.1 use it in handshaking with two different algorithms, MD5 (HMAC-MD5) and SHA-1 (HMAC-SHA). Additional hash algorithms can be defined by cipher suites and used to protect record data, but MD5 and SHA-1 are hard-coded into the description of handshaking for TLS 1.0 and TLS 1.1.

    TLS 1.2 moves away from the hard-coded MD5 and SHA-1: SHA-256 is the default hash function for all cipher suites defined in TLS 1.2, TLS 1.1, and TLS 1.0 when TLS 1.2 is negotiated. TLS 1.2 also declares that "New cipher suites MUST explicitly specify a PRF and, in general, SHOULD use the TLS PRF with SHA-256 or a stronger standard hash function", which means that the hash functions used at handshaking should be SHA-256 at least.

  3. HMAC at protecting record

    For the HMAC operations used to protect record data, the hash function is defined by the cipher suite. For example, the HMAC hash function of cipher suite TLS_RSA_WITH_NULL_MD5 is MD5.

    TLS 1.0 and TLS 1.1 define three hash functions for HMAC:

    • null
    • MD5
    • SHA1

    From TLS 1.2 on, new cipher suites may define their own MAC constructions besides the default HMAC. TLS 1.2 defines six MAC algorithms; from the literal names, it is straightforward to tell the hash function used.

    • null
    • hmac_md5
    • hmac_sha1
    • hmac_sha256
    • hmac_sha384
    • hmac_sha512

  4. Pseudo-Random Function

    The pseudo-random function plays a key role in TLS handshaking: it is used to calculate the master secret, derive session keys, and verify the just-negotiated algorithms via the Finished message. The TLS specifications define the PRF based on HMAC.

    For TLS 1.0 and TLS 1.1, the PRF is created by splitting the secret into two halves and using one half to generate data with P_MD5 and the other half to generate data with P_SHA-1, then exclusive-ORing the outputs of these two expansion functions together.

    PRF(secret, label, seed) = P_MD5(S1, label + seed) XOR P_SHA-1(S2, label + seed);

    TLS 1.2 defines a PRF based on HMAC as in TLS 1.0/1.1, except that the hash algorithm used is SHA-256: "This PRF with the SHA-256 hash function is used for all cipher suites defined in this document and in TLS documents published prior to this document when TLS 1.2 is negotiated. New cipher suites MUST explicitly specify a PRF and, in general, SHOULD use the TLS PRF with SHA-256 or a stronger standard hash function."

    Unlike TLS 1.0/1.1, the PRF of TLS 1.2 does not require splitting the secret any more; only one hash function is used:

    PRF(secret, label, seed) = P_<hash>(secret, label + seed)
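    The expansion function P_<hash> shared by both PRF definitions can be sketched with the standard javax.crypto HMAC API. This is an illustrative sketch of P_hash only, not the complete TLS PRF; the class name PHash is mine:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// An illustrative sketch of the P_hash expansion function:
//   P_hash(secret, seed) = HMAC_hash(secret, A(1) + seed) +
//                          HMAC_hash(secret, A(2) + seed) + ...
//   where A(0) = seed and A(i) = HMAC_hash(secret, A(i-1)).
// The class name PHash is mine; it is not a JSSE/JCE class.
public class PHash {
    public static byte[] pHash(String macAlg, byte[] secret,
            byte[] seed, int length) throws Exception {
        Mac mac = Mac.getInstance(macAlg);
        mac.init(new SecretKeySpec(secret, macAlg));
        byte[] output = new byte[length];
        byte[] a = seed;                      // A(0) = seed
        int offset = 0;
        while (offset < length) {
            a = mac.doFinal(a);               // A(i) = HMAC(secret, A(i-1))
            mac.update(a);                    // HMAC(secret, A(i) + seed)
            byte[] chunk = mac.doFinal(seed);
            int n = Math.min(chunk.length, length - offset);
            System.arraycopy(chunk, 0, output, offset, n);
            offset += n;
        }
        return output;
    }
}
```

    For TLS 1.0/1.1, the PRF XORs P_MD5 over one half of the secret with P_SHA-1 over the other half; for TLS 1.2, a single P_SHA256 (or stronger) computation over the whole secret is used.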

  5. Hash function at ServerKeyExchange

    In the handshaking message ServerKeyExchange, some key exchange methods, such as RSA, diffie_hellman, ec_diffie_hellman, ecdsa, etc., need a so-called "signature" to protect the exchanged parameters.

    TLS 1.0 and TLS 1.1 use SHA-1 ( or with MD5 at the same time) to generate the digest for the "signature". While for TLS 1.2, the hash function may be other than SHA-1, it is varied with the ServerKeyExchange message context, such as "signature algorithm" extension, the server end-entity certificate.

  6. Server Certificates

    In TLS 1.0/1.1, there is no way for the client to indicate to the server what kind of server certificates it would accept. TLS 1.2 defines an extension, signature_algorithms, to indicate to the server which signature/hash algorithm pairs may be used in digital signatures. The hash algorithm can be one of:

    • none
    • md5
    • sha1
    • sha224
    • sha256
    • sha384
    • sha512

  7. Client Certificates

    In TLS 1.0/1.1, a TLS server can request a series of client certificate types, but the "type" here refers to the "signature" algorithm and does not include the hash algorithm the certificate should be signed with. So a certificate signed with a stronger signature algorithm, such as 2048-bit RSA, but with a weak hash function, such as MD5, would meet the requirements. That's not enough.

    TLS 1.2 extends the CertificateRequest handshaking message with a addtional field, "supported_signature_algorithms", to indicate to the client which signature/hash algorithm pairs may be used in digital signatures. The hash algorithm could be one of:

    • none
    • md5
    • sha1
    • sha224
    • sha256
    • sha384
    • sha512

What FIPS 140-2 Requires

In the last update of "Implementation Guidance for FIPS PUB 140-2": "The KDF in TLS is allowed only for the purpose of establishing keying material (in particular, the master secret) for a TLS session with the following restrictions, even though the use of the SHA-1 and MD5 hash functions is not consistent with Table 1 or Table 2 of SP 800-56A:"

  1. The use of MD5 is allowed in the TLS protocol only; MD5 shall not be used as a general hash function.
  2. The maximum number of blocks of secret keying material that can be produced by repeated use of the pseudorandom function during a single call to the TLS key derivation function shall be 2^32-1.

NIST's Policy Compliant profile for TLS

NIST's policy on hash functions can be split into four principles. We discuss the profile according to these principles.

  • Principle 1: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms.

    MD5 is not a FIPS-approved hash function, so first of all the profile needs to disable all cipher suites whose MAC algorithm is MD5.

    • TLS_RSA_WITH_NULL_MD5
    • TLS_RSA_EXPORT_WITH_RC4_40_MD5
    • TLS_RSA_WITH_RC4_128_MD5
    • TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_DH_anon_EXPORT_WITH_RC4_40_MD5
    • TLS_DH_anon_WITH_RC4_128_MD5
    • TLS_KRB5_WITH_DES_CBC_MD5
    • TLS_KRB5_WITH_3DES_EDE_CBC_MD5
    • TLS_KRB5_WITH_RC4_128_MD5
    • TLS_KRB5_WITH_IDEA_CBC_MD5
    • TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5
    • TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_KRB5_EXPORT_WITH_RC4_40_MD5

    The SHA-2 family of hash functions is completely compliant with the policy. The profile can safely enable those cipher suites based on SHA-2:

    • TLS_RSA_WITH_NULL_SHA256
    • TLS_RSA_WITH_AES_128_CBC_SHA256
    • TLS_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DH_DSS_WITH_AES_128_CBC_SHA256
    • TLS_DH_RSA_WITH_AES_128_CBC_SHA256
    • TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
    • TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
    • TLS_DH_DSS_WITH_AES_256_CBC_SHA256
    • TLS_DH_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
    • TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DH_anon_WITH_AES_128_CBC_SHA256
    • TLS_DH_anon_WITH_AES_256_CBC_SHA256
    • TLS_RSA_WITH_AES_128_GCM_SHA256
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
    • TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DH_RSA_WITH_AES_128_GCM_SHA256
    • TLS_DH_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
    • TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
    • TLS_DH_DSS_WITH_AES_128_GCM_SHA256
    • TLS_DH_DSS_WITH_AES_256_GCM_SHA384
    • TLS_DH_anon_WITH_AES_128_GCM_SHA256
    • TLS_DH_anon_WITH_AES_256_GCM_SHA384
    • TLS_PSK_WITH_AES_128_GCM_SHA256
    • TLS_PSK_WITH_AES_256_GCM_SHA384
    • TLS_DHE_PSK_WITH_AES_128_GCM_SHA256
    • TLS_DHE_PSK_WITH_AES_256_GCM_SHA384
    • TLS_RSA_PSK_WITH_AES_128_GCM_SHA256
    • TLS_RSA_PSK_WITH_AES_256_GCM_SHA384
    • TLS_PSK_WITH_AES_128_CBC_SHA256
    • TLS_PSK_WITH_AES_256_CBC_SHA384
    • TLS_PSK_WITH_NULL_SHA256
    • TLS_PSK_WITH_NULL_SHA384
    • TLS_DHE_PSK_WITH_AES_128_CBC_SHA256
    • TLS_DHE_PSK_WITH_AES_256_CBC_SHA384
    • TLS_DHE_PSK_WITH_NULL_SHA256
    • TLS_DHE_PSK_WITH_NULL_SHA384
    • TLS_RSA_PSK_WITH_AES_128_CBC_SHA256
    • TLS_RSA_PSK_WITH_AES_256_CBC_SHA384
    • TLS_RSA_PSK_WITH_NULL_SHA256
    • TLS_RSA_PSK_WITH_NULL_SHA384
    • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_PSK_WITH_NULL_SHA256
    • TLS_ECDHE_PSK_WITH_NULL_SHA384

    Those cipher suites with a MAC algorithm of SHA-1 are addressed by the following principles.

  • Principle 2: Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010.
    Profile ServerKeyExchange Message

    ServerKeyExchange depends on digital signatures; the profile should stop using the SHA-1 hash function for the ServerKeyExchange handshaking message.

    TLS 1.0 and TLS 1.1 use SHA-1 ( or with MD5 at the same time) to generate the digest for the "signature". There is no way to disable SHA-1 in ServerKeyExchange handshaking message. ServerKeyExchange is a optional handshaking message," it is sent by the server only when the server certificate message (if sent) does not contain enough data to allow the client to exchange a premaster secret. This is true for the following key exchange methods:"

    • RSA_EXPORT (if the public key in the server certificate is longer than 512 bits)
    • DHE_DSS
    • DHE_DSS_EXPORT
    • DHE_RSA
    • DHE_RSA_EXPORT
    • DH_anon

    For TLS 1.0 and TLS 1.1, the profile needs to disable the above key exchange methods, in order to prevent the ServerKeyExchange handshaking message from occurring, by disabling the following cipher suites:

    • TLS_RSA_EXPORT_WITH_RC4_40_MD5
    • TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_DSS_WITH_DES_CBC_SHA
    • TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
    • TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_RSA_WITH_DES_CBC_SHA
    • TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
    • TLS_DH_anon_EXPORT_WITH_RC4_40_MD5
    • TLS_DH_anon_WITH_RC4_128_MD5
    • TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DH_anon_WITH_DES_CBC_SHA
    • TLS_DH_anon_WITH_3DES_EDE_CBC_SHA
    • TLS_DHE_DSS_WITH_AES_128_CBC_SHA
    • TLS_DHE_RSA_WITH_AES_128_CBC_SHA
    • TLS_DH_anon_WITH_AES_128_CBC_SHA
    • TLS_DHE_DSS_WITH_AES_256_CBC_SHA
    • TLS_DHE_RSA_WITH_AES_256_CBC_SHA
    • TLS_DH_anon_WITH_AES_256_CBC_SHA

    In TLS 1.2, the hash function used with ServerKeyExchange may be other than SHA-1; the following rules are defined:

    • Signature Algorithm Extension: If the client has offered the "signature_algorithms" extension, the signature algorithm and hash algorithm used in ServerKeyExchange message MUST be a pair listed in that extension.

      Per this rule, the profile requires that the "signature_algorithms" extension sent by client should include only SHA-2 hash algorithms or stronger, and must not include the hash algorithms: "none", "md5", and "sha1".

    • Compatible with the Key in Server's EE Certificate: the hash and signature algorithms used in ServerKeyExchange message MUST be compatible with the key in the server's end-entity certificate.

      Per this rule, the profile requires that the server end-entity certificate must be signed with SHA-2 or stronger hash functions.

      Note that, at present, DSA (DSS) may only be used with SHA-1, so the profile will not allow server end-entity certificates signed with DSA (DSS).

    Profile Server Certificate

    In TLS 1.0/1.1, there is no way for the client to indicate to the server what kind of server certificates it would accept. What we can do here, from the point of view of programming and management, is require that all server certificates be signed with SHA-2 or stronger hash functions, and carefully check that no certificate in the chain is signed with anything weaker.

    In TLS 1.2, there is a protocol-specified behavior, the "signature_algorithms" extension: "If the client provided a 'signature_algorithms' extension, then all certificates provided by the server MUST be signed by a hash/signature algorithm pair that appears in that extension." Per the specification, the profile requires that the "signature_algorithms" extension sent by the client include only SHA-2 hash algorithms or stronger, and exclude the hash algorithms "none", "md5", and "sha1".

    However, the "signature_algorithms" extension is not a mandatory extension in TLS 1.2; when the server does not receive the "signature_algorithms" extension, it still needs to comply with the NIST principle. So the profile still requires that all server certificates be signed with SHA-2 or stronger hash functions, from the point of view of programming and management.

    Profile Client Certificate

    In TLS 1.0/1.1, there is no way for the server to indicate to the client what kind of hash algorithm the client certificates should be signed with. What we can do here, from the point of view of programming and management, is require that all client certificates be signed with SHA-2 or stronger hash functions.

    TLS 1.2 extends the CertificateRequest handshaking message with an additional field, "supported_signature_algorithms", to indicate to the client which signature/hash algorithm pairs may be used in digital signatures. The profile requires that the "supported_signature_algorithms" field include only SHA-2 hash algorithms or stronger, and exclude the hash algorithms "none", "md5", and "sha1".

  • Principle 3: After 2010, Federal agencies may use SHA-1 only for the following applications:
    • hash-based message authentication codes (HMACs);
    • key derivation functions (KDFs);
    • random number generators (RNGs).

    Except for the ServerKeyExchange, server Certificate, and client Certificate messages, the hash functions used in the TLS protocol serve HMAC, KDF, or RNG purposes, which is allowed by the policy. No additional profile is needed for this principle.

  • Principle 4: Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.
    TLS 1.0 and TLS 1.1 depend entirely on SHA-1 and MD5; there is no way to obey this principle. In order to fully remove the dependency on SHA-1/MD5, one has to upgrade to TLS 1.2 or a later revision.

A strict mode profile

  1. Disable all cipher suites whose MAC algorithm is MD5;
  2. Disable all cipher suites which may trigger the ServerKeyExchange message;
  3. Accept only certificates signed with SHA-2 or stronger hash functions;
  4. Upgrade to TLS 1.2 for the purpose of fully removing the dependence on weak hash functions.

Put it into practice

Currently, the Java SDK does not support TLS 1.1 or later. The proposals discussed here are for TLS 1.0, which is implemented by the default SunJSSE provider.

  1. Disable cipher suite

    JSSE has no APIs to disable a particular cipher suite, but there are APIs to set which cipher suites may be used for handshaking. Refer to SSLSocket.setEnabledCipherSuites(String[] suites), SSLServerSocket.setEnabledCipherSuites(String[] suites), and SSLEngine.setEnabledCipherSuites(String[] suites) for detailed usage.

    By default, SunJSSE enables both MD5-based and SHA-1-based cipher suites, as well as cipher suites that trigger the ServerKeyExchange message. In FIPS mode, SunJSSE enables SHA-1-based cipher suites only; however, some cipher suites that trigger ServerKeyExchange are still enabled. So, considering the above strict mode profile, the coder must explicitly call setEnabledCipherSuites(String[] suites), and the parameter "suites" must not include MD5-based cipher suites or cipher suites that trigger the ServerKeyExchange handshaking message.
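    One possible way to build such a parameter is sketched below, under the assumption that simple substring matching on the standard cipher suite names is acceptable; the class name StrictSuiteFilter and the deny list are mine, and the deny list is illustrative rather than exhaustive:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch: start from the suites the provider supports and
// keep only those acceptable under the strict profile. The patterns
// below are illustrative assumptions, not an authoritative deny list.
public class StrictSuiteFilter {
    private static final String[] DENIED_PATTERNS = {
        "_MD5",          // MD5-based MAC
        "_DHE_", "_DH_", // may trigger ServerKeyExchange
        "_EXPORT_"       // export suites may trigger ServerKeyExchange
    };

    public static String[] filter(String[] supported) {
        List<String> kept = new ArrayList<>();
        for (String suite : supported) {
            boolean denied = false;
            for (String p : DENIED_PATTERNS) {
                if (suite.contains(p)) { denied = true; break; }
            }
            if (!denied) kept.add(suite);
        }
        return kept.toArray(new String[0]);
    }
}
```

    A typical use would be sslSocket.setEnabledCipherSuites(StrictSuiteFilter.filter(sslSocket.getSupportedCipherSuites())).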

  2. Constrain certificate signature algorithms

    The strict profile suggests that all certificates be signed with SHA-2 or stronger hash functions. In JSSE, the processes of choosing a certificate for the remote peer and validating the certificate received from the remote peer are controlled by KeyManager/X509KeyManager and TrustManager/X509TrustManager. By default, the SunJSSE provider does not set any limit on the certificates' hash functions. Considering the above strict profile, the coder should customize the KeyManager and TrustManager, and make available or trusted only those certificates signed with SHA-2 or stronger hash functions.

    Please refer to the section X509TrustManager Interface in the JSSE Reference Guide for details about how to customize the trust manager by creating your own X509TrustManager, and to the section X509KeyManager Interface in the JSSE Reference Guide for details about how to customize the key manager by creating your own X509KeyManager.
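    For example, assuming it is acceptable to reject weak signature hashes on top of an existing trust manager, a delegating trust manager might look like this (the class name StrictHashTrustManager is mine, not a JSSE class):

```java
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// A sketch of one way to customize the trust manager: wrap an existing
// X509TrustManager and additionally reject any chain containing a
// certificate signed with an MD5- or SHA-1-based algorithm.
public class StrictHashTrustManager implements X509TrustManager {
    private final X509TrustManager delegate;

    public StrictHashTrustManager(X509TrustManager delegate) {
        this.delegate = delegate;
    }

    // sigAlgName as returned by X509Certificate.getSigAlgName(),
    // e.g. "MD5withRSA", "SHA1withRSA", "SHA256withRSA"
    public static boolean isWeakSigAlg(String sigAlgName) {
        String alg = sigAlgName.toUpperCase();
        return alg.contains("MD5") || alg.contains("SHA1")
                || alg.contains("SHA-1");
    }

    private static void checkHash(X509Certificate[] chain)
            throws CertificateException {
        for (X509Certificate cert : chain) {
            if (isWeakSigAlg(cert.getSigAlgName())) {
                throw new CertificateException(
                    "weak signature hash: " + cert.getSigAlgName());
            }
        }
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        checkHash(chain);
        delegate.checkClientTrusted(chain, authType);
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        checkHash(chain);
        delegate.checkServerTrusted(chain, authType);
    }

    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}
```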

Note that the above profile and suggestions are my personal understanding of NIST's policy and TLS; they are my very personal suggestions, not official proposals from Sun.

Linkage to the blog entry at simabc.blogspot.com

Tuesday Jun 16, 2009

Publicly Accessible LDAP Servers

In order to learn JNDI, one needs an LDAP server for various purposes. In the JNDI tutorial, a few publicly accessible servers are documented[1]. However, the list is quite old, and those servers are out of service.

Via Google, I found the following two collections[2][3] of publicly accessible LDAP servers.

And thanks to Ludovic, who commented that FreeLDAP.org is an alternative. FreeLDAP.org[4] is a free LDAP service to which you can add your own entries; best of all, it provides the service over SSL and requires individual authentication, which is handy for the examples that need SSL or user authentication.

[1] Publicly accessible servers

[2] http://www.keutel.de/directory/public_ldap_servers.html

[3] http://www.emailman.com/ldap/public.html

[4] http://www.freeldap.org/

JSSE Troubleshooting: Certificates Order in TLS Handshaking

Issue:

Failed with an exception: java.security.cert.CertPathValidatorException: subject/issuer name chaining check failed.

Example:

Test case:
//
// JSSE Troubleshooting: Disordered Certificate List in TLS Handshaking
//
import java.net.*;

public class DisorderedCertificateList {
    public static void main(String[] arguments) throws Exception {
        URL url = new URL("https://myservice.example.com/");
        URLConnection connection = url.openConnection();

        connection.getInputStream().close();
    }
}
Test environment:

The HTTPS server, myservice.example.com, is configured with a certificate path in which the certificates are out of order. For example, the expected certificate path is server_certificate -> intermediate ca -> self-signed root ca. However, the certificate path is configured as server_certificate -> self-signed root ca -> intermediate ca.

Test Result:
	Exception in thread "main" javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: subject/issuer name chaining check failed
		at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
		at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1627)
		at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:204)
		at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:198)
		at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:994)
		at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:142)
		at sun.security.ssl.Handshaker.processLoop(Handshaker.java:533)
		at sun.security.ssl.Handshaker.process_record(Handshaker.java:471)
		at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:904)
		at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1132)
		at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1159)
		at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1143)
		at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:423)
		at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
		at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:997)
		at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
		at DisorderedCertificateList.main(DisorderedCertificateList.java:11)
	Caused by: sun.security.validator.ValidatorException: PKIX path validation failed: java.security.cert.CertPathValidatorException: subject/issuer name chaining check failed
		at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:266)
		at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:249)
		at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:172)
		at sun.security.validator.Validator.validate(Validator.java:235)
		at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:147)
		at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:230)
		at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:270)
		at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:973)
		... 12 more
	Caused by: java.security.cert.CertPathValidatorException: subject/issuer name chaining check failed
		at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:153)
		at sun.security.provider.certpath.PKIXCertPathValidator.doValidate(PKIXCertPathValidator.java:321)
		at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:186)
		at java.security.cert.CertPathValidator.validate(CertPathValidator.java:267)
		at sun.security.validator.PKIXValidator.doValidate(PKIXValidator.java:261)
		... 19 more
	

Cause:

Per the TLS specification (page 39, section 7.4.2, RFC2246), the certificate list carried in the server Certificate message or client Certificate message "is a sequence (chain) of X.509v3 certificates. The sender's certificate must come first in the list. Each following certificate must directly certify the one preceding it."

So the certificate order of the above test case, server_certificate -> self-signed root ca -> intermediate ca, does not comply with the TLS specification, and the TLS handshake is expected to fail.

Solution:

Check the TLS/SSL configuration, and make sure that the certificate list sent to the peer is properly configured and in the correct order.
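The ordering rule can also be checked mechanically. The sketch below is my own illustration; for brevity each certificate is modeled as a {subjectDN, issuerDN} string pair, while with real java.security.cert.X509Certificate objects you would compare cert[i].getIssuerX500Principal() against cert[i+1].getSubjectX500Principal():

```java
// Checks the RFC 2246 ordering rule on a certificate list: each
// certificate must directly certify (be the issuer of) the one before it.
public class ChainOrderCheck {
    // chain[i][0] is the subject DN, chain[i][1] is the issuer DN.
    static boolean isProperlyOrdered(String[][] chain) {
        for (int i = 0; i + 1 < chain.length; i++) {
            String issuerOfCurrent = chain[i][1];
            String subjectOfNext = chain[i + 1][0];
            if (!issuerOfCurrent.equals(subjectOfNext)) {
                return false;
            }
        }
        return true;
    }
}
```

Applied to the test case above, server -> intermediate -> root passes, while the disordered server -> root -> intermediate fails.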



Friday Jun 12, 2009

RSA AlgorithmIdentifier of X.509 Certificate

By far, RSA is the most widely used cryptographic algorithm. Both ITU-T X.509 and the IETF PKIX WG define an RSA algorithm identifier; however, they are not identical.

ITU-T X.509[1] defines the algorithm as:

rsa ALGORITHM ::= {
    KeySize
    IDENTIFIED BY  id-ea-rsa
}

KeySize ::= INTEGER

id-ea-rsa OBJECT IDENTIFIER ::= {joint-iso-itu-t(2) ds(5)
algorithm(8) encryptionAlgorithm(1) rsa(1)}

While IETF PKIX WG[2] defines the algorithm as:
rsaPublicKey ALGORITHM-ID ::= { OID rsaEncryption PARMS NULL }

rsaEncryption OBJECT IDENTIFIER ::= {iso(1) member-body(2)
us(840) rsadsi(113549) pkcs(1) pkcs-1(1) rsaEncryption(1)}
 
  

There are two differences:
1. different OID.
    ITU-T defines it as "2.5.8.1.1", while PKIX WG defines it as "1.2.840.113549.1.1.1"

2. different algorithm parameters
    ITU-T defines a parameter for RSA, "KeySize", while the PKIX WG defines the parameters as NULL.

Indeed, the RSA encryption algorithm the PKIX WG uses is defined by PKCS#1 [3][4]; it is the industry standard definition. Most of the world uses the PKCS#1 OID, not the ITU-T one. Because of the above differences, there is a risk of interoperability problems between ITU-T X.509 compliant implementations and PKIX compliant implementations.
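A quick way to see which identifier the JDK itself emits is to generate an RSA key pair and look for the DER encoding of the PKCS#1 OID 1.2.840.113549.1.1.1 (bytes 06 09 2A 86 48 86 F7 0D 01 01 01) inside the X.509 SubjectPublicKeyInfo structure returned by getEncoded(). This is a small sketch of my own:

```java
import java.security.KeyPairGenerator;
import java.security.PublicKey;

public class RsaOidCheck {
    // DER encoding of OID 1.2.840.113549.1.1.1 (rsaEncryption):
    // tag 0x06, length 9, then the encoded arcs.
    static final byte[] RSA_OID = {
        0x06, 0x09, 0x2A, (byte) 0x86, 0x48, (byte) 0x86,
        (byte) 0xF7, 0x0D, 0x01, 0x01, 0x01
    };

    // Naive substring search for the OID bytes in a DER encoding.
    static boolean containsRsaOid(byte[] encoded) {
        outer:
        for (int i = 0; i + RSA_OID.length <= encoded.length; i++) {
            for (int j = 0; j < RSA_OID.length; j++) {
                if (encoded[i + j] != RSA_OID[j]) continue outer;
            }
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        PublicKey pub = kpg.generateKeyPair().getPublic();
        // getEncoded() returns the X.509 SubjectPublicKeyInfo, whose
        // AlgorithmIdentifier carries the PKCS#1 OID, not 2.5.8.1.1.
        System.out.println("PKCS#1 OID present: "
                + containsRsaOid(pub.getEncoded()));
    }
}
```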

Before JDK 7, the Sun certificate implementation could not recognize the ITU-T X.509 OID, "2.5.8.1.1", and threw a java.security.InvalidKeyException instead. This is fixed as of OpenJDK 7 M4. If you happen to have run into a similar interoperability problem, I'd appreciate it if you comment here or mail me your problems.


[1] http://www.itu.int/ITU-T/asn1/database/itu-t/x/x509/2008/AlgorithmObjectIdentifiers.html#AlgorithmObjectIdentifiers.rsa
[2] http://www.ietf.org/rfc/rfc2459.txt
[3] http://www.rsa.com/rsalabs/node.asp?id=2125
[4] http://www.ietf.org/rfc/rfc2459.txt

Friday May 29, 2009

JSSE Debug Logging With Timestamp

These days, I was asked about a strange network delay on input/output streams when migrating a TLS protected application to a new platform. The application is built on top of SunJSSE. They enabled debugging with the option "-Djavax.net.debug=all"; however, because there is no timestamp in the debug output, the debug logging was not of much help.

Is there any way to enable JSSE debug logging with timestamps? Definitely, the answer is YES. It is straightforward.

First, create a class that extends java.io.PrintStream and override all of the println() methods. I used a static nested class here.

    final static class TimestampPrintStream extends PrintStream {
        TimestampPrintStream(PrintStream out) {
            super(out);
        }

        public void println() {
            timestamp();
            super.println();
        }

        public void println(boolean x) {
            timestamp();
            super.println(x);
        }

        public void println(char x) {
            timestamp();
            super.println(x);
        }

        public void println(int x) {
            timestamp();
            super.println(x);
        }

        public void println(long x) {
            timestamp();
            super.println(x);
        }

        public void println(float x) {
            timestamp();
            super.println(x);
        }

        public void println(double x) {
            timestamp();
            super.println(x);
        }

        public void println(char x[]) {
            timestamp();
            super.println(x);
        }

        public void println(String x) {
            timestamp();
            super.println(x);
        }

        public void println(Object x) {
            timestamp();
            super.println(x);
        }


        private void timestamp() {
            super.print("<Thread Id: " + Thread.currentThread().getId() + ">" +
                        "<Timestamp: " + System.currentTimeMillis() + ">    ");
        }
    }

 

Surely, you can change timestamp() to output whatever format you like.

Then insert the following code into a place in your application that is reached before any TLS connection is initialized. Simply, you can add it at the head of the main() method.

        if (true) {
            AccessController.doPrivileged(new PrivilegedAction<Void>() {
                public Void run() {
                    System.setOut(new TimestampPrintStream(System.out));
                    System.setErr(new TimestampPrintStream(System.err));
                    return null;
                }
            });
        } 

You see, it is simple and straightforward.

Thursday May 28, 2009

Understanding Self-Issued Certificate

Certificate Types

RFC5280 categorizes certificates into two classes: CA certificates and end entity certificates; CA certificates are further divided into three classes: cross-certificates, self-issued certificates, and self-signed certificates.

certificate
+- CA certificate
|  +- cross-certificate
|  +- self-issued certificate
|  +- self-signed certificate
+- end entity certificate

"Cross-certificates are CA certificates in which the issuer and subject are different entities. Cross-certificates describe a trust relationship between the two CAs." [RFC5280]

"Self-issued certificates are CA certificates in which the issuer and subject are the same entity. Self-issued certificates are generated to support changes in policy or operations." [RFC5280]

"Self-signed certificates are self-issued certificates where the digital signature may be verified by the public key bound into the certificate. Self-signed certificates are used to convey a public key for use to begin certification paths." [RFC5280]

Self-signed certificates are special self-issued certificates, so we can also redraw the above tree as:

certificate
+- CA certificate
|  +- cross-certificate
|  +- self-issued certificate
|     +- self-signed certificate
+- end entity certificate

Notes of Self-Signed Certificate

1. "The trust anchor information may be provided to the path processing procedure in the form of a self-signed certificate." [RFC5280]

2. "When the trust anchor is provided in the form of a self-signed certificate, this self-signed certificate is not included as part of the prospective certification path." [RFC5280]

Note of Self-Issued Certificate

1. "Name constraints are not applied to self-issued certificates (unless the certificate is the final certificate in the path)." [RFC5280]

2. "However, a CA may issue a certificate to itself to support key rollover or changes in certificate policies. These self-issued certificates are not counted when evaluating path length or name constraints."

3. The pathLenConstraint field of the basic constraints extension "gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path."

4. The value of the inhibit anyPolicy extension "indicates the number of additional non-self-issued certificates that may appear in the path before anyPolicy is no longer permitted."

Identify a Self-Issued Certificate

RFC5280 requires that if the names in the issuer and subject fields in a certificate match according to the comparison rules for internationalized names in distinguished names, then the certificate is self-issued. Please refer to section 7.1 of [RFC5280] for the comparison rules.

RFC3280, however, does not define such comparison rules; it requires that "A certificate is self-issued if the DNs that appear in the subject and issuer fields are identical and are not empty." The specification implies a binary comparison of the subject and issuer fields.

I think a good practice would be to use the same binary subject and issuer fields when issuing a self-issued certificate.

Identify a Self-Signed Certificate

Sounds like a stupid title; just as its name implies, it is self signed, so it is self identifiable, i.e., the public key bound into the certificate can be used to verify the digital signature of the same certificate. Definitely, that's true. OK, we get a two-step process:

1. identify that the certificate is a self-issued certificate;
2. verify the certificate's digital signature with the bound public key.

That's a correct process, but not an efficient one. Digital signature verification normally hurts performance a lot, and generally it is not necessary to verify the digital signature while building a prospective certification path.

Identify a Self-Signed Certificate Efficiently

Self-signed means, in other words, that the key bound into the certificate is the same as the key used to sign the certificate. Could we identify it by comparing the bound key with the key used to generate the certificate signature? Here come the authority key identifier extension and the subject key identifier extension; refer to RFC3280/RFC5280 for details.

To facilitate certification path construction, the specification requires that the authority key identifier extension and the subject key identifier extension MUST appear in all conforming CA certificates. "There is one exception; where a CA distributes its public key in the form of a "self-signed" certificate, the authority key identifier MAY be omitted."

With the help of the two key identifier extensions, we get the following steps:

1. identify that the certificate is a self-issued certificate;
2. for conforming CAs, if the subject key identifier extension appears but no authority key identifier extension does, it is a self-signed certificate; if both appear, it is a self-signed certificate when the KeyIdentifier values are identical;
3. for non-conforming CAs, verify the certificate's digital signature with the bound public key.
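The steps above can be sketched as follows (my own code; the OIDs are 2.5.29.14 for the subject key identifier and 2.5.29.35 for the authority key identifier, and a full implementation would DER-decode the extensions to compare the KeyIdentifier octets):

```java
import java.security.cert.X509Certificate;
import javax.security.auth.x500.X500Principal;

public class SelfSignedCheck {
    // Step 1: a certificate is self-issued iff subject and issuer match.
    // X500Principal.equals() compares names in canonical form.
    static boolean isSelfIssued(X500Principal subject, X500Principal issuer) {
        return subject.equals(issuer);
    }

    // Steps 2 and 3: try the key identifier extensions first, and fall
    // back to signature verification only for non-conforming CAs.
    static boolean isSelfSigned(X509Certificate cert) {
        if (!isSelfIssued(cert.getSubjectX500Principal(),
                          cert.getIssuerX500Principal())) {
            return false;
        }
        byte[] ski = cert.getExtensionValue("2.5.29.14");  // subject key id
        byte[] aki = cert.getExtensionValue("2.5.29.35");  // authority key id
        if (ski != null && aki == null) {
            return true;  // conforming CA that omitted the AKI: self-signed
        }
        if (ski != null && aki != null) {
            // A full implementation would DER-decode both extensions and
            // compare the KeyIdentifier octets here; for brevity we fall
            // through to the signature check below.
        }
        // Non-conforming CA: verify the signature with the bound public key.
        try {
            cert.verify(cert.getPublicKey());
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```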

Suggested Practices:

1. Always issue certificates with the subject key identifier extension and the authority key identifier extension.

2. Always include the keyIdentifier field in the authority key identifier extension.

3. Always use the same binary subject and issuer fields when issuing a self-issued certificate.

4. Only issue self-issued certificates as CA certificates.

5. For TLS, always send intermediate self-issued certificates within the response certificate list; otherwise, the recipient normally cannot build a certification path to its trust anchors.

The Problematic Practices Encountered

1. A self-signed CA certificate issues one or more self-issued end entity certificates.

There is no problem with issuing such self-issued end entity certificates, but I'm afraid many PKIX libraries are not able to handle them properly. If your application depends on a third-party PKIX library, and if you have to issue such certificates, please do check the library and make sure it supports such cases.

2. A self-signed CA certificate issues a self-issued certificate as an indirect CRL issuer.

It is a special case of a self-issued end entity certificate. Some CRL verification libraries cannot handle such an indirect CRL issuer correctly; please double check the library to make sure such an indirect CRL issuer is supported.

The Java SE SDK supports the above two problematic cases as of OpenJDK 7 build 60. If your application has to support the above cases, you need at least OpenJDK 7 build 60.

Saturday May 23, 2009

SunJSSE and TLSAES

TLSAES defines AES ciphersuites for TLS, and from TLS version 1.1 on, the AES cipher suites are merged into the TLS specification. AES supports key lengths of 128, 192 and 256 bits; however, the TLSAES specification only defines ciphersuites for 128-bit and 256-bit keys.

In the Java security context, there is an important concept, the "jurisdiction policy". The JCA framework includes the ability to enforce restrictions regarding the cryptographic algorithms and maximum cryptographic strengths available to applets/applications in different jurisdiction contexts (locations). Any such restrictions are specified in "jurisdiction policy files".

Due to import control restrictions by the governments of a few countries, the jurisdiction policy files shipped with the JDK from Sun Microsystems specify that "strong" but limited cryptography may be used. An "unlimited strength" version of these files indicating no restrictions on cryptographic strengths is available for those living in eligible countries (which is most countries). But only the "strong" version can be imported into those countries whose governments mandate restrictions. The JCA framework will enforce the restrictions specified in the installed jurisdiction policy files. 

According to the default jurisdiction policy, AES_128 counts as "strong" cryptography, but AES_256 requires the "unlimited strength" policy, which means that AES_256 cannot be used with the default installed JDK jurisdiction policy files.

From JDK 1.4.2 on, the SunJSSE provider supports a number of AES_128 and AES_256 cipher suites. In 1.4.2, the AES cipher suites were not enabled by default, even if the unlimited strength JCE jurisdiction policy files were installed. From J2SE 5.0 on, the AES_256 cipher suites are enabled automatically if the unlimited strength JCE jurisdiction policy files are installed.
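On 1.4.2 the explicit step looked roughly like the sketch below (my own illustration; an unconnected socket is enough to inspect and set cipher suites, and no connection is opened):

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class EnableAesSuites {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket()) {
            // On 1.4.2, AES suites were supported but not enabled by
            // default; enabling one explicitly looked like this:
            socket.setEnabledCipherSuites(new String[] {
                "TLS_RSA_WITH_AES_128_CBC_SHA"
            });
            System.out.println(socket.getEnabledCipherSuites()[0]);
        }
    }
}
```

On J2SE 5.0 and later the AES suites are already in the default enabled list, so this explicit step is only needed on 1.4.2.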

OK, then here comes the conclusion now:

1. For JDK 1.4.2, one needs to explicitly set the AES cipher suites in the code in order to enable them.

2. For the AES 128 cipher suites, one can use them with the default installed JDK.

3. TLS does not define AES_192 cipher suites. 

4. For the AES 256 cipher suites, one needs to install the "unlimited strength" jurisdiction policy files.

For JDK 6, the "unlimited strength" jurisdiction policy files can be downloaded from "Other Downloads" on http://java.sun.com/javase/downloads/index.jsp; there is a README file inside which describes the export/import issues and how to install the additional policies. Please DO read the export/import notes and make sure you are allowed to use those "unlimited strength" policies.
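To check which policy files are in effect, the JCA can report the maximum allowed key length directly; a quick sketch:

```java
import javax.crypto.Cipher;

public class AesPolicyCheck {
    public static void main(String[] args) throws Exception {
        // Under the default "strong" policy files this reports 128;
        // with the unlimited strength files installed it reports
        // Integer.MAX_VALUE, and the AES_256 suites become usable.
        int max = Cipher.getMaxAllowedKeyLength("AES");
        System.out.println("Max AES key length: " + max);
        System.out.println("AES_256 suites usable: " + (max >= 256));
    }
}
```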

 

FIPS 140 Compliant Mode for SunJSSE

In the Java™ 6 Security Enhancements, it says that "The SunJSSE provider now supports an experimental FIPS 140 compliant mode. When enabled and used in combination with the SunPKCS11 provider and an appropriate FIPS 140 certified PKCS#11 token, SunJSSE is FIPS 140 compliant." Beyond that, we cannot find any more documentation on how to enable FIPS mode and how the FIPS mode works with SunJSSE. Normally, developers could find a few hints in Andreas' blog entry, The Java PKCS#11 Provider and NSS, although it is far from enough to understand the FIPS mode of SunJSSE. The following is an unpublished document; hope it helps.

 

FIPS 140 Compliant Mode for SunJSSE

In Sun's Java SE implementation version 6 or later, the SunJSSE provider, which contains the SSL/TLS implementation, can be configured to operate in a FIPS 140 compliant mode instead of its default mode. This document describes the FIPS 140 compliant mode (subsequently called "FIPS mode").

Configuring SunJSSE for FIPS Mode

SunJSSE is configured in FIPS mode by associating it with an appropriate FIPS 140 certified cryptographic provider that supplies the implementations for all cryptographic algorithms required by SunJSSE. This can be done in one of the following ways:

  1. edit the file ${java.home}/lib/security/java.security and modify the line that lists com.sun.net.ssl.internal.ssl.Provider to list the provider name of the FIPS 140 certified cryptographic provider. For example if the name of the cryptographic provider is SunPKCS11-NSS, change the line from

      security.provider.4=com.sun.net.ssl.internal.ssl.Provider

    to

      security.provider.4=com.sun.net.ssl.internal.ssl.Provider SunPKCS11-NSS

    The class for the provider of the given name must also be listed as a security provider in the java.security file.

  2. at runtime, call the constructor of the SunJSSE provider that takes a java.security.Provider object as a parameter. For example, if the variable cryptoProvider is a reference to the cryptographic provider, call new com.sun.net.ssl.internal.ssl.Provider(cryptoProvider).

  3. at runtime, call the constructor of the SunJSSE provider that takes a String object as a parameter. For example if the cryptographic provider is called SunPKCS11-NSS call new com.sun.net.ssl.internal.ssl.Provider("SunPKCS11-NSS"). A provider with the specified name must be one of the configured security providers.

Within a given Java process, SunJSSE can be used either in FIPS mode or in default mode, but not both at the same time. Once SunJSSE has been initialized, it is not possible to change the mode. This means that if one of the runtime configuration options is used (option 2 or 3), the configuration must take place before any SSL/TLS operation.

Note that only the specified configured provider will be used by the SunJSSE for any and all cryptographic operations. All other cryptographic providers including those included with the Java SE implementation will be ignored and not used.

Difference Between FIPS Mode and Default Mode

In FIPS mode, SunJSSE behaves in a way identical to default mode, except for the following differences.

In FIPS mode:

  • SunJSSE will perform all cryptographic operations using the cryptographic provider that was configured as described above. This includes symmetric and asymmetric encryption, signature generation and verification, message digests and message authentication codes, key generation and key derivation, random number generation, etc.

  • If the configured cryptographic provider reports any error by throwing an exception, SunJSSE will abort the current operation and propagate the exception to the application.

  • If the configured cryptographic provider believes it had a critical error such as a self test failure per FIPS guidelines, it needs to remain in an error state until it is re-initialized. The application using the SunJSSE configured with the FIPS cryptographic module will have to be restarted. This ensures that the FIPS module will not allow critical errors to compromise security.

  • Only TLS 1.0 and later can be used. SSL 2.0 and SSL 3.0 are not available. Any attempt to enable SSL 2.0 or 3.0 will fail with an exception.

  • The list of ciphersuites is limited to those that utilize appropriate algorithms. The current list of possible ciphersuites is given below. Any attempt to enable a ciphersuite not on the list will fail with an exception.

Ciphersuites Usable in FIPS Mode

The following is the current list of ciphersuites which can be used by SunJSSE in FIPS mode with their names and the id as assigned in the TLS protocol provided that the configured cryptographic FIPS module supports the necessary algorithms. Note that although SunJSSE uses the prefix SSL_ in the name of some of these ciphersuites, this is for compatibility with earlier versions of the specification only. In FIPS mode, SunJSSE will always use TLS 1.0 or later and implement the ciphersuites as required by those specifications.

SSL_RSA_WITH_3DES_EDE_CBC_SHA             0x000a
SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA         0x0016
TLS_RSA_WITH_AES_128_CBC_SHA              0x002f
TLS_DHE_DSS_WITH_AES_128_CBC_SHA          0x0032
TLS_DHE_RSA_WITH_AES_128_CBC_SHA          0x0033
TLS_RSA_WITH_AES_256_CBC_SHA              0x0035
TLS_DHE_DSS_WITH_AES_256_CBC_SHA          0x0038
TLS_DHE_RSA_WITH_AES_256_CBC_SHA          0x0039
TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA      0xC003
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA       0xC004
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA       0xC005
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA     0xC008
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA      0xC009
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA      0xC00A
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA        0xC00D
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA         0xC00E
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA         0xC00F
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA       0xC012
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA        0xC013
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA        0xC014
TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA       0xC017
TLS_ECDH_anon_WITH_AES_128_CBC_SHA        0xC018
TLS_ECDH_anon_WITH_AES_256_CBC_SHA        0xC019

Conclusion

When SunJSSE is configured in FIPS 140 compliant mode together with an appropriate FIPS 140 certified cryptographic provider, for example Network Security Services (NSS) in its FIPS mode, SunJSSE is FIPS 140 compliant. 

 

Wednesday May 13, 2009

Please remove the unsafe dependence on Permission.toString()

Recently, we made a correction to the implementation of java.security.Permission.toString(). The specification says, "Returns a string describing this Permission. The convention is to specify the class name, the permission name, and the actions in the following format: '("ClassName" "name" "actions")'."[1] That is, the specification requires all components, ClassName, name, and actions, to be enclosed in double quotes, but the JDK implementation of this method ignored this requirement and returned the string without double quotes. The double quotes do make sense: they differentiate between a permission with name "permit to write" and empty actions, and another permission of the same class with name "permit to" and actions "write". A bug was reported against the issue[2], and the bug was fixed recently[3].

But shortly after, I received many queries reporting that the update broke some test cases. In those test cases, Permission.toString() is checked against a hard-coded string which expects no double quotes, for example,

    try {
        // ... the privileged operation under test ...
    } catch (AccessControlException ace) {
        if (ace.getPermission().toString().indexOf("FilePermission filename read") < 0) {
            // run the code for when the exception is not a FilePermission/filename/read
        } else {
            // run the code for when the exception is a FilePermission/filename/read
        }
    }

The behavior of the above code differs after the bug fix[3], because Permission.toString() now returns '("FilePermission" "filename" "read")' instead of 'FilePermission filename read'.

For those applications that depend on Permission.toString() like the above, I would suggest updating the code to use Permission.getClass().getName(), Permission.getName(), and Permission.getActions() accordingly. The above code would then look like:

    try {
        // ... the privileged operation under test ...
    } catch (AccessControlException ace) {
        Permission perm = ace.getPermission();
        if (perm.getClass().getName().equals("java.io.FilePermission") &&
                perm.getName().equals("filename") &&
                perm.getActions().equals("read")) {
            // run the code for when the exception is a FilePermission/filename/read
        } else {
            // run the code for when the exception is not a FilePermission/filename/read
        }
    }

The above code is much safer than using Permission.toString(). For this issue, I have only gotten reports on test cases; no reports on practical applications have reached me so far. But in case your application has a similar usage to the hard-coded example above, please DO remove that dependence before moving to JDK 7.
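A quick demonstration of the corrected format and of the safer component-wise access (the file name here is just an illustration):

```java
import java.io.FilePermission;

public class PermissionToStringDemo {
    public static void main(String[] args) {
        FilePermission perm = new FilePermission("data.txt", "read");
        // With the fix, each component is enclosed in double quotes,
        // e.g. ("java.io.FilePermission" "data.txt" "read")
        System.out.println(perm.toString());
        // The robust alternative: query the components individually.
        System.out.println(perm.getClass().getName());  // java.io.FilePermission
        System.out.println(perm.getName());             // data.txt
        System.out.println(perm.getActions());          // read
    }
}
```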

[1]: http://java.sun.com/javase/6/docs/api/java/security/Permission.html#toString()
[2]: http://bugs.sun.com/view_bug.do?bug_id=6549506
[3]: http://hg.openjdk.java.net/jdk7/tl/jdk/rev/b656e842e1be

 
