Monday Sep 28, 2015

Standards Corner: The IETF Publishes SCIM V2

Last Friday, the IETF published SCIM v2 as RFC7643 (SCIM Core Schema) and RFC7644 (SCIM Protocol), as well as SCIM’s use cases as RFC7642. As editor, I want to pass along my thanks to my co-authors: Morteza Ansari, Kelly Grizzle, Chuck Mortimore, and Erik Wahlstrom, and to Kepeng Li, editor of the SCIM use case document (RFC7642). I also want to thank the members and chairs of the SCIM WG, as well as the many developers over the past few years who provided valuable feedback on the specifications.

For those who aren’t already familiar with SCIM, it is a new RESTful HTTP protocol that promises to make provisioning and management of user accounts in applications and cloud providers dramatically easier. Instead of using older, more complex protocols like LDAP and X.500, or writing many different proprietary connectors, developers can now depend on an easy-to-implement RESTful protocol.

SCIM V1 began as an Open Web Foundation initiative in 2010 by Salesforce, PingIdentity, SailPoint, UnboundID, and Nexus Technology as a way to provision users to cloud service providers. Kelly Grizzle of SailPoint, one of the original SCIM spec co-authors and a co-author of SCIMv2, put it this way in his CIS2014 presentation, "SCIM: Why It’s More Important, and More Simple, Than You Think":

"SCIM was founded about [five] years ago at [CIS] in 2010. We needed a identity management protocol. A standard potocol. There’s a bunch of different protocols out there and we wanted a standard one….In May 2011, we really kicked off work under Open Web Foundation. A loose standards body to get a spec launched and off the ground. In Dec 2011, we came out with a 1.0 spec."

While SCIM v1 was a great start, there was more work to do. For example, it was not clear how implementers could define new attributes and extend SCIM. Could SCIM be used for any identity information, and not just Users? What is the schema model? How should clients and service providers negotiate the inevitable differences in the attributes they need and use about users? Did service providers have to accept data exactly as provided? If a client asked a service provider to delete a user resource, was the service provider obligated to do so? If a user is deleted and later comes back, can they regain their old profile? What should happen when a client tries to update a read-only attribute? SCIM V2 addresses all of these issues.

SCIM V2's low-friction approach makes provisioning easy to implement and use. At the Cloud Identity Summit last summer, Ian Glazer of Salesforce spoke about how the new identity standards, including SCIM, are about to have their TCP/IP moment in history:

"Standards for federated single sign-on and attribute distribution are especially strong. Historically user provisioning has not been great but it is about to get much better with SCIM 2.0. Authorization, in the form of XACML and its related profiles, is robust and capable and its adoption curve ought to be bending upwards." 

Here is a summary of some of the new features of SCIM V2:

1. SCIMv2 Has an Extensible, Robust Schema Model

It is fitting that Ian Glazer’s talk was titled "Is Identity Having Its TCP/IP Moment?", because TCP/IP became a major influence on the design of SCIM v2. By that I mean the working group, in its desire to keep things “simple”, adopted the “robustness” philosophy first proposed by Jon Postel in the development of TCP/IP:

"TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others."

For SCIM, rather than designing a schema system that throws errors for every non-conformance (as is done, for example, with XML Schema), the working group decided to have a flexible set of processing rules defined by a set of attribute characteristics. SCIM service providers are free to ignore what they do not understand or do not care about; a SCIM request fails only when something is known to be wrong. For example, a SCIM request that attempts to update an attribute the service provider has declared as readOnly may still succeed, since the attribute modification is simply ignored. This may seem trivial, but it allows clients to do simple things like use HTTP “GET” to retrieve a resource and then perform an HTTP “PUT” to put the resource back after changing one or more attributes. Rather than rejecting the request, the SCIM service provider simply picks out which changes are relevant and processes the request.
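To illustrate this behaviour, here is a minimal sketch (not from the spec; the readOnly attribute set is hypothetical) of how a service provider might merge a PUT while silently ignoring readOnly attributes:

```python
# Hypothetical set of attributes this server declares as readOnly
# (in real SCIM this is expressed by each attribute's "mutability"
# characteristic in the schema).
READ_ONLY = {"id", "meta"}

def apply_put(stored, incoming):
    """Merge an incoming PUT payload, ignoring readOnly attributes
    rather than rejecting the whole request."""
    result = dict(stored)
    for attr, value in incoming.items():
        if attr in READ_ONLY:
            continue  # silently ignore the modification; do not fail
        result[attr] = value
    return result

stored = {"id": "2819c223", "userName": "bjensen", "displayName": "Babs"}
# The client did a GET, changed displayName, and PUT the whole resource
# back -- including the readOnly "id" attribute.
incoming = {"id": "CHANGED", "userName": "bjensen", "displayName": "Barbara"}
updated = apply_put(stored, incoming)  # "id" stays "2819c223"
```

The displayName change is applied while the attempt to overwrite "id" is simply dropped, so a naive GET-modify-PUT client still works.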

For more information on robust schema, see my post: A Robust Schema Model for SCIM

2. Focus on JSON      

Support for XML was dropped early in the process. This was due in part to declining interest in XML, but I think it also enabled the change in philosophy about schema enforcement (see #1) and supported the desire to keep SCIM simple.

3. Resource Types

SCIM V2 now allows service providers to offer new resources beyond just Users and Groups. To support this, SCIM V2 defines a new endpoint, “/ResourceTypes”, listing the types of resources supported by a service provider. Each SCIM ResourceType defines the main SCIM “schema” URI, any extension schemas (e.g., the enterprise user extension), and the endpoint where the resource is located. For example, the resource type for “User” is defined as:

    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ResourceType"],
    "id": "User",
    "name": "User",
    "endpoint": "/Users",
    "description": "User Account",
    "schema": "urn:ietf:params:scim:schemas:core:2.0:User",
    "schemaExtensions": [
       "required": true

Resource types make it possible to define new objects and clarify how SCIM schema can be extended in the future. Of course, while service providers are free to invent their own, the SCIM WG may specify new schemas and resource types for new resources where interoperability is needed.

4. New PATCH Method

SCIM has a new PATCH command based on RFC6902, known as “JSON Patch”, by Paul Bryan and Mark Nottingham. In the SCIM version, a new filter syntax (the same as a SCIM query filter) may be used to select specific values of multi-valued complex attributes. This gives clients the ability to update specific sub-attributes of attributes like “addresses” (e.g., update a street name). It also improves modification of large multi-valued attributes like group “members”.
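To make this concrete, here is a sketch of such a PATCH body (the message URN comes from the SCIM v2 spec; the address value itself is illustrative), built in Python for readability:

```python
import json

# A SCIM v2 "PatchOp" message using a value filter in the path to select
# the "work" entry of the multi-valued "addresses" attribute.
patch = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [{
        "op": "replace",
        "path": 'addresses[type eq "work"].streetAddress',
        "value": "911 Universal City Plaza"  # illustrative value
    }]
}
# This body would be sent as: PATCH /Users/{id}
body = json.dumps(patch, indent=2)
```

Only the matching address sub-attribute is touched; the rest of the resource, including the other address values, is left alone.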

5. Clarifications on Authentication and Security

Many readers coming to SCIM from LDAP wondered how SCIM does a “bind” operation. The specification now clarifies that SCIM is just a normal HTTP service and, as such, can use all of the standard HTTP authentication schemes, including OAuth. While SCIM doesn’t itself do “authentication”, SCIM servers may be used by authentication systems to retrieve credential information and match password values as part of an authentication service architecture.

6. Search

One ongoing concern some adopters had with SCIMv1.1 was that it supported searching only via HTTP GET. Since SCIM’s primary objective is the provisioning of personal information, exposing personal information in URLs as part of a GET request was concerning. Section 9.4 of the HTTP specification, RFC7231, tells more of the story:

9.4. Disclosure of Sensitive Information in URIs

URIs are intended to be shared, not secured, even when they identify secure resources. URIs are often shown on displays, added to templates when a page is printed, and stored in a variety of unprotected bookmark lists. It is therefore unwise to include information within a URI that is sensitive, personally identifiable, or a risk to disclose. Authors of services ought to avoid GET-based forms for the submission of sensitive data because that data will be placed in the request-target. Many existing servers, proxies, and user agents log or display the request-target in places where it might be visible to third parties. Such services ought to use POST-based form submission instead. Since the Referer header field tells a target site about the context that resulted in a request, it has the potential to reveal information about the user's immediate browsing history and any personal information that might be found in the referring resource's URI. Limitations on the Referer header field are described in Section 5.5.2 to address some of its security considerations.

To address this concern, SCIM v2 supports searching using the HTTP POST method with “/.search” appended to a SCIM URI, indicating that the client is performing a search request rather than creating a new resource. Further, because “/.search” can be applied anywhere, SCIM can now support a search request against a specific resource type endpoint (e.g., /Users) or against the entire server (from the root).
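For example, a filter that would otherwise leak into a GET URL can instead travel in a POSTed search body. A sketch (the message URN is from the SCIM v2 spec; the filter and attribute list are illustrative):

```python
import json

# A SCIM v2 SearchRequest, sent as: POST /Users/.search
# (or POST /.search at the root to search across all resource types).
search = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:SearchRequest"],
    "filter": 'userName sw "J"',              # illustrative filter
    "attributes": ["userName", "displayName"],
    "count": 10
}
body = json.dumps(search)  # travels in the POST body, not the URL
```

Because the query lives in the request body, it never appears in server logs, proxy logs, or Referer headers the way a GET request-target would.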

For more information about SCIM, see the SCIM web page, where you'll find an overview, a list of implementations, and links to the specifications.

Tuesday Feb 24, 2015

Standards Corner: A 'Robust' Schema Approach for SCIM

Last week, I had a question about SCIM (System for Cross-domain Identity Management): how does the working group recommend handling message validation? Doesn't SCIM have a formal schema?

In trying to answer, I realized that the question assumed a different style of schema than SCIM supports. It assumed that “schema” means what it means in XML: a way to validate documents.

Rather than focusing on validation, SCIM’s model for schema is closer to what one would describe as a database schema, much like many other identity management directory systems of the past. Yet SCIM isn't merely a new web protocol for accessing a directory; it is also meant to enable easy provisioning for web applications. The SCIM schema model is "behavioural": it defines the attributes, and the associated attribute qualities, that a particular server supports. Do clients need to discover schema? Generally speaking, they do not. Let’s take a closer look at schema in general and at how SCIM’s approach handles cross-domain schema issues.

[Read More]

Tuesday Dec 16, 2014

Standards Corner: IETF SCIM Working Group Reaches Consensus

Today in the Standards Corner, Phil Hunt blogs about the recent consensus call for SCIM (System for Cross-domain Identity Management). What is new about it? How does SCIM relate to LDAP? [Read More]

Wednesday Jun 18, 2014

Standards Corner: IETF Revisits HTTP/1.1

HTTP has been one of the most successful IETF specifications aside from the Internet itself. When HTTP/1.1 was standardized in 1999, its authors had no idea how big and how widely used it would become. For many years the focus was on the evolving World Wide Web and HTML. The web itself went through many transformations with the introduction of Ajax and then HTML5 by the W3C. Meanwhile, non-browser use of HTTP has been steadily growing, especially with the exploding popularity of smart devices, the Internet of Things, and in particular RESTful APIs.

Last week, the IETF officially did away with RFC2616, the main specification document that defined HTTP/1.1. RFC2616 has been broken up into 6 specifications, RFC7230 through 7235.

[Read More]

Friday May 30, 2014

Standards Corner: Preventing Pervasive Monitoring

On Wednesday night, I watched NBC’s interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's ‘gotofail’ bug, the OpenSSL Heartbleed bug, not to mention Java’s zero-day bug, and others. Snowden’s information showed that the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF. [Read More]

Wednesday Apr 09, 2014

Standards Corner: Basic Auth MUST Die!

Basic Authentication (part of RFC2617) was developed along with HTTP/1.1 (RFC2616) when the web was relatively new. The specification envisioned that user-agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header. This form of authentication is still being requested today. Why? [Read More]
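To see why Basic should die, consider what it actually sends: the credentials are merely base64-encoded, not encrypted. A short sketch, using the classic example credentials from the HTTP authentication specs:

```python
import base64

def basic_auth_header(user, password):
    """Build an RFC2617 Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

header = basic_auth_header("Aladdin", "open sesame")
# header == "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="

# Anyone who can see the header (a proxy, a log file) can trivially
# recover the original password:
decoded = base64.b64decode(header.split(" ", 1)[1]).decode("utf-8")
# decoded == "Aladdin:open sesame"
```

Base64 is an encoding, not a cipher: without TLS, every request replays the reusable password in a form any observer can reverse in one line.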

Thursday Mar 13, 2014

Standards Corner: Maturing REST Specifications and the Internet of Things

Last week was the IETF's 89th meeting in London. Phil Hunt summarizes news relating to RESTful services (OAuth, JOSE, JSON, SCIM) and new work beginning at the IETF on an authorization standard for the Internet of Things.[Read More]

Tuesday Nov 05, 2013

Standards Corner: OAuth WG Client Registration Problem

Phil Hunt is an active member of multiple industry standards groups and committees (see the brief bio at the end of the post) and has spearheaded the discussion, creation, and ratification of industry standards including the Kantara Identity Governance Framework, among others. As he is an active voice in the industry standards development world, we have invited him to share his thoughts, news, and updates, and to discuss use cases and implementation success stories (and even failures) around industry standards in this monthly column.

Author: Phil Hunt

This afternoon, the OAuth Working Group will meet at IETF88 in Vancouver to discuss several topics important to the maturation of OAuth. One of them is the OAuth client registration problem.

OAuth (RFC6749) was initially developed with a simple deployment model in which there is only a single, monopoly cloud instance of a web API (e.g. there is one Facebook, one Google, one LinkedIn, and so on). When the API publisher and API deployer are the same monolithic entity, it is easy for developers to contact the provider and register their app to obtain a client_id and credential.

But what happens when the API belongs to an open source project where there may be thousands of deployed copies (e.g. WordPress)? In these cases, the authors of the API are not the people running it. In these scenarios, how does the developer obtain a client_id?

An example of an "openly deployed" API is OpenID Connect. Connect defines an OAuth-protected resource API that can provide personal information about an authenticated user -- in effect creating a common API across identity providers like Facebook, Google, Microsoft, Salesforce, or Oracle. In Oracle's case, Fusion applications will soon have RESTful APIs that are deployed in many different ways in many different environments. How will developers write apps that can work against an openly deployed API with whose operator the developer has no prior relationship?

At present, the OAuth Working Group has two proposals to consider:

Dynamic Registration

Dynamic Registration was originally developed for OpenID Connect and UMA. It defines a RESTful API in which a prospective client application with no client_id creates a new client registration record with a service provider and is issued a client_id and credential along with a registration token that can be used to update registration over time.

As proof of success, the OIDC community has substantially implemented this spec and is committed to its use. Why not approve it?

Well, the answer is that some of us had some concerns, namely:
  1. Recognizing instances of software - dynamic registration treats all clients as unique. It has no defined way to recognize that multiple copies of the same client are being registered, other than assuming that if the registration parameters are similar it might be the same client.
  2. Versioning and Policy Approval of open APIs and clients - many service providers have to worry about change management. They expect to have approval cycles that approve versions of server and client software for use in their environment. In some cases approval might be wide open, but in many cases, approval might be down to the specific class of software and version.
  3. Registration updates - when does a client actually need to update its registration? Shouldn't it be never? Is there some characteristic of deployed code that would cause it to change?
  4. Options lead to complexity - because each client is treated as unique, it becomes unclear how the clients and servers will agree on what credentials forms are acceptable and what OAuth features are allowed and disallowed. Yet the reality is, developers will write their application to work in a limited number of ways. They can't implement all the permutations and combinations that potential service providers might choose.
  5. Stateful registration - if the primary motivation for registration is to obtain a client_id and credential, why can't this be done in a stateless fashion using assertions?
  6. Denial of service - with so much stateful registration and the need for multiple tokens to be issued, will this not create a denial of service risk or resource depletion? At the very least, because of the information gathered, it would be difficult for service providers to clean up "failed" registrations and to distinguish active clients from inactive or false ones.
  7. There has yet to be much wide-scale "production" use of dynamic registration other than in small closed communities.

Client Association

A second proposal, Client Association, has been put forward by Tony Nadalin of Microsoft and myself. We took a look at existing usage patterns to come up with a new proposal. At the Berlin meeting, we considered how WS-STS systems work. More recently, I reviewed how mobile messaging clients work. I looked at how Apple, Google, and Microsoft each handle registration with APNS, GCM, and WNS, and a similar pattern emerges: use an existing credential (mutual TLS auth) or a client bearer assertion, and swap it for a device-specific bearer assertion.

In the client association proposal, the developer registers with the API publisher (as opposed to the party deploying the API) and obtains a software "statement". Or, if there is no "publisher" that can sign a statement, the developer may include a self-asserted software statement.

A software statement is a special type of assertion that locks application registration profile information into a signed assertion. The statement is included with the client application and can then be used by the client to swap for an instance-specific client assertion, as defined by section 4.2 of the OAuth Assertion draft and profiled in the Client Association draft. The software statement provides a way for service providers to recognize, and configure policy to approve, classes of software clients, and it simplifies the actual registration to a simple assertion swap. Because the registration is an assertion swap, registration is no longer "stateful" - meaning the service provider does not need to store any information to support the client (unless it wants to).
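As a rough illustration, a self-asserted software statement's payload might carry claims like the following. The claim names are modeled on what later became standardized client registration metadata; treat them, and the values, as illustrative rather than as the draft's exact wording:

```python
import json

# Illustrative claims for a software statement: metadata describing a
# *class* of client software, not one deployed instance.
statement_claims = {
    "software_id": "4NRB1-0XZABZI9E6-5SM3R",   # hypothetical identifier
    "client_name": "Example Mobile App",
    "redirect_uris": ["https://client.example.org/callback"],
    "grant_types": ["authorization_code"]
}
# In practice this payload would be signed (e.g. as a JWT) by the API
# publisher, or self-signed by the developer, and later swapped at the
# service provider for an instance-specific client assertion.
payload = json.dumps(statement_claims)
```

Because the same signed statement ships with every copy of the app, a service provider can recognize and approve the software class once, instead of treating each installed instance as a brand-new, unknown client.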

Has this been implemented yet? Not directly. We've only delivered draft 00 as an alternate way of solving the problem using well-known patterns whose security characteristics and scale characteristics are well understood.

Dynamic Take II

At roughly the same time that Client Association and Software Statement were published, the authors of Dynamic Registration published a "split" version of the spec (draft-richer-oauth-dyn-reg-core and draft-richer-oauth-dyn-reg-management). While some of the concerns above are addressed, some differences remain. Registration is now a simple POST request. However, it defines a new method for issuing client tokens, whereas Client Association uses RFC6749's existing extension point. The concern here is whether future client access token formats would be handled properly. Finally, dyn-reg-core does not yet support software statements.


The WG has some interesting discussion ahead of it to bring this back to a single set of specifications. Dynamic Registration has significant implementations, but Client Association could be a much better way to simplify implementation of the overall OpenID Connect specification and improve adoption. In fairness, the existing editors have already come a long way. Yet there are those with significant investment in the current draft. And there are many who have said they don't care; they just want a standard. There is a lot of pressure on the working group to reach consensus quickly.

And that folks is how the sausage is made.

Note: John Bradley and Justin Richer recently published draft-bradley-stateless-oauth-client-00, which on first look gets closer. Some of the details seem less well defined, but the same could be said of client-assoc and software-statement. I hope we can merge these specs this week.

About the Writer:

Phil Hunt joined Oracle as part of the November 2005 acquisition of OctetString Inc., where he headed software development for what is now Oracle Virtual Directory. Since joining Oracle, Phil has worked as a CMTS in the Identity Standards group, where he developed the Kantara Identity Governance Framework and provided significant input to JSR 351. Phil participates in several standards development organizations, such as the IETF and OASIS, working on federation, authorization (OAuth), and provisioning (SCIM) standards. Phil's Twitter handle is @independentid.

