Friday Apr 06, 2012

Securing an ADF Application using OES11g: Part 2

To validate the integration with OES we need a sample ADF Application that is rich enough to allow us to test securing the various ADF elements.  To achieve this we can add some items including bounded task flows to the application developed in this tutorial. A sample JDeveloper 11.1.1.6 project is available here. It depends on the Fusion Order Demo (FOD) database schema which is easily created using the FOD build scripts.

In the deployment we have chosen to enable only ADF Authentication, as we will delegate authorization, for the most part, to OES.  It is possible to integrate ADF authentication with Oracle Access Manager, as explained here, for example.

The welcome page of the application with all the links exposed looks as follows:




The Welcome, Browse Products, Browse Stock and System Administration links go to pages while the Supplier Registration and Update Stock are bounded task flows.  The Login link goes to a basic login page and once logged in a link is presented that goes to a logout page.  Only the Browse Products and Browse Stock pages are really connected to the database--the other pages and task flows do not really perform any operations on the database.

Required Security Policies

We make use of a set of test users and roles as described on the welcome page of the application.  In order to exercise the different authorization possibilities we would like to enforce the following sample policies:

  1. Anonymous users can see the Login, Welcome and Supplier Registration links. They can also see the Welcome page, the Login page and follow the Supplier Registration task flow.  They can see the icon adjacent to the Login link indicating whether they have logged in or not.
  2. Authenticated users can see the Browse Products page.
  3. Only staff granted the appropriate right can see the cost price value returned from the database on the Browse Products page, and then only if the value is below a configurable limit.
  4. Suppliers and staff can see the Browse Stock links and pages.  Customers cannot.
  5. Suppliers can see the Update Stock link but only those with the update permission are allowed to follow the task flow that it launches.  We could hide the link, but we leave it exposed here so we can easily demonstrate the method call activity protecting the task flow.
  6. Only staff granted the appropriate right can see the System Administration link and the System Administration page it accesses.

Implementing the required policies

In order to secure the application we will make use of the following techniques:

  • EL Expressions and Java backing beans: JSF has the notion of EL expressions to reference data from backing Java classes.  We use these to control the presentation of links on the navigation page so that it respects the security constraints: a user will not see links that he is not allowed to follow.  These Java backing beans can call out to OES for an authorization decision.  Important Note: naturally we would configure the WLS domain where our ADF application is running as an OES WLS SM, which would allow us to efficiently query OES over the PEP API.  However versioning conflicts between OES 11.1.1.5 and ADF 11.1.1.6 mean that this is not possible.  Nevertheless, we can make use of the OES RESTful gateway technique from this posting in order to call into OES.
You can easily create and manage backing beans in JDeveloper as follows:
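
For illustration, here is a minimal sketch of what such a backing bean property might look like; the gateway URL, the helper name askOES() and the response handling are my assumptions based on the RESTful gateway posting below, not the project's actual code:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;
  import javax.faces.context.FacesContext;

  // Managed bean backing EL expressions such as #{oesBackingBean.UIAccessSysAdmin}.
  public class OESBackingBean {

      // Assumed gateway location; in the sample app you edit this before deploying.
      private static final String GATEWAY = "http://localhost:7001/oesgateway";

      public boolean isUIAccessSysAdmin() {
          String user = FacesContext.getCurrentInstance()
                            .getExternalContext().getRemoteUser();
          return askOES("/MyADFApp/ui/sysadmin", "view", user);
      }

      // Issue an HTTP GET of the form <gateway>/<resource>/<action>?user=<user>
      // and treat a response containing "true" as an allow decision (assumed format).
      private boolean askOES(String resource, String action, String user) {
          try {
              URL url = new URL(GATEWAY + resource + "/" + action + "?user=" + user);
              BufferedReader in = new BufferedReader(
                  new InputStreamReader(url.openStream()));
              StringBuilder resp = new StringBuilder();
              for (String line; (line = in.readLine()) != null;) {
                  resp.append(line);
              }
              in.close();
              return resp.toString().contains("true");
          } catch (Exception e) {
              return false; // fail closed on any error
          }
      }
  }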

  • Custom ADF Phase Listener: ADF extends the JSF page lifecycle and allows one to hook into the flow to intercept page rendering.  We use this to put a check in place prior to rendering any protected pages, again calling out to OES via the backing bean.  Phase listeners are configured in the adf-settings.xml file.  See the MyPageListener.java class in the project.  Here, for example, is the code we use in the listener to check for allowed access to the sysadmin page, navigating back to the welcome page if authorization is not granted:

  // page is the view id about to be rendered; fc is the current FacesContext
  if (page != null && (page.equals("/system.jspx") || page.equals("/system"))) {
      System.out.println("MyPageListener: Checking Authorization for /system");

      // evaluate the backing bean EL expression, which calls out to OES
      if (getValue("#{oesBackingBean.UIAccessSysAdmin}").toString().equals("false")) {
          System.out.println("MyPageListener: Forcing navigation away from system" +
                             " to welcome");
          NavigationHandler nh = fc.getApplication().getNavigationHandler();
          nh.handleNavigation(fc, null, "welcome");
      } else {
          System.out.println("MyPageListener: access allowed");
      }
  }

  • Method call activity: our app makes use of bounded task flows to implement the sequences of pages that update the stock or allow suppliers to self register.  ADF ensures that a bounded task flow can only be entered through its single entry point.  So a way to protect all the pages in the flow is to make a call to OES in the first activity and then either exit the task flow or continue depending on the authorization decision.  The method call returns a String which contains the name of the transition to effect.  This is where we configure the method call activity in JDeveloper:


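For illustration, a minimal sketch of the method behind such an activity might look as follows, reusing the askOES() helper sketched earlier; the outcome names "allowed" and "denied" are assumptions and must match the control flow cases drawn in the task flow diagram:

  // Referenced from the method call activity via an EL expression such as
  // #{oesBackingBean.isUIAccessSupplierUpdateTransition}; the task flow
  // routes on the String outcome that is returned.
  public String isUIAccessSupplierUpdateTransition() {
      String user = FacesContext.getCurrentInstance()
                        .getExternalContext().getRemoteUser();
      // Policy 5: check for the update permission on the stock resource
      boolean ok = askOES("/MyADFApp/ui/stock", "update", user);
      return ok ? "allowed" : "denied";   // hypothetical outcome names
  }
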
We implement each of the policies using the above techniques as follows:

  • Policies 1 and 2: as these policies concern the coarse grained notions of controlling access for anonymous and authenticated users, we can make use of the container’s security constraints, which can be defined in the web.xml file.  The allPages constraint is added automatically when we configure Authentication for the ADF application.  We have added the “anonymousss” constraint to allow access to the required pages, task flows and icons:
<security-constraint>
    <web-resource-collection>
      <web-resource-name>anonymousss</web-resource-name>
      <url-pattern>/faces/welcome</url-pattern>
      <url-pattern>/afr/*</url-pattern>
      <url-pattern>/adf/*</url-pattern>
      <url-pattern>/key.png</url-pattern>
      <url-pattern>/faces/supplier-reg-btf/*</url-pattern>
      <url-pattern>/faces/supplier_register_complete</url-pattern>
    </web-resource-collection>
  </security-constraint>
  • Policy 3: we can place an EL expression on the element representing the cost price on the products.jspx page: #{oesBackingBean.dataAccessCostPrice}. This EL expression references a method in a Java backing bean that will call out to OES for an authorization decision.  In OES we model the authorization requirement by requiring the view permission on the resource /MyADFApp/data/costprice and granting it only to the staff application role.  We recover any obligations to determine the limit (see the sketch after this list).
  • Policy 4: is implemented by putting an EL expression on the Browse Stock link #{oesBackingBean.UIAccessBrowseStock} which checks for the view permission on the /MyADFApp/ui/stock resource. The stock.jspx page is protected by checking for the same permission in a custom phase listener—if the required permission is not satisfied then we force navigation back to the welcome page.
  • Policy 5: the Update Stock link is protected with the same EL expression as the Browse Stock link: #{oesBackingBean.UIAccessBrowseStock}.  However the Update Stock link launches a bounded task flow, and to protect it the first activity in the flow is a method call activity which executes an EL expression, #{oesBackingBean.isUIAccessSupplierUpdateTransition}, to check for the update permission on the /MyADFApp/ui/stock resource and either transition to the next step in the flow or terminate the flow with an authorization error.
  • Policy 6: the System Administration link is protected with an EL expression #{oesBackingBean.UIAccessSysAdmin} that checks for view access on the /MyADFApp/ui/sysadmin resource.  The system page is protected in the same way as the stock page—the custom phase listener checks for the same permission that protects the link and if not satisfied we navigate back to the welcome page.
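
To make the obligation handling in Policy 3 concrete, here is a sketch of the cost price check written against the OpenAZ PEP API; the call shape follows the PEP API snippet in the RESTful gateway posting below, but in the sample app the decision is actually fetched over the REST gateway because of the version conflict noted earlier, and the "limit" obligation key is an assumption:

  // Core of the check behind #{oesBackingBean.dataAccessCostPrice}: show the
  // cost price only if OES allows it and the value is under the limit carried
  // in the obligation.
  public boolean isCostPriceVisible(String user, double costPrice) throws Exception {
      Map context = new HashMap();
      PepResponse response =
          PepRequestFactoryImpl.getPepRequestFactory().newPepRequest(
              user, "view", "/MyADFApp/data/costprice", context).decide();
      if (!response.allowed()) {
          return false;
      }
      Map obligations = response.getObligations();
      Object limit = obligations.get("limit");   // hypothetical obligation key
      return limit == null || costPrice < Double.parseDouble(limit.toString());
  }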

Testing the Application

To test the application:

  • deploy the OES11g Admin to a WLS domain
  • deploy the OES gateway in another domain configured to be a WLS SM. You must ensure that the jps-config.xml file therein is configured to allow access to the identity store, otherwise the gateway will not be able to resolve the principals for the requested users.  To do this ensure that the following elements appear in the jps-config.xml file:
    <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
      <description>LDAP-based IdentityStore Provider</description>
    </serviceProvider>
    <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
      <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
      <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
    </serviceInstance>
    <serviceInstanceRef ref="idstore.ldap"/>
  • download the sample application, change the URL in the MyADFApp OESBackingBean code to point to the OES Gateway, and deploy the application to an 11.1.1.6 WLS domain that has been extended with the ADF JRF files. You will need to configure the FOD database connection to point to your database containing the FOD schema.
  • populate the OES Admin and OES Gateway WLS LDAP stores with the sample set of users and groups.  If you have configured both WLS domains to point to the same LDAP then this only has to be done once.  To help with this there is a directory called ldap_scripts in the sample project with ldif files for the test users and groups.
  • start the OES Admin console and configure the required OES authorization policies for the MyADFApp application and push them to the WLS SM containing the OES Gateway.
  • Log in to the MyADFApp as each of the users described on the login page to test that the security policy is correct.
  • You will see informative logging from the OES Gateway and the ADF application to their respective WLS consoles.
  • Congratulations, you may now log in to the OES Admin console and change the policies that control the behaviour of your ADF application--change the limit value in the obligation for the cost price, for example, or define Role Mapping policies to determine staff access to the system administration page based on user profile attributes.

Some ADF Development Notes

Some general notes on ADF development which I encountered while developing the sample application:

  • You may need -Djps.app.credential.overwrite.allowed=true on WLS startup in order to allow the credentials for the database to be overwritten; the signal that you need it is an error when trying to access the database.
  • It is best to call bounded task flows via a CommandLink (as opposed to a go link), as you cannot seem to start them again from a go link, even after having completed the task flow correctly with a return activity.
  • Once a bounded task flow (BTF) is initiated it must complete correctly via a return activity—attempting to click on any other link whilst in the context of a BTF has no effect.  See here for an example.
  • When using the ADF Authentication only security approach it seems to be awkward to allow anonymous access to the welcome and registration pages.  We can achieve anonymous access using the web.xml security constraint shown above (where no auth-constraint is specified), however it is not clear exactly what needs to be listed in there--for example the /afr/* and /adf/* entries are there by trial and error, as sometimes the welcome page will not render if we omit them.  I was not able to use the default allPages constraint with, for example, the anonymous-role or the everyone WLS group in order to allow anonymous access to pages.
  • The ADF security best practice advises placing all pages under the public_html/WEB-INF folder: ADF will then not allow any direct access to the .jspx pages but will only allow access via a link of the form /faces/welcome rather than /faces/welcome.jspx.  This seems like a very good practice to follow, as having multiple entry points to data is a source of confusion in a web application (particularly from a security point of view).
  • In Authentication+Authorization mode only pages with a page definition file are protected.  To add an empty one, right click on the page and choose Go to Page Definition.  This creates an empty page definition and the page will now require explicit permission to be seen.
  • It is advisable to give the application a unique context root via weblogic.xml, as otherwise it will clash with any other application deployed with the same context root and will not deploy.

Securing an ADF Application using OES11g: Part 1

Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES).

In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6.

ADF Security Basics

ADF Basics

The Application Development Framework (ADF) is Oracle’s preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components.

ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However ADF also has its own data access layer, ADF Business Components (ADF BC), that allows rapid integration of data from databases and web service interfaces into the ADF UI components.  In this way ADF helps implement the MVC approach to building applications with UI and data components.

The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy.  One has an application up and running very quickly with the ability to quickly integrate changes to, for example, the DB schema.

ADF allows web pages to be created graphically, and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF Oracle has added drag and drop tooling in JDeveloper and declarative binding of the UI to the data layer, be it database, web service or Java beans.  An important addition is the bounded task flow, which is a reusable set of pages and transitions.  ADF also adds some steps to the page lifecycle defined in JSF and extra widgets including powerful visualizations.

It is worth pointing out that the Oracle Web Center product (portal, content management and so on) is based on and extends ADF.

ADF Security

ADF comes with its own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time.

The security elements that need to be addressed in an ADF application are: authentication, authorization of access to web pages, task-flows, components within the pages and data being returned from the model layer.

One typically relies on WLS to handle authentication, and because of this users and groups will also be handled by WLS.  Typically in a dev environment, users and groups are stored in the WLS embedded LDAP server.

One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not:

In the case where authorization is enabled for ADF one defines a set of roles in which we place users and then we grant access to these roles to the different ADF elements (pages or task flows or elements in a page).

An important notion here is the difference between Enterprise Roles and Application Roles.  The idea behind an enterprise role is that it is defined in terms of users and LDAP groups from the WLS identity store--“enterprise” in the sense that these are things available for use by all applications that use that store.  The other kind of role is an Application Role, and the idea is that a given application will make use of enterprise roles and users to build up a set of roles for its own use.  These application roles will be available only to that application.  The general idea here is that the enterprise roles are relatively static (for example an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, accessed resource and so on.  One of the things that OES adds is that we can define these dynamic membership conditions in Role Mapping Policies.

To make this concrete, here is how one assigns these rights at design time in JDeveloper, which puts them into a file called jazn-data.xml:


When the ADF app is deployed to a WLS this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles and to the WLS backing LDAP for the users and enterprise roles.  Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server.  But the policies and application roles are defined in the system-jazn-data.xml file. 

Consult the embedded WLS LDAP server to manage users and enterprise roles by going to the domain console and then Security Realms->myrealm->Users and Groups:


For production environments (or in future to share this data with OES) one would then perform the operation of “reassociating” this security policy and application role data to a DB schema (or an LDAP).  This is done in the EM console by reassociating the Security Provider.  This blog posting has more explanations and references on this reassociation process.

If ADF Authentication and Authorization are enabled then the security policies for a deployed application can be managed in EM.  Our goal is to be able to manage security policies for the application via OES and its console instead.

Security Requirements for an ADF Application

With this package tour of ADF security behind us, we can see that to secure an ADF application we would expect to be able to take care of at least the following items:

  1. Authentication, including a user and user-group store
  2. Authorization for page access
  3. Authorization for bounded Task Flow access.  A bounded task flow has only one point of entry and so if we protect that entry point by calling to OES then all the pages in the flow are protected. 
  4. Authorization for viewing data coming from the data access layer

In the next posting we will describe a sample ADF application and required security policies.

References

  1. ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework: Enabling ADF Security in a Fusion Web Application
  2. Oracle tutorial on securing a sample ADF application, appears to require ADF 11.1.2
  3. Now, securely deploying your secured ADF application

Tuesday Apr 03, 2012

Downloading stuff from Oracle: an example

Introduction

Oracle has a lot of software on offer.  Components of the stack can evolve at different rates and different versions of the components may be in use at any given time.  All this means that even the process of downloading the bits you need can be somewhat daunting.  Here, by way of example, and hopefully to convince you that there is method in the downloading madness, we describe how to go about downloading the bits for Oracle Identity Manager (OIM) 11.1.1.5.

Firstly, a preliminary point:

  • Folks with Oracle products already installed and looking for bug fixes, patch bundles or patch sets would go directly to the Oracle support website.

Downloading Oracle Identity Manager 11.1.1.5    

To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: go to the Fusion Certification Page and then to the “System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1” link.


Let’s assume you have a 64 bit Linux machine and an Oracle database already.  Then our goal is to end up with a list of files like the following:


jdk-6u29-linux-x64.bin                    (Java JDK)
V26017-01.zip                             (the Repository Creation Utility to create the DB schemas)
wls1035_generic.jar                       (the Weblogic Application Server)
ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Management bits)
ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)
ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip
jdevstudio11115install.exe                (optional: JDeveloper IDE)
soa-jdev-extension.zip                    (optional: SOA extensions for JDeveloper)


Downloading the bits

1.    Download the Java JDK, 64 bit version 1.6.0_24+.
2.    Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page but no link is provided.  Do not panic.  Due to the amount and turnover of software available only the latest versions are available for download from the main Oracle site.  Over time software gets moved on to the Oracle edelivery site and it is here that we find the RCU version we require:

a.    Go to edelivery: https://edelivery.oracle.com
b.    Choose Pack ‘Oracle Fusion Middleware’ and ‘Linux x86-64’
c.    Click on ‘Oracle Fusion Middleware 11g Media Pack for Linux x86-64’
d.    Download: ‘Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86’ (V26017-01.zip)

3.    Download the Weblogic Application Server: in this step we will take the "Generic" distribution (with no bundled JVM) which is suitable for use on 64 bit systems: WLS 10.3.5
4.    Download the Oracle Identity Manager bits: one point to clarify here is that currently the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts.  We need to be careful not to confuse the two, in particular to be clear which of the trains is being referred to by the documentation:

a.   So, with this in mind, go to ‘ Oracle Identity and Access Management (11.1.1.5.0)’ and download Disk1.

5.    Download the SOA bits:

a.    Go to the edelivery area as for the RCU in step 2 and download:
i.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2)
ii.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2)

6.    You will want to download some development tooling (for plugins or BPEL workflow development):

a.    Download JDeveloper 11.1.1.5 (11.1.1.6 may work but it is best to stick to the version that corresponds to the WLS version we are using)
b.    Go to the site for SOA tools and download the SOA Composite Editor 11.1.1.5

That’s it, you may proceed to the installation.

Monday Oct 24, 2011

Using OES11g from the Oracle Database

Future versions of the Oracle database may include integration with Oracle Entitlement Server for fine grained authorization configuration.  However with the current versions (11g, 10g for example) any such integration is not available out of the box.

How then might we use OES to protect data in an Oracle database?

As an example, consider a stored procedure querying and returning values from some database tables.  We can use OES to provide authorization on this data in the following way: we ask OES for an authorization decision and if the decision is allow we interpret any obligations as additional where clauses that we use to constrain the queries.

With OES, performance is best if we can make the call from a Java SM client, as we can then benefit from local copies of the authorization policies.  This reduces the overhead of the call to OES to the microsecond level.  So in the case of our stored procedure, if it is being invoked from Java, then it would be best to call OES from Java and pass any constraints in to the stored procedure as parameters.  However this may not always be possible.

In this case we can use the facility of the Oracle database to load Java classes and call them from stored procedures.  As OES offers a web service XACML interface, one could load some Java code that calls out to OES.  Another technique would be to use the OES gateway from my previous posting, calling it with a very simple Java class that does an HTTP GET for the appropriate URL.
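
A minimal sketch of such a class might look like the following; the class name, the gateway location and the response handling are assumptions, and it is written against the 1.4 JDK (StringBuffer, no generics) so that it also loads into a 10g database:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;

  // Loaded into the database with loadjava and exposed to SQL via a function
  // wrapper, so a stored procedure can fetch the OES decision and obligations.
  public class OESQuery {

      // Performs an HTTP GET against the OES gateway and returns the raw
      // response for the caller to interpret, e.g. as extra where clauses.
      public static String authorize(String resource, String action, String user)
              throws Exception {
          URL url = new URL("http://localhost:7001/oesgateway" + resource
                            + "/" + action + "?user=" + user);
          BufferedReader in = new BufferedReader(
              new InputStreamReader(url.openStream()));
          StringBuffer resp = new StringBuffer();   // JDK 1.4: no StringBuilder
          String line;
          while ((line = in.readLine()) != null) {
              resp.append(line);
          }
          in.close();
          return resp.toString();
      }
  }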

This set of scripts and SQL demonstrates how to load such a Java class into the Oracle database and configure a database function so that the Java class can be called from a sample SQL script.  The Oracle 10g database has a 1.4 JDK, so on 10g the class must be compiled with JDK 1.4; Oracle Database 11g has a 1.5 JDK, so JDK 1.5 can be used in that case.

 The steps to prepare the example are:

  • compile the Java class with the appropriate JDK (use the mk.sh script)
  • load the Java classes into the database (use the oes-loadjava.sh script)
  • run the database preparation script to define a function to interface to the Java class and allow the database user appropriate permissions to execute Java code.  The example assumes the user SYSTEM is being used.
  • run the SQL script to show the usage of the call to OES.  It runs a raw query on the scott.emp table with no constraints and then it calls the Java class, which will call the OES Gateway to recover an authorization decision from OES.  If there are obligations returned, the procedure looks for an obligation key of scott.emp and, if it finds one, uses the value--for example (sal<2000)--as a constraint when querying the table.



A RESTful interface to Oracle Entitlements Server 11g

OES's job is to provide authorization decisions to clients.  Clients may use a Java API implementing the OpenLiberty OpenAZ PEP API interface as well as an XACML web service interface.

However it may be easier for some clients to query OES over a REST style API.  This applies particularly to old Java or non-Java clients or to clients running in more restricted environments such as smart phones or other embedded devices.  Another example would be a stored procedure running in a database.  The ability to query a server using the REST style--essentially using HTTP URLs--simplifies the client by reducing dependencies and simplifying the interface.

In this posting I demonstrate a gateway to OES that presents a natural REST style interface on top of the OES client PEP API.

The PEP API

Firstly, recall the OpenAZ PEP API.  A typical call to the Java client API looks something like the following:

  String user = "glen.byrne";
  String action = "execute";
  String resource = "TradingApp/Options/BlackScholes/UK-Gas";
  Map contextMap = new HashMap();
  contextMap.put("level", "5");
  contextMap.put("speciality", "Energy");
  PepResponse response =
    PepRequestFactoryImpl.getPepRequestFactory().newPepRequest(
      user,                  // subject
      action,                // action
      resource,              // resource
      contextMap).decide();  // context attributes map
  System.out.println("Response from OES for request: {" + user + ", " +
    action + ", " + resource + "} \nResult: " + response.allowed());
  Map obligations = response.getObligations();

The request asks whether a given subject (with a context map of attribute value pairs) has the right to carry out an action on a specific resource.  The so-called obligations are a list of key/value pairs, returned with an allow decision, which the client may interpret as refinements to the authorization decision.  For example, they might provide a where clause telling a database client which rows of a table a user is allowed to see.

À la REST

A natural way to encode such a query as an HTTP request would be to issue an HTTP GET for a URL such as the following:

http://<machine>:<port>/<context root>/TradingApp/Options/BlackScholes/UK-Gas/execute?user=glen.byrne&level=5&speciality=Energy

So we are mapping the resource to a URL and the user and context map are passed as query parameters.

This small JDeveloper project implements a gateway which provides such an interface to OES.

To deploy the gateway you must deploy it to a Weblogic 10.3.5 domain which has been prepared as a WLS SM with the OES 11g Client.  This is a standard operation to configure a WLS domain to support web applications calling to OES.  Once deployed you may test the gateway by accessing the URL http://localhost:7001/oesgateway/sayHello.  This will return the string 'Hello World!'

Once you have configured OES with an Application, Resource Type, Resource and Authorization policy you can then query OES via the gateway by accessing URLs as shown in the example above.  The general pattern is that all context attributes are passed as query parameters and you access a URL of this form: http://localhost:7001/oesgateway/<resource string>/<action>?user=<user name>.

A web browser can be used to access the URLs from the gateway.  A client program accessing the gateway is included in the project and a very simple Java program directly executing an HTTP GET is available here.
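
For illustration, here is a minimal sketch of such a client; the class name and the hardcoded values are illustrative, and it mirrors the PEP API example above, with the context map becoming query parameters:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;
  import java.net.URLEncoder;
  import java.util.Iterator;
  import java.util.LinkedHashMap;
  import java.util.Map;

  public class OESGatewayClient {
      public static void main(String[] args) throws Exception {
          Map context = new LinkedHashMap();
          context.put("level", "5");
          context.put("speciality", "Energy");

          // Build <gateway>/<resource>/<action>?user=...&attr=value&...
          StringBuffer query = new StringBuffer(
              "?user=" + URLEncoder.encode("glen.byrne", "UTF-8"));
          for (Iterator it = context.entrySet().iterator(); it.hasNext();) {
              Map.Entry e = (Map.Entry) it.next();
              query.append('&').append(e.getKey()).append('=')
                   .append(URLEncoder.encode((String) e.getValue(), "UTF-8"));
          }
          URL url = new URL("http://localhost:7001/oesgateway"
              + "/TradingApp/Options/BlackScholes/UK-Gas/execute" + query);

          // Print the gateway's decision (and any obligations) to stdout.
          BufferedReader in = new BufferedReader(
              new InputStreamReader(url.openStream()));
          for (String line; (line = in.readLine()) != null;) {
              System.out.println(line);
          }
          in.close();
      }
  }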

The gateway depends on the Jersey 1.2 API and the OES 11g 11.1.1.5 Client API.

Note that in order for the resolution of user groups to work you need to ensure that the jps-config.xml file where the Gateway is deployed has its identity store configured--look for the following elements in that file:

  <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
    <description>LDAP-based IdentityStore Provider</description>
  </serviceProvider>
  <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
    <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
    <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
  </serviceInstance>
  <serviceInstanceRef ref="idstore.ldap"/>





Reassigning Leaver tasks in OIM11g

The use case

OIM11g allows arbitrarily complex approval rules to be defined via its integration with SOA BPEL workflow.  It also allows the management of the user lifecycle, for example the classic joiner-mover-leaver cycle.

So what happens to a user's outstanding approvals if the user leaves the company or, for example, enters a disabled state for some reason?  Failure to address this point leaves end-user requests stalled, waiting on an approver who may never respond.

The solution

Using a combination of OIM11g's event handlers to detect an event and the SOA Workflow Services API one can easily address this by reassigning leaver tasks to some agreed user.

This JDeveloper project implements an OIM11g scheduled task that scans for users in a Locked state and reassigns their outstanding approval tasks to their manager.  The MDS data for the scheduled task is included in the config directory.  The scheduled task can be loaded into OIM as described here.  One can use the same technique in an event handler to do the reassignment at the moment the user enters a Locked or Disabled state.  Note that once the user is Disabled one cannot reassign the approval tasks...so we have to make the reassignment before the user is disabled.

The task scans all users matching the regular expression parameter to the scheduled task.  For each such user it uses the Workflow Services API to recover any tasks assigned to that user that are in the IWorkflowConstants.TASK_STATE_ASSIGNED or IWorkflowConstants.TASK_STATE_INFO_REQUESTED states--we are only interested in reassigning currently active tasks.  See the getTasksForUser() method.  OIM can recover the parameters (username, password, SOA URL) to talk to SOA using the following code:

  BPELConfig bpelConfig = Platform.getConfiguration().getBPELConfig();

For each task currently assigned to a target user we use the reassignTask() method to reassign the task to the user's manager, or to a hardcoded user if no manager is configured.  The key call to the Workflow Services API is this one, where wfCtx is the Workflow context and taskSvc is a reference to the task service:

  taskSvc.reassignTask(wfCtx, t.getSystemAttributes().getTaskId(),  newAssignee);

In the sample code one can see how to go from the BPEL Config object to recovering a task service and workflow context objects that can be used to do this reassignment.
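
Putting the pieces together, the core loop looks something like this sketch; the getTasksForUser() helper is the one mentioned above (its body uses the task query service and is omitted here), the Task type is the Workflow Services task model class, and the exact type of the assignee argument should be checked against the Javadoc referenced below:

  // Reassign each of the user's active tasks to the manager (or fallback user).
  List tasks = getTasksForUser(wfCtx, userLogin);
  for (Iterator it = tasks.iterator(); it.hasNext();) {
      Task t = (Task) it.next();
      taskSvc.reassignTask(wfCtx, t.getSystemAttributes().getTaskId(), newAssignee);
  }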

Conclusion

OIM11g allows us to detect and respond to user-lifecycle events in an agile way, ensuring that any business processes in flight are kept moving in the face of users entering locked, disabled or deleted states.

References

  • SOA Workflow Services Developers Guide: examples of using the Worklist API
  • Jar files required for the WorkList Services API.  These jar files are available with JDeveloper: Use the Help->Check For Updates menu to install the SOA Extension:
    • soa/modules/oracle.soa.fabric_11.1.1/bpm-infra.jar
    • soa/modules/oracle.soa.fabric_11.1.1/fabric-runtime.jar
    • soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar
    • oracle_common/modules/oracle.xdk_11.1.0/xml.jar
    • oracle_common/modules/oracle.xdk_11.1.0/xmlparserv2.jar
  • SOA Workflow Services API Javadoc




Monday Oct 17, 2011

OIA: Entitlements outside roles in BI Publisher

My colleague Rene has some great postings explaining the importance of keeping an eye on entitlements that fall outside roles whilst developing an RBAC model and how to achieve that within OIA.

In this posting I just add another example report which will expose entitlements falling outside roles but this time formatted for use with BI Publisher.  This will be useful to customers wishing to use BI Publisher as a reporting tool.

When loaded into BI Publisher the report will generate a listing by Business Unit, by Resource and by user for all the user's entitlements that fall outside the role model.  You can specify a Business Unit and Resource as parameters to the report.  The report only includes attributes flagged as minable, allowing it to avoid attributes that are unimportant from an entitlement point of view (like Firstname, for example).

The two BI Publisher files which define this report--the .xdo file which contains the SQL definition and the .rtf formatting file--are available here.

Tuesday Aug 23, 2011

Oracle XE DB and OIM 11g? Nope

The express edition of the Oracle database will work with some of the Identity products, albeit unsupported.  For example, by experience OES will work fine in a development environment with version 10.2.0.1.0 of XE.  Oracle XE will also work with the SOA Suite in a development environment, the doc even providing a hint on getting the RCU for the SOA Suite to work with XE.

However one doesn't get far before running into an issue trying to install OIM11g on XE--the RCU pre-requisite checks for OIM11g fail with 'Error: JVM is not installed on the Database'.  'Java support in the database' is one of the features not included with XE, as documented in the XE Licensing information.

See here for more on the different editions of the Oracle 11g database.


Monday Apr 12, 2010

The Sun Role Manager (Oracle Identity Analytics) webservices interface

Oracle Identity Analytics (formerly Sun Role Manager) provides a web service interface to query and update its data.  Data such as roles, business units, users and audit information can be managed in this way.

This small JDeveloper project consumes the SRM 5.03 WSDL and demonstrates some calls to the web service.

You will need to parameterize src/com/oracle/gte/oia/Main.java with the correct URL and sample role and business unit names for your deployment of SRM.  Recompile and then run com.oracle.gte.oia.Main.class.
The authentication to SRM is done using the WS-Security UsernameToken profile, see here for a description. To provide the username/password to SRM the class src/com/oracle/gte/oia/util/handlers/ClientAuthenticationSOAPHandler.java implements a handler that inserts the appropriate security header into outgoing SOAP messages.
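
For readers who have not written one before, a JAX-WS handler of this kind is small; here is a hedged sketch, with the project's handler remaining the authoritative version (a production token would also set the password Type attribute on the wsse:Password element):

  import java.util.Collections;
  import java.util.Set;
  import javax.xml.namespace.QName;
  import javax.xml.soap.SOAPElement;
  import javax.xml.soap.SOAPEnvelope;
  import javax.xml.soap.SOAPHeader;
  import javax.xml.ws.handler.MessageContext;
  import javax.xml.ws.handler.soap.SOAPHandler;
  import javax.xml.ws.handler.soap.SOAPMessageContext;

  // Inserts a WS-Security UsernameToken header into outgoing SOAP requests.
  public class UsernameTokenHandler implements SOAPHandler<SOAPMessageContext> {
      private static final String WSSE =
          "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";
      private final String user, password;

      public UsernameTokenHandler(String user, String password) {
          this.user = user;
          this.password = password;
      }

      public boolean handleMessage(SOAPMessageContext ctx) {
          if (!Boolean.TRUE.equals(ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY)))
              return true; // only touch outgoing messages
          try {
              SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
              SOAPHeader header = env.getHeader() != null ? env.getHeader()
                                                          : env.addHeader();
              SOAPElement sec = header.addChildElement(
                  new QName(WSSE, "Security", "wsse"));
              SOAPElement tok = sec.addChildElement(
                  new QName(WSSE, "UsernameToken", "wsse"));
              tok.addChildElement(new QName(WSSE, "Username", "wsse"))
                 .addTextNode(user);
              tok.addChildElement(new QName(WSSE, "Password", "wsse"))
                 .addTextNode(password);
          } catch (Exception e) {
              throw new RuntimeException(e);
          }
          return true;
      }

      public boolean handleFault(SOAPMessageContext ctx) { return true; }
      public void close(MessageContext ctx) { }
      public Set<QName> getHeaders() { return Collections.emptySet(); }
  }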

Monday Mar 01, 2010

Weblogic 10.3.2 on Mac OS X

Unsupported configuration, but it can be handy to run Weblogic natively on the Mac for quick tests.  The following command line installs it successfully on Mac OS X 10.6.2 with Java 1.6.0_17 (without the memory settings you can get spurious insufficient disk space errors):

java -Xms1024M -Xmx1024M -XX:MaxPermSize=128m -Dos.name=unix -jar wls1032_generic.jar

Friday Nov 13, 2009

In the mire: cleaning open provisioning tasks in Oracle Identity Manager

Looking at Oracle Identity Manager (OIM) I found I was accumulating open provisioning tasks that had either failed or that I simply no longer wanted.  The problem is the product does not offer a way to delete them.  Furthermore some resources (like the Sun DS Connector) will not allow you to start a new provisioning task for that resource until the old one has completed.  The obvious scheduled task 'Remove Open Tasks', as described here, appears to be more a cosmetic cleanup of some views into open tasks.  In fact, looking into the task code, it does a time based 'DELETE from' on the OTI table--this turns out not to resolve the open provisioning task problem.  So, short of reverting to a clean snapshot of the database, what to do?  I should say that the following is more in the way of an exploration than a recommendation.

The folks on the OTN forum offered some help but that was inconclusive.  Time to roll up the sleeves.

There are 225 tables in the OIM schema.  Which ones are involved when a provisioning task is created?  What we need is a way to diff the database before and after an operation.  This will give an idea of the tables involved and the data model OIM is using, and may allow us to remove those tasks--however recommended that may be.

For the purpose of reverse engineering table usage for individual application actions (such as, in my case, initiating a provisioning task) what we want is something that will indicate table data that has changed and do its best to present those changes in text form.  The delta will typically be small so we do not require a fancy diff capability.  It would be nice if the tool could work against Oracle, MS SQL Server and MySQL.  'Ah feel a wee tool comin' oooon.

This JDBC based tool, DbDump, does its best to dump a specified set of tables to a text file.  You will need to configure the jdbc properties and copy in the database driver jar files.  We can then run a diff tool to compare before and after outputs.  The command line diff tool is adequate, though windiff.exe or FileMerge.app, for example, are easier to read.
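
The essence of the tool is small.  Here is a minimal sketch of the dumping loop, using only JDBC (DbDump itself adds the property file handling, driver loading and output to file):

  import java.sql.Connection;
  import java.sql.ResultSet;
  import java.sql.ResultSetMetaData;
  import java.sql.SQLException;
  import java.sql.Statement;

  // Dump every row of the given tables as one line of text per row, so that
  // two runs of the output can be compared with a plain diff tool.
  public class TableDump {
      public static void dump(Connection con, String[] tables) throws SQLException {
          for (int t = 0; t < tables.length; t++) {
              Statement st = con.createStatement();
              ResultSet rs = st.executeQuery("select * from " + tables[t]);
              ResultSetMetaData md = rs.getMetaData();
              while (rs.next()) {
                  StringBuffer row = new StringBuffer("'" + tables[t] + "'::");
                  for (int c = 1; c <= md.getColumnCount(); c++) {
                      row.append("'").append(md.getColumnName(c)).append(':')
                         .append(rs.getString(c)).append("',");
                  }
                  System.out.println(row);
              }
              rs.close();
              st.close();
          }
      }
  }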

1451c1451
< 'ADMINSERVER_WLSTORE'::'ID:-1','TYPE:-1','HANDLE:2''RECORD:000000707B7365727665723D41646D696E53657276657221686F73743D3132372E302E302E3121646F6D61696E3D626173655F646F6D61696E2173746F72653D41646D696E5365727665725F4F494D5F4A44424353544F5245217461626C653D41646D696E5365727665725F574C53746F72657D980976C5055875C800000124D3BC92EF'
---
> 'ADMINSERVER_WLSTORE'::'ID:-1','TYPE:-1','HANDLE:2''RECORD:000000707B7365727665723D41646D696E53657276657221686F73743D3132372E302E302E3121646F6D61696E3D626173655F646F6D61696E2173746F72653D41646D696E5365727665725F4F494D5F4A44424353544F5245217461626C653D41646D696E5365727665725F574C53746F72657D980976C5055875C800000124D3C43CE8'
20792a20793
> 'AUD_JMS'::'AUD_JMS_KEY:187','AUD_CLASS:UserProfileAuditor','IDENTIFIER:86','JMS_VALUE:BLOB','DELAY:1','FAILED:0','CREATE_DATE:2009-11-8 12.28.7.719000000','UPDATE_DATE:2009-11-8 12.28.7.719000000''PARENT_AUD_JMS_KEY:null'
33785a33787
> 'OBI'::'OBI_KEY:145','OBJ_KEY:11','REQ_KEY:null','ORC_KEY:null','OBI_STATUS:Approved','OBI_DEP_REQUIRED:null','OBI_STAGE_FLAG:2','QUE_KEY:null','USR_KEY:null','OBI_DATA_LEVEL:null','OBI_CREATE:2009-11-8 12.28.7.0','OBI_CREATEBY:1','OBI_UPDATE:2009-11-8 12.28.7.0','OBI_UPDATEBY:1','OBI_NOTE:null''OBI_ROWVER:0000000000000000'
33863a33866
> 'OIU'::'OIU_KEY:84','OBI_KEY:145','ORC_KEY:170','USR_KEY:86','OST_KEY:85','POL_KEY:null','REQ_KEY:null','OIU_PWD_MUST_CHANGE:null','OIU_PWD_FLAGGED:null','OIU_POLICY_BASED:null','OIU_POLICY_REVOKE:null','OIU_SERVICEACCOUNT:0','OIU_DATA_LEVEL:null','OIU_CREATE:2009-11-8 12.28.7.0','OIU_CREATEBY:1','OIU_UPDATE:2009-11-8 12.28.7.0','OIU_UPDATEBY:1','OIU_NOTE:null','OIU_ROWVER:0000000000000002','OIU_LAST_ATTESTED_BY:null','OIU_LAST_ATTESTED_ON:null''OIU_OFFLINED_DATE:null'
33968a33972
> 'ORC'::'ORC_KEY:170','TOS_KEY:32','ORD_KEY:1','PKG_KEY:34','ORC_SUPPCODE:00     ','ACT_KEY:1','REQ_KEY:null','PKH_KEY:null','USR_KEY:86','ORC_ASSIGNED_TO:1','ORC_STATUS:P','ORC_TOS_INSTANCE_KEY:170','ORC_PACKAGE_INSTANCE_KEY:170','ORC_SUBTOSKEY:null','ORC_REFERENCEKEY:null','ORC_DEPENDS:null','ORC_LAST_UPDATE:2009-11-8 12.28.7.0','ORC_LAST_UPDATEBY:1','ORC_SUBORDER:null','ORC_SERVICEORDER:null','ORC_PARENT_KEY:158','ORC_REQUIRED_COMPLETE:0','ORC_ORDERBY_POLICY:null','ORC_TARGET:0','ORC_TASKS_ARCHIVED:null','ORC_DATA_LEVEL:null','ORC_CREATE:2009-11-8 12.28.7.0','ORC_CREATEBY:1','ORC_UPDATEBY:1','ORC_UPDATE:2009-11-8 12.28.7.0','ORC_NOTE:null''ORC_ROWVER:0000000000000002'
34291a34296
> 'OSH'::'OSH_KEY:257','SCH_KEY:241','STA_KEY:4','OSH_ACTION:Engine','OSH_ASSIGN_TYPE:Default task assignment','OSH_ASSIGNED_TO_USR_KEY:1','OSH_ASSIGNED_TO_UGP_KEY:null','OSH_ASSIGNED_BY_USR_KEY:null','OSH_ASSIGN_DATE:2009-11-8 12.28.7.0','OSH_DATA_LEVEL:null','OSH_CREATE:2009-11-8 12.28.7.0','OSH_CREATEBY:1','OSH_UPDATE:2009-11-8 12.28.7.0','OSH_UPDATEBY:1','OSH_NOTE:null''OSH_ROWVER:0000000000000000'
34522a34528
> 'OSI'::'SCH_KEY:241','ORC_KEY:170','MIL_KEY:146','REQ_KEY:null','TLG_KEY:null','RSC_KEY:null','OSI_RECOVERY_FOR:null','OSI_RETRY_FOR:null','OSI_ASSIGNED_TO:null','TOS_KEY:32','PKG_KEY:34','ACT_KEY:1','ORD_KEY:1','ORC_SUPPCODE:00     ','OSI_ASSIGN_TYPE:Default task assignment','OSI_ESCALATE_ON:null','OSI_ASSIGNED_TO_USR_KEY:1','OSI_ASSIGNED_TO_UGP_KEY:null','OSI_RETRY_ON:null','OSI_RETRY_COUNTER:null','OSI_CHILD_TABLE_KEY:null','OSI_CHILD_OLD_VALUE:ecFFyIei7ntqs5tETSu38w==','OSI_ASSIGNED_DATE:2009-11-8 12.28.7.0','SCH_INT_KEY:null','OSI_LOG_KEY:null','OSI_DATA_LEVEL:null','OSI_CREATE:2009-11-8 12.28.7.0','OSI_CREATEBY:1','OSI_UPDATE:2009-11-8 12.28.7.0','OSI_UPDATEBY:1','OSI_NOTE:null''OSI_ROWVER:0000000000000000'
34648a34655
> 'OTI'::'OTI_KEY:190','SCH_KEY:241','SCH_TYPE:null','SCH_STATUS:P','SCH_DATA:null','SCH_PROJ_START:2009-11-8 12.28.7.0','SCH_PROJ_END:2009-11-8 12.28.7.0','SCH_ACTUAL_START:2009-11-8 12.28.7.0','SCH_ACTUAL_END:null','SCH_ACTION:null','SCH_OFFLINED:0','ORC_KEY:170','MIL_KEY:146','OSI_RETRY_FOR:null','OSI_ASSIGNED_TO:null','PKG_KEY:34','REQ_KEY:null','OSI_ASSIGNED_TO_USR_KEY:1','OSI_ASSIGNED_TO_UGP_KEY:null','OSI_ASSIGNED_DATE:2009-11-8 12.28.7.0','ACT_KEY:1','OSI_ASSIGN_TYPE:Default task assignment','PKG_TYPE:Provisioning','STA_BUCKET:Pending','OBJ_KEY:11','OTI_CREATE:2009-11-8 12.28.7.0','OTI_UPDATE:2009-11-8 12.28.7.0','OTI_CREATEBY:1','OTI_UPDATEBY:1','OTI_ROWVER:0000000000000000','OTI_DATA_LEVEL:0''OTI_NOTE:null'
38599a38607
> 'SCH'::'SCH_KEY:241','SCH_TYPE:null','SCH_STATUS:P','SCH_PROJ_START:2009-11-8 12.28.7.0','SCH_PROJ_END:2009-11-8 12.28.7.0','SCH_ACTUAL_START:2009-11-8 12.28.7.0','SCH_ACTUAL_END:null','SCH_DATA:null','SCH_REASON:null','SCH_ACTION:null','SCH_DATA_LEVEL:null','SCH_CREATE:2009-11-8 12.28.7.0','SCH_CREATEBY:1','SCH_UPDATEBY:1','SCH_UPDATE:2009-11-8 12.28.7.0','SCH_NOTE:null','SCH_ROWVER:0000000000000000''SCH_OFFLINED:null'

So the tables involved are: OSI, OSH, SCH, OIU, OTI, ORC and OBI.

Using the ORC_KEY and SCH_KEY the relevant rows can be deleted from the above tables in the order listed.  An order is implied as there are constraints on the tables.

select sch_key from OIMUSER.OTI where orc_key=XXX;

So that is one way to remove the tasks brutally--whether supported or not.

Using the Client API to complete the tasks

Rather than deleting rows one can try to at least complete the tasks manually, making a call to the tcProvisioningOperationsIntf interface (see here for code examples):

tcProvisioningOperationsIntf provIntf =
   (tcProvisioningOperationsIntf) getUtilityFactory().getUtility("Thor.API.Operations.tcProvisioningOperationsIntf");
provIntf.setTasksCompletedManually(tasksArray);


The tasksArray contains a list of task instance ids.  What are these?  The Description field of the open provisioning task in the admin interface contains a number.  It turns out that this number is the ORC key.  However, in order to call into the Oracle API, as suggested on the OTN forum, we need the task instance id.  This turns out to be the SCH_KEY, which can be recovered using the query shown above.
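
Recovering those instance ids programmatically is straightforward in JDBC; a sketch, assuming an open java.sql.Connection con as the OIM schema owner (check the tcProvisioningOperationsIntf Javadoc for the exact element type expected in tasksArray):

  // Collect the task instance ids (SCH_KEY) for a given ORC key, ready to be
  // passed to setTasksCompletedManually().
  PreparedStatement ps = con.prepareStatement(
      "select sch_key from OIMUSER.OTI where orc_key = ?");
  ps.setLong(1, orcKey);
  ResultSet rs = ps.executeQuery();
  List ids = new ArrayList();
  while (rs.next()) {
      ids.add(new Long(rs.getLong(1)));
  }
  rs.close();
  ps.close();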

To use the API to complete the tasks appropriately actually requires two steps.  The first is to make sure that when the task is completed the appropriate status mapping takes place.  Do this in the Design Console by going to the Process Definition of the task, for example 'Create User' for your resource, and then to the 'TaskToObjectStatusMapping' tab.  When a task is completed from the API, the completion code goes to 'MC', for 'Manually Complete'.  So we need to map 'MC' to 'Provisioned' in order to get the task to complete appropriately.  If the task was stuck in the 'System Validation' phase then mapping 'MC' to, for example, 'Revoked' will cause the provisioning event to be completed, allowing us to launch further provisioning tasks without any issues.

Of course these manually completed tasks are still visible in the admin interface.  Perhaps the best way to clean them out would be to run the Task Archival maintenance scripts that come with the product as described here.

Conclusion

OIM does not appear to offer any easy way to clean up unwanted open provisioning tasks.  In fact it does not appear to offer any documented way at all to clean them up.  The best one can do in terms of what the product supports is to complete them and possibly then hide these tasks from view or archive them off.  In dev environments and possibly POCs going straight to the tables may be best, but without an official spec of the data model this is always risky.

Tuesday Oct 27, 2009

Sun Identity Manager ActiveSync with OpenDS

The innovative folks on the OpenDS project are also paying attention to compatibility with existing DSEE applications.  Niiice.

One useful DSEE feature used by Sun Identity Manager (SIM) is the retro changelog.  SIM uses the changelog to pick up ActiveSync events from DSEE, and up till now, while reconciliation for example worked fine against OpenDS, the changelog in OpenDS was not quite as compatible as SIM required.

With this version 2.3 nightly build of OpenDS, however, SIM can start to use the changelog for ActiveSync.  Whilst this is not as yet an officially supported ActiveSync usage, it is very useful for developing SIM customizations against an LDAP repository, for demos or POCs and so on.

To enable the changelog you need to enable the server for replication:

Here is a screen shot showing a create event arriving in the SIM (8.1.04) IDE.  Both the LDAP Adapter and Connector pick up the event successfully.  Caveat: I have not tested all the event types.




Friday Oct 23, 2009

Sun Role Manager 5.0 quick install

Sun Role Manager 5.0 is available for download.

As with any web application the installation involves a bit of unpacking, copying, configuration and database preparation. 

The steps are all documented and I guess folks will script up these tasks as suits them.  The dolphin knocked up a standalone ant build file to help with this and I gilded his lily. The script unpacks the product war file, copies in custom jar files, modifies configuration files (like the jdbc.properties file and log4j.properties file), sanity checks the configuration of the MySQL database, allows the schema name of the database to be changed and rebundles the war file.  Finally it prints some friendly reminders about heap size and environment variables.

The advantage of an ant script is that it is cross platform (tested on Mac OS X and Windows XP).  It is tested against MySQL, but is easily extensible to other databases.  The zip file is here.

To get started, unzip the file and type 'ant'--this shows the targets and help text.

You need to copy dependencies to the custom directory--they will be bundled into the generated war file by the script:

  • The weka jar file for role mining can be downloaded here
  • For repositories other than MySQL you will need to copy the jdbc driver into the custom directory.

Modify the build-config.properties file to define for example the location of the product zip file, the home directory for SRM, the database connectivity parameters and the log file location.

You can now run the script to do the installation.  A typical run would go like this:

ant -projecthelp
ant show-build-properties
ant clean
ant repo-drop-mysql
ant repo-create-mysql
ant dist

You can now deploy the generated war file from the dist directory, for example for Tomcat:

cp dist/rbacx.war ${TOMCAT_HOME}/webapps

Lovely.


Monday Oct 19, 2009

SNMP != JMX

This is a general comparison of SNMP with JMX as monitoring and management technologies.  It will be most useful for those used to one world and confronted by the other or those new to the management world.  For specific information on MIBs or JMX you must seek elsewhere.

SNMP and JMX are both technologies for facilitating the management and monitoring of hardware and software systems.  They both facilitate the reading and writing of data (SNMP: get/set packets, JMX: getter/setter methods).  They both allow messages to be sent to other management components (SNMP: traps, JMX: notifications).  JMX, because of its object oriented nature (see below), more naturally accommodates the notion of invoking methods remotely to trigger management actions.

JMX is the standard API for managing applications in Java; it is part of the J2SE platform (J2SE 5.0 and later).  JMX is defined by JSR 3.  The connector mechanisms which allow remote access are defined elsewhere, e.g. JMX Remote is JSR 160.  SNMP is defined by a bunch of RFCs.
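
To make the JMX side concrete, here is a minimal standard MBean using only the J2SE API (the names are illustrative):

  // In CacheMBean.java: the management interface.  JMX derives the attributes
  // and operations from it by the standard MBean naming pattern.
  public interface CacheMBean {
      int getSize();           // readable attribute "Size"   (SNMP analogue: get)
      void setSize(int size);  // writable attribute          (SNMP analogue: set)
      void flush();            // remotely invocable operation (no direct SNMP analogue)
  }

  // In Cache.java: the implementation, registered with the platform MBean server.
  import java.lang.management.ManagementFactory;
  import javax.management.MBeanServer;
  import javax.management.ObjectName;

  public class Cache implements CacheMBean {
      private int size = 100;
      public int getSize() { return size; }
      public void setSize(int size) { this.size = size; }
      public void flush() { /* empty the cache */ }

      public static void main(String[] args) throws Exception {
          MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
          mbs.registerMBean(new Cache(), new ObjectName("demo:type=Cache"));
          Thread.sleep(Long.MAX_VALUE); // keep alive; inspect with jconsole
      }
  }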

Some comparison points

  • SNMP: (+) classes, libraries and tool kits exist for C and Java.
  • JMX: (-) Limited support for native C products.
  • SNMP: (+/-) Fixed access protocol (SNMPv1,v2,v3)
  • JMX: (+/-) Various transports can be used (JMX Connectors)
  • SNMP: (-) Fixed relational model for representing data (table based).
  • JMX: (+) The data is represented as Java Objects, refined by a set of interfaces and typically implemented according to some design patterns.
  • JMX: defines some additional helper services (eg. managing relationships between JMX objects).
  • SNMP: doesn't define this kind of service.
  • SNMP: (-) you have to be an SNMP expert to deal with SNMP.
  • JMX: (+) you don't have to be a Java or JMX expert to deal with JMX.
  • SNMP: (-) pushes the complexity to management applications: relational model (indexes, inter table indexing, inter MIB indexing etc.), protocol limitations (PDU size, data size, etc.), model complexity (describing complex things with only simple types), specific modeling language (SMI).
  • JMX: (+) makes it possible to keep the complexity at the agent level. (possibility to invoke methods, JMX is Java)
  • SNMP: (+) already defines many standard MIBs
  • JMX: There are few standard models for JMX, for example the J2EE one defined by JSR 77 and the J2SE one defined by JSR 163/174.
  • SNMP: (+) Authorization and Access control is defined in SNMPv3.
  • JMX: Some aspects of Access control are defined in JMX (see the next point).
  • SNMP: (-) Configuration of security in SNMP is either nonexistent (v1,v2) or complex (v3), and cannot usually be mapped to existing infrastructures.
  • JMX: (-/+) Configuration of security for JMX is not totally addressed, but you can usually map it to existing infrastructures (e.g. use native operating system RBAC model).  Configuration of access control is addressed (or at least one way to do it is via Java Permissions), but configuration of authorization is not, except in so far as you can use the JMXMP support for SASL to build on existing configuration you might have elsewhere.

It's important also to set the historical and market context:

  • SNMP: Originates from the hardware management world (eg. routers, switches, computers).
  • JMX: Originates as an attempt to provide a native Java based management technology for Java products.
  • SNMP: has good market presence for hardware components but is generally not so popular for managing more complex software deployments.  There appears to be a phenomenon that a lot of customers ask for it as a check-box item, even though its utility is questionable: for example in DS deployments the native LDAP interface tends to be preferred.  Even for server products where SNMP is already in place, folks tend to agree that SNMP is pretty ugly.
  • JMX: has good uptake in the market. For Java server products it appears to be the natural choice of management technology.

Conclusion

SNMP is best adapted for management/monitoring tasks for hardware or infrastructure type products. It is largely a check-box item with customers purchasing deployments of middleware products.



Monday Aug 10, 2009

Climbing Mount RBAC: shun the snowy bit

There is an image I use that offers an informal way to understand one process for creating roles as part of an Enterprise Role Model (ERM) project.  You will probably never capture 100% of your entitlements in your ERM, but you will capture enough to realize significant Return on Investment (ROI) through improvements to business, infrastructure and compliance processes.  Presenting it in this way as 'Mount RBAC' is an idea I first saw expressed by Squire Earl in the green pastures of the Sun campus in Austin, Texas.

So here is the image:


Where to start: the bottom

Start with the observation that there are typically small sets of entitlements that most users will receive. Usually this is not hard to identify: for example desktop login and email access.  Other candidates at this level would be an entry in and anonymous search access to a corporate white pages directory.  Typically this role would be called something like 'BaseAccess' or 'Employee'.  This level of the model can be thought of as being linked to the notion of worker type.  Typical worker types might be permanent employees, contractors, interns and so on.  We can think of these roles as being quite crude: they capture large numbers of users and define vanilla access to standard systems.  This approach also obeys the principle of least privilege: we will then go on to add additional entitlements to the user based on a finer grained analysis of his business functions and HR attributes.  We can see that this aids automation of the hiring process, for, once a worker is identified with a type we can provision the systems required to get him productive on day one of his job.

Where next: finer grained entitlements

We can proceed by analyzing the HR attributes to uncover further role definitions, linking entitlements to sets of users defined at the large scale in structures like 'Division', continuing down to 'Department' and 'JobFunction' or 'JobTitle'.  Of course the sets are not necessarily all contained inside one another Russian doll style: other attributes like 'Location' or 'BuildingNumber' or say 'ProjectName' are orthogonal to the pure job function and business activity structures.  What we find is that as we move to finer grained analysis it becomes more useful to use tooling to uncover the relationships between entitlements and sets of users.  So we can kick start the ERM definition process by _defining_ obvious roles, but use a tool based role mining process once we have exhausted the more easily defined roles.  Experience here is that efforts that rely solely on the definition approach tend to flounder in the mire of committee-like attempts to determine what the roles should be.  A better approach at this level of granularity is to let tooling mine out the existing relationships and use those roles as the basis from which to refine further.

Where to stop: the snowy bit

Now one problem projects can run into is 'role explosion'.  The problem there is that so many roles are identified that managing those roles starts to be even more costly than managing the original lists of entitlements that we started out with.  This is why Mount RBAC has a snowy bit: we recognize up front that there will be aspects of a user's access rights that are exceptional, temporary or otherwise not worth the effort of bringing into the role model. This does not pose an audit or compliance risk because we do track those entitlements even though they lie outside the role model.

Conclusion

If you put your project's Business Analysts, HR, IT people, and middleware software in a giant bucket and shook it for 12 months, the chances of a meaningful ERM deployment emerging are pretty low.  An alternative is an evolutionary path, each step offering tangible ROI.  The refinement process described here, where we start with large sets of users and work towards smaller sets with finer grained business roles, provides one such approach.  At appropriate points you will need to deploy the right tooling--and stop when you start to get cold feet.
