Thursday Jan 14, 2010

2010: The Year of Entitlements

2010 opens with the Identity Management field steaming forward and starting to wrestle with entitlements management.  This and the next few blogs will look at definitions, problem areas, and the solutions emerging on the fine-grained entitlements front.


Wednesday Dec 16, 2009

Never Use Email As A Unique Identifier

This blog could just be the title and that's it. Nuff said.

One major mistake I see over and over again is the use of email addresses as a way to track unique user accounts.

Seems like a good idea.  When a new user is created, an email account is usually one of the first things they get provisioned. The email string is unique to the user, so why not use it as a way to identify user accounts across all systems? It's almost always a data element in the LDAP directory, so it's available across the network to any application looking to authenticate a user.   Next thing you know, the ERP system uses it to track user accounts, the healthcare web site uses it, and so on.

So what's the big deal?

The problem is twofold. First, by using a user's email account name, you are compromising the overall security of your domain.  Give me your business card and I already have one half of your login sequence.  Email account names are easy to guess, leaving an attacker only the password to work out to gain access to many corporate systems.

Second, email account names may not be permanent.  Names change when people get married. Users may want mail aliases to use nicknames or variations. Or even bigger (and this one hits close to home right now), the name of the mail domain may change.   If and when Sun gets acquired, every employee of Sun will most likely get a mail account in the oracle.com domain.  Employeename@sun.com becomes employeename@oracle.com.  All audit trails become much more complex because you have to track across multiple user IDs to recreate a user's access.

So what is recommended?  An employee number, preferably longer than six digits. Most HR systems assign each employee a unique number.  The number is usually not easily known.  It stays the same when the employee gets married or changes their name.  And it stays with the employee forever, even if they leave and then return later.  Adding a few letters as well, say the user's initials, helps ensure uniqueness across all identity domains.
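
To make that concrete, here is a minimal sketch in Java. The ID format (initials plus a zero-padded employee number) is just one I made up for illustration; the point is that the identifier comes from HR data, not from the mail system.

```java
// Sketch: derive a stable account identifier from HR data instead of email.
// The format (initials + zero-padded employee number) is illustrative only.
public final class UserId {

    /**
     * Builds an identifier such as "SO0012345" from a user's initials and
     * HR-assigned employee number. The number is zero-padded to seven digits;
     * it is the number portion that survives name and mail-domain changes.
     */
    public static String build(String firstName, String lastName, long employeeNumber) {
        if (firstName == null || firstName.isEmpty()
                || lastName == null || lastName.isEmpty()) {
            throw new IllegalArgumentException("names must be non-empty");
        }
        String initials = ("" + firstName.charAt(0) + lastName.charAt(0)).toUpperCase();
        return initials + String.format("%07d", employeeNumber);
    }

    public static void main(String[] args) {
        System.out.println(build("Sean", "ONeill", 12345)); // prints SO0012345
    }
}
```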

Monday Dec 07, 2009

Identity Crisis is Back Online

Identity Crisis is back online with some changes.

Thursday Jun 19, 2008

New IdM 8.0 Data Exporter

The new Identity Manager 8.0 data exporter is a major new feature. As mentioned before, the architecture for Identity Manager is "data sparse", meaning we mostly store metadata about users and accounts, not the actual values. This greatly reduces the chance of passing "stale data" to systems under management. When you want to know a user's email address, Identity Manager retrieves it from the email system, not from a repository. You have the freshest data, not what you think it was the last time you synced up.
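
To illustrate the idea, here is a tiny Java sketch. The interface and class names are invented for this example, not IdM's actual API; the point is simply that attribute reads defer to the authoritative system at request time.

```java
// Sketch of the "data sparse" idea: attribute reads go to the authoritative
// resource at request time, not to a locally stored copy. These types are
// illustrative only, not IdM's actual API.
interface ResourceAdapter {
    /** Fetches the current value of an attribute directly from the managed system. */
    String fetchAttribute(String accountId, String attributeName);
}

public class EmailLookup {
    private final ResourceAdapter mailSystem;

    public EmailLookup(ResourceAdapter mailSystem) {
        this.mailSystem = mailSystem;
    }

    /** Always returns what the email system says right now; nothing is cached. */
    public String emailAddressOf(String accountId) {
        return mailSystem.fetchAttribute(accountId, "mail");
    }
}
```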

This model has proven to be the correct approach in provisioning design. But it comes up short when customers want to do more auditing and performance analysis. When all you have is current, fresh data, it's tough to answer questions like "who had access to the financial system the first two weeks of the month when a security breach occurred?" or "how long on average does it take for a user to be provisioned?". We can do it, but it gets messy in the audit logs.

So 8.0 now has a data exporter subsystem built in. It lets the deployment team determine which IdM objects they want to report on, capture that data from within IdM, and post it to a series of staging tables. These staging tables can then be read by the warehouse ETL process to bring the data into the warehouse.

The process is really quite elementary. A data export task runs and forms the export data from the underlying IdM objects. You can configure the task to bring out only information that has changed since the last time it ran. The data is loaded into the staging tables and then transferred into the customer's data warehouse of choice.
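
A warehouse-side pull from the staging tables might look something like the Java sketch below. The JDBC URL, table, and column names here are placeholders; the real names come from the staging-table scripts that ship with IdM 8.0.

```java
import java.sql.*;

// Sketch of an ETL pull against the exporter's staging tables, using a
// simple timestamp watermark to fetch only rows staged since the last run.
// Connection details and the export_user table/columns are hypothetical.
public class StagingTablePull {
    public static void main(String[] args) throws SQLException {
        Timestamp lastRun = Timestamp.valueOf("2008-06-01 00:00:00"); // watermark from previous run

        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//repo-host:1521/IDM", "etl", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT id, name, export_date FROM export_user WHERE export_date > ?")) {
            ps.setTimestamp(1, lastRun);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Hand each changed row to the warehouse load step.
                    System.out.printf("%s %s %s%n",
                            rs.getString("id"), rs.getString("name"),
                            rs.getTimestamp("export_date"));
                }
            }
        }
    }
}
```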

There are two schemas involved. The first is an ObjectClass schema that helps form the object data within IdM into something that matches the staging table schema. IdM 8.0 comes with scripts to build these staging tables in all of the supported repositories. They represent most of the important objects within IdM. If you want to export extended attributes, you will have to modify these ObjectClass schemas and tables. The second is a set of export schemas that read the staging tables and help shape the data into something the warehouse beans can consume. IdM also provides a way to connect to the external tables and perform basic forensic reports from within IdM.

The beauty of this dual-schema staging approach is that it abstracts the underlying structure of IdM from the data export interface, allowing changes to be made to IdM in future versions without "breaking" existing export configurations. IdM can change and the warehouse can change without requiring a complete reconfiguration.

The new data export feature uses Hibernate as the underlying transformation engine, which aids in mapping IdM's Java objects into relational database table structures. IdM 8.0 provides default Warehouse Interface Code (WIC): Java classes that define the underlying schema of common IdM objects. Not all IdM objects are defined, but the important ones are. They should work in the majority of IdM deployments, but if extended or custom attributes need to be exported, these WIC classes will need to be redefined and regenerated. We include both binary and source versions of this code to help with the deployment configuration.
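
For a feel of the mapping style, here is a generic Hibernate/JPA-annotated bean. To be clear, this is not one of the shipped WIC classes; every name in it is invented for illustration.

```java
import java.util.Date;
import javax.persistence.*;

// Illustrative only: a generic Hibernate/JPA-mapped bean showing the kind of
// object-to-table mapping the WIC layer performs. The class, table, and
// column names are hypothetical, not the shipped WIC classes.
@Entity
@Table(name = "EXPORT_USER")
public class ExportedUser {

    @Id
    @Column(name = "ID")
    private String id;            // IdM object id

    @Column(name = "NAME")
    private String name;          // account name

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "EXPORT_DATE")
    private Date exportDate;      // when this row was staged

    // An extended attribute would get a new field and column mapping here,
    // after which the schema and staging tables are regenerated.
    @Column(name = "COST_CENTER")
    private String costCenter;

    // Hibernate needs a no-arg constructor plus getters/setters (omitted for brevity).
}
```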

If you are considering the data exporter feature, be sure to weigh the impact it will have on the server it is deployed on. You may want a dedicated server for the export engine, as it is resource intensive. You definitely do not want to run it on a server that is also serving the user GUI, as this will slow response times. You can schedule the exporter tasks to run in off hours to lower the impact. Good news: the new data exporter subsystem has JMX interfaces on both the pre- and post-staging-table beans, so you can monitor how fast the exporter is working.
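
A simple monitoring client could poll those beans along the lines of the sketch below. The JMX service URL and MBean ObjectName are placeholders; check the documentation for the names the exporter actually registers.

```java
import javax.management.*;
import javax.management.remote.*;

// Sketch of polling the exporter's JMX beans from a remote monitoring client.
// The service URL and ObjectName are hypothetical; the real ones depend on
// how the IdM server and exporter are configured.
public class ExporterMonitor {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://idm-host:9999/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName exporter = new ObjectName("IdM:type=DataExporter,stage=post");
            // Dump every exposed attribute rather than guessing specific names.
            for (MBeanAttributeInfo attr : mbsc.getMBeanInfo(exporter).getAttributes()) {
                System.out.println(attr.getName() + " = "
                        + mbsc.getAttribute(exporter, attr.getName()));
            }
        }
    }
}
```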

More in future posts. For now, good documentation is available from our public website.