Monday Oct 19, 2009


This is a general comparison of SNMP with JMX as monitoring and management technologies.  It will be most useful for those used to one world and confronted by the other, or those new to the management world.  For specific information on MIBs or JMX you will need to look elsewhere.

SNMP and JMX are both technologies for facilitating the management and monitoring of hardware and software systems. Both facilitate the reading and writing of data (SNMP: get/set packets; JMX: getter/setter methods). Both allow messages to be sent to other management components (SNMP: traps; JMX: notifications). JMX, because of its object-oriented nature (see below), more naturally accommodates the notion of invoking methods remotely to trigger management actions.
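The get/set and method-invocation points above can be made concrete with a small JMX sketch. This is a hypothetical example (the Cache MBean, its Size attribute and clear() operation are invented for illustration): reading and writing Size is roughly analogous to an SNMP get/set on an OID, while clear() is a management action with no direct SNMP equivalent.

```java
import javax.management.*;
import java.lang.management.ManagementFactory;

public class CacheDemo {
    // Standard MBean convention: the management interface of class Cache
    // must be named CacheMBean.
    public interface CacheMBean {
        int getSize();          // read  (cf. SNMP get)
        void setSize(int size); // write (cf. SNMP set)
        void clear();           // action: no direct SNMP analogue
    }

    public static class Cache implements CacheMBean {
        private int size = 100;
        public int getSize() { return size; }
        public void setSize(int size) { this.size = size; }
        public void clear() { size = 0; }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean with the platform MBean server.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Cache");
        server.registerMBean(new Cache(), name);

        // Attribute access and method invocation through the server.
        server.setAttribute(name, new Attribute("Size", 42));
        System.out.println(server.getAttribute(name, "Size")); // prints 42
        server.invoke(name, "clear", null, null);
        System.out.println(server.getAttribute(name, "Size")); // prints 0
    }
}
```

A management console (jconsole, for instance) would see the same Size attribute and clear operation without any extra work on the agent side.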

JMX is the standard API for managing applications in Java; it is part of the J2SE platform (J2SE 5.0 and later). JMX itself is defined by JSR 3. The connector mechanisms which allow remote access are defined elsewhere, e.g. JMX Remote is JSR 160. SNMP is defined by a series of RFCs.

Some comparison points:

  • SNMP: (+) classes, libraries and tool kits exist for C and Java.
  • JMX: (-) Limited support for native C products.
  • SNMP: (+/-) Fixed access protocol (SNMPv1,v2,v3)
  • JMX: (+/-) Various transports can be used (JMX Connectors)
  • SNMP: (-) Fixed relational model for representing data (table based).
  • JMX: (+) The data is represented as Java Objects, refined by a set of interfaces and typically implemented according to some design patterns.
  • JMX: defines some additional helper services (e.g. managing relationships between JMX objects).
  • SNMP: doesn't define this kind of service.
  • SNMP: (-) you have to be an SNMP expert to deal with SNMP.
  • JMX: (+) you don't have to be a Java or JMX expert to deal with JMX.
  • SNMP: (-) pushes the complexity to management applications: a relational model (indexes, inter-table indexing, inter-MIB indexing, etc.), protocol limitations (PDU size, data size, etc.), model complexity (describing complex things with only simple types) and a specific modeling language (SMI).
  • JMX: (+) makes it possible to keep the complexity at the agent level (methods can be invoked on the agent, and JMX is Java).
  • SNMP: (+) already defines many standard MIBs
  • JMX: There are few standard models for JMX; examples are the J2EE one defined by JSR 77 and the J2SE one defined by JSRs 163/174.
  • SNMP: (+) Authorization and Access control is defined in SNMPv3.
  • JMX: Some aspects of Access control are defined in JMX (see the next point).
  • SNMP: (-) Configuration of security in SNMP is either nonexistent (v1, v2) or complex (v3), and cannot usually be mapped to existing infrastructures.
  • JMX: (-/+) Configuration of security for JMX is not totally addressed, but you can usually map it to existing infrastructures (e.g. use the native operating system's RBAC model).  Configuration of access control is addressed (or at least one way to do it is, via Java Permissions), but configuration of authentication is not, except insofar as you can use the JMXMP support for SASL to build on existing configuration you might have elsewhere.
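The connector and remote-invocation points in the list above can be sketched in a few lines. This is a minimal, hypothetical example using the standard RMI connector (the port 9999 and the URL path "demo" are arbitrary choices): the agent exposes its MBean server, and a management application connects and calls methods on it directly, keeping the complexity server-side.

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServerConnection;
import javax.management.remote.*;

public class ConnectorDemo {
    public static void main(String[] args) throws Exception {
        // Agent side: expose the platform MBean server over the
        // standard RMI connector.
        LocateRegistry.createRegistry(9999);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/demo");
        JMXConnectorServer server = JMXConnectorServerFactory
                .newJMXConnectorServer(url, null,
                        ManagementFactory.getPlatformMBeanServer());
        server.start();

        // Management-application side: connect through the same URL and
        // invoke methods on the remote MBean server directly.
        JMXConnector client = JMXConnectorFactory.connect(url);
        MBeanServerConnection conn = client.getMBeanServerConnection();
        System.out.println("MBeans visible remotely: " + conn.getMBeanCount());
        client.close();
        server.stop();
    }
}
```

Swapping the transport means changing the JMXServiceURL (and possibly adding a connector implementation such as JMXMP); the management code against MBeanServerConnection stays the same.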

It's important also to set the historical and market context:

  • SNMP: Originates from the hardware management world (e.g. routers, switches, computers).
  • JMX: Originates as an attempt to provide a native Java based management technology for Java products.
  • SNMP: has good market presence for hardware components but is generally not so popular for managing more complex software deployments. A lot of customers appear to ask for it as a check-box item even though its utility is questionable: in DS deployments, for example, the native LDAP interface tends to be preferred. Even teams whose server products already have SNMP in place tend to agree that SNMP is pretty ugly.
  • JMX: has good uptake in the market. For Java server products it appears to be the natural choice of management technology.


SNMP is best adapted to management and monitoring tasks for hardware or infrastructure-type products. For customers purchasing deployments of middleware products it is largely a check-box item.


Monday Sep 21, 2009

Defrag your disk assets: defragmenting disks on Mac OS X, Snow Leopard

Upgrading to Snow Leopard on my MacBook Pro laptop, I was curious about disk defragmentation and wondered whether to do a complete fresh install and so on. Received wisdom until now was that by the time one really needs to defragment the disk with Mac OS X, it is time for a new disk in any case.

Mmmm.  Clearly some people have thought about this and do not exactly agree:

Following the mac attorney's article, I did the following:

  • used the native Mac OS X Disk Utility to repair permissions
  • had the Mac do a disk check and repair itself by booting the Mac into safe mode (by keeping shift pressed after the chime at boot time). 
  • My disk was at about 70% utilization, so I archived a lot of software packages, videos and VMs I had lying on the hard disk, so got that down to 50% utilization.
  • I ran the iDefrag demo software which analyzes the disk (takes about 5 minutes or so).  As the article explains, Mac OS X is good at keeping files contiguous but the free space on the disk was at 98% fragmentation.

Mmm...not sure that matters really, but me be thinkin dat aint right nohow, so I bought the full version of iDefrag for 24EUR. 'Hi, I'm a Mac, you will need to pay a third party for system maintenance tools that PC gives you for free.'.

The defragmentation goes like this:

  • start this process after work: it took about 3-4 hours to run the full defrag!
  • use the CDMaker application included with the commercial copy of iDefrag to create a bootable DVD (a CD is too small) so that iDefrag can do its work on the offline hard disk.  CDMaker downloads a boot template from the internet and burns the DVD.  (I tried to get it to load from my Snow Leopard installation disk, but it seemed to take a long time, so I cancelled that and went for the internet option.)
  • Boot onto the DVD (by holding down the 'C' key after chime at boot time).
  • Choose to run a full defrag.

Prior to the defragmentation the free space was pretty much uniformly distributed across the disk; afterwards it looks a lot better:

Monday Aug 10, 2009

Climbing Mount RBAC: shun the snowy bit

There is an image I use that offers an informal way to understand one process for creating roles as part of an Enterprise Role Model (ERM) project.  You will probably never capture 100% of your entitlements in your ERM, but you will capture enough to realize significant Return on Investment (ROI) by improvements to business,  infrastructure and compliance processes.  Presenting it in this way as 'Mount RBAC' is an idea I first saw expressed by Squire Earl in the green pastures of the Sun campus in Austin, Texas.

So here is the image:

Where to start: the bottom

Start with the observation that there are typically small sets of entitlements that most users will receive. Usually this is not hard to identify: for example desktop login and email access.  Other candidates at this level would be an entry in and anonymous search access to a corporate white pages directory.  Typically this role would be called something like 'BaseAccess' or 'Employee'.  This level of the model can be thought of as being linked to the notion of worker type.  Typical worker types might be permanent employees, contractors, interns and so on.  We can think of these roles as being quite crude: they capture large numbers of users and define vanilla access to standard systems.  This approach also obeys the principle of least privilege: we will then go on to add additional entitlements to the user based on a finer grained analysis of his business functions and HR attributes.  We can see that this aids automation of the hiring process, for, once a worker is identified with a type we can provision the systems required to get him productive on day one of his job.
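The bottom layer described above can be sketched as a simple data structure. This is a hypothetical illustration (the worker types, role names and entitlement names are all invented): a worker type maps to one coarse base role, and day-one provisioning grants exactly that role's entitlements, in keeping with least privilege.

```java
import java.util.*;

public class BaseAccessDemo {
    // A role is just a named set of entitlements.
    record Role(String name, Set<String> entitlements) {}

    // Bottom of Mount RBAC: one crude role per worker type.
    static final Map<String, Role> BY_WORKER_TYPE = Map.of(
        "Employee", new Role("BaseAccess",
            Set.of("desktop-login", "email", "whitepages-entry")),
        "Contractor", new Role("ContractorBase",
            Set.of("desktop-login", "email")));

    // Least privilege: start from the base role only; finer-grained
    // roles are layered on later from HR attributes.
    static Set<String> dayOneEntitlements(String workerType) {
        Role r = BY_WORKER_TYPE.get(workerType);
        return r == null ? Set.of() : r.entitlements();
    }

    public static void main(String[] args) {
        System.out.println(dayOneEntitlements("Contractor"));
    }
}
```

The point of the sketch is the shape, not the names: classifying a new hire by worker type is enough to drive automated provisioning of the standard systems on day one.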

Where next: finer grained entitlements

We can proceed by analyzing the HR attributes to uncover further role definitions, linking entitlements to sets of users defined at the large scale in structures like 'Division', continuing down to 'Department' and 'JobFunction' or 'JobTitle'.  Of course the sets are not necessarily all contained inside one another Russian-doll style: other attributes like 'Location' or 'BuildingNumber' or say 'ProjectName' are orthogonal to the pure job function and business activity structures.  What we find is that as we move to finer-grained analysis it becomes more useful to use tooling to uncover the relationships between entitlements and sets of users.  So we can kick-start the ERM definition process by _defining_ obvious roles, but use a tool-based role mining process once we have exhausted the more easily defined roles.  Experience here is that efforts relying solely on the definition approach tend to flounder in the mire of committee-like attempts to determine what the roles should be.  A better approach at this level of granularity is to let tooling mine out the existing relationships and use those roles as the basis from which to refine further.
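The simplest form of the role mining idea above can be sketched as follows. This is a deliberately naive, hypothetical illustration (real mining tools use much richer clustering and lattice techniques): cluster users who hold identical entitlement sets, and treat each cluster above a size threshold as a candidate role for analysts to refine.

```java
import java.util.*;
import java.util.stream.*;

public class RoleMiningDemo {
    // Group users by their exact entitlement set; keep only groups with
    // at least minUsers members as candidate roles.
    static Map<Set<String>, List<String>> mineCandidateRoles(
            Map<String, Set<String>> userEntitlements, int minUsers) {
        return userEntitlements.entrySet().stream()
            .collect(Collectors.groupingBy(
                Map.Entry::getValue,
                Collectors.mapping(Map.Entry::getKey, Collectors.toList())))
            .entrySet().stream()
            .filter(e -> e.getValue().size() >= minUsers)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, Set<String>> users = Map.of(
            "alice", Set.of("crm", "billing"),
            "bob",   Set.of("crm", "billing"),
            "carol", Set.of("hr-admin"));
        // One candidate role emerges: {crm, billing}, shared by alice and bob.
        System.out.println(mineCandidateRoles(users, 2));
    }
}
```

Even this toy version shows why tooling beats committees here: the candidate roles fall out of the existing entitlement data rather than being argued into existence.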

Where to stop: the snowy bit

Now one problem projects can run into is 'role explosion'.  The problem there is that so many roles are identified that managing those roles starts to be even more costly than managing the original lists of entitlements that we started out with.  This is why Mount RBAC has a snowy bit: we recognize up front that there will be aspects of a user's access rights that are exceptional, temporary or otherwise not worth the effort of bringing into the role model. This does not pose an audit or compliance risk because we do track those entitlements even though they lie outside the role model.


If you put your project's Business Analysts, HR, IT people, and middleware software in a giant bucket and shook it for 12 months, the chances of a meaningful ERM deployment emerging are pretty low.  An alternative is an evolutionary path, each step offering tangible ROI. The refinement process described here, where we start with large sets of users and work towards smaller sets with finer-grained business roles, provides one such approach.  At appropriate points you will need to deploy the right tooling--and stop when you start to get cold feet.

Friday Feb 06, 2009

Nokia E71, email, web, SFR

For email, first configure an inbox: Menu->Communic->Messaging.

Select Options->Settings->E-mail->New mailbox and configure a new mailbox with your mail server settings.

Now set up an access point, call it 'Internet Mobile (email)'  using the Access Point Name websfr.

Access Points are managed here: Menu->Tools->Settings->Connection->Access Points

Now when you choose to retrieve email from your inbox, select 'Internet Mobile (email)' as the access point.

For web browsing, set up another Access Point using the Access Point name wapsfr.  Select this one when prompted for an access point by the web browser (the globe icon).

Nokia E71, Mac OS X 10.5.6, SFR, modem

The ginger dolphin and I regularly rediscover how to use our Nokia E71 mobile phones as modems for our Macs, connecting over Bluetooth via SFR.  It does work, but sometimes it seems to get stuck in a 'Disconnecting' state.  Logout/Login to the Mac to clear that.

The setup is as follows:

1. Get the modem scripts from here
2. Unzip/copy them to "/Library/Modem Scripts"
3. Set up a Bluetooth connection called NokiaE71 for your phone
3.1 Network Preferences-> + ->Bluetooth

3.2 Enter websfr for the Telephone Number

3.3 Hit Advanced
3.4 Configure as shown

3.5 Take your E71 and ensure that Bluetooth is turned on: Menu->Tools->Settings->Connection->Bluetooth

3.6 Network Preferences->NokiaE71->Set up Bluetooth Device

3.7 Hit '+' and allow the wizard to discover your phone

3.8 You now have a Bluetooth connection which can connect to your phone and use it as a modem.

4. To use it:
4.1 Network Preferences->NokiaE71->Connect

It will connect to your phone--you must accept the connection on the phone.

Thursday Jan 29, 2009

'Government' is not a four letter word

In certain circles 'Government' is a dirty word.  Amusing then to watch those financial chaps running back to Government for help.  However in a brazen display, worthy of Dick Turpin, they have put a new perspective on the phrase 'Tax is Theft' by themselves making off with our tax euros! Public subsidy, private profit.  Lovely.

In Identity circles the Government sector presents interesting challenges.  Among the issues that come up in the government sector are federation, scalability, compliance and usability.

  • Federation is key for government because they have requirements to federate user identity and data across different government organizations--the department dealing with your health care data is not the one managing your driving license.  Here is a nice description of one EU federation use case.
  • The scalability issue arises due to the size of the user population (think of China!).   For example, most of the provisioning solutions on the market have grown out of enterprise scale deployments--moving up to scalability levels for government deployments requires innovative techniques and architectures.
  • Compliance is key because governments, in principle, are subject to more stringent laws on managing information about people.
  • Usability is key because we can make no assumptions about the nature of the user's knowledge.  In an enterprise we can ensure everyone has taken the 'Managing your Profile' training. We cannot do this so easily with 60 million users.  Internationalization, for example, is something that must be addressed up front, not as an afterthought.

The EU appears to be trying to get its multiple heads around 'identity' as part of the eGovernment program.  There, you can find a link to their 'eIDM' roadmap.

Well I look forward to some fun POCs with the Sun Identity Management products in response to EU eGovernment RFPs!

Blog name could be better

In which Rob reflects on the quality of his blog name.


