Tuesday Jan 26, 2010

Improving Directory (DSEE) Import Rate Through ZFS Caching

As we all know, importing data into the directory database is the first step in building a directory service. Importing is equally important when recovering from a directory disaster, such as inadvertent corruption of the database due to a hardware failure or a buggy application. In that scenario, a nightly binary backup of the database or an archived LDIF file can save the day. Furthermore, if your directory has a large number of entries (tens of millions), the import process can be time consuming. It is therefore very important to fine-tune the import process in order to reduce initialization and recovery time.

Most import tuning recommendations have focused on the write capabilities of the disk subsystem. Undeniably, that is the most important ingredient of the import process. However, the input to the import process is an LDIF file, which is used to initialize and (re)build the directory database. As demonstrated by our recent performance testing effort, the location of that LDIF file is also very important. I'll concentrate mainly on ZFS in this post, as time and again it has proven to be the ideal filesystem for the Directory. Note that even the smallest gain in the import rate can save you hours, especially if your LDIF file has tens of millions of entries.

Generally speaking, there are a few gotchas to keep in mind for the import process. The first is to ensure that you have separate partitions for your database, logs and transaction logs (this is true for any filesystem); for ZFS this translates into separate pools. Similarly, it is recommended to place the LDIF file on a pool that is not used for any other purpose during the import. This maximizes the read I/O available to that pool, since it is not shared with any other process. In ZFS, the Adaptive Replacement Cache (ARC) plays an important role in the import process, as seen in the table below. The ZFS caches can be controlled via the primarycache and secondarycache properties, set with the zfs set command. This excellent blog explains these caches in detail. To understand and prove the effectiveness of these caches, we ran a few import tests on a SunFire X4150 system with LDIF files of 3 million and 10 million entries. The LDIF files were generated from the telco.template via make-ldif. Details about the hardware, OS and ZFS configuration, along with other useful commands, are listed in the Appendix.
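For reference, the cache combinations in the table below were applied to the ldifpool with commands along these lines (a sketch of the pattern, not the exact test harness):

	zm1 # zfs set primarycache=metadata ldifpool
	zm1 # zfs set secondarycache=none ldifpool
	zm1 # zfs get primarycache,secondarycache ldifpool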

Dataset      primarycache (6 GB ARC)   secondarycache   Time taken (sec)   Import Rate (entries/sec)

3 Million    all                       all                     887                3382.19
             metadata                  metadata               1144                2622.38
             metadata                  none                   1140                2631.58
             none                      none                   1877                1598.3
             all                       none                    909                3300.33

10 Million   all                       all                    3026                3304.69
             metadata                  metadata               3724                2685.29
             metadata                  none                   3710                2695.42
             none                      none                   7945                1258.65
             all                       none                   3016                3315.65
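The import-rate column is simply entries divided by elapsed seconds, and can be sanity-checked with a one-liner (shown here for the two all/all runs):

```shell
# Import rate = entries / elapsed seconds (values from the table above)
rate_3m=$(awk 'BEGIN { printf "%.2f", 3000000/887 }')
rate_10m=$(awk 'BEGIN { printf "%.2f", 10000000/3026 }')
echo "3M: $rate_3m entries/sec, 10M: $rate_10m entries/sec"
```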

The table shows the results of various combinations of primarycache and secondarycache on the ldifpool only. The db pool, where the directory database is created, always had primarycache and secondarycache set to all. The astute reader will observe from the Appendix that the ZFS Intent Log (ZIL) is actually configured on flash memory. This did not skew our results, as we are only concerned with the ldifpool (where the LDIF file resides).

Going back to the table, the primarycache (the ARC, in DRAM) is, as expected, the key catalyst in read performance. Disabling it causes a catastrophic drop in the import rate, primarily because prefetching also gets disabled and many more reads have to go directly to disk. The charts below (data obtained via iostat -xc) depict this very clearly: the disks are a lot busier reading when primarycache is set to none during the 3 million entry import.
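The disk-utilization data behind those charts was gathered with a simple invocation like the following (5 second samples; the %b column shows how busy each disk is):

	zm1 # iostat -xc 5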

So far I have concentrated on the primarycache (ARC). What about the secondarycache (L2ARC)? Typically the secondarycache is utilized optimally with a flash memory device. We did have a flash memory device (Sun Flash F20) in the ldifpool; however, our reads were sequential, and by design the L2ARC does not cache sequential data. So for this particular use case the secondarycache did not come into play, as is evident from the results in the table. Had we limited the ARC to 1 GB or less, the prefetches might have "spilled" over to the L2ARC, and the L2ARC would have contributed more.
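For reference, the ARC cap is set via /etc/system (see the 6 GB setting in the Appendix); a 1 GB cap would look like this, with a reboot required for it to take effect:

	* Limit ZFS ARC to 1 GB
	set zfs:zfs_arc_max = 0x40000000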

Finally, a disclaimer: since the intent of this exercise is to show the effect of ZFS caches, the import rates in the table are for comparison only and are not a benchmark. I would also like to thank my colleagues who helped me with this post: Brad Diggs, Pedro Vazquez, Ludovic Poitou, Arnaud Lacour, Mark Craig, Fabio Pistolesi, Nick Wooler and Jerome Arnou.


	zm1 # uname -a
	SunOS zm1 5.10 Generic_141445-09 i86pc i386 i86pc

	zm1 # cat /etc/release 
	                       Solaris 10 10/09 s10x_u8wos_08a X86
	           Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
	                        Use is subject to license terms.
	                           Assembled 16 September 2009

	zm1 # cat /etc/system | grep -i zfs
	* Limit ZFS ARC to 6 GB
	set zfs:zfs_arc_max = 0x180000000
	set zfs:zfs_mdcomp_disable = 1
	set zfs:zfs_nocacheflush = 1

	zm1 # zfs set primarycache=all ldifpool
	zm1 # zfs set secondarycache=all ldifpool

	zm1 # echo "::memstat" | mdb -k
	Page Summary                Pages                MB  %Tot
	------------     ----------------  ----------------  ----
	Kernel                     189405               739    2%
	ZFS File Data               52657               205    1%
	Anon                       184176               719    2%
	Exec and libs                4624                18    0%
	Page cache                   7575                29    0%
	Free (cachelist)             3068                11    0%
	Free (freelist)           7944877             31034   95%

	Total                     8386382             32759
	Physical                  8177488             31943

NOTE: The system had three ZFS pools. The "db" pool stored the directory database and was striped across 6 SATA disks with the ZIL on flash memory. The "ldifpool" pool was where the ldif file, transaction
and access logs were located. The transaction and access logs are not used during the import, so the pool was entirely dedicated to the ldif file.

	zm1 # zfs get all ldifpool | grep cache
	ldifpool  primarycache          none                   local
	ldifpool  secondarycache        none                   local

	zm1 # zpool list   
	db       816G  2.25G  814G     0%   ONLINE  -
	ldifpool 136G  93.0G  43.0G    68%  ONLINE  -
	rpool    136G  75.6G  60.4G    55%  ONLINE  -

	zm1 # zpool status -v
	  pool: db
	 state: ONLINE
	 scrub: none requested

	        NAME        STATE     READ WRITE CKSUM
	        db          ONLINE       0     0     0
	          c0t1d0    ONLINE       0     0     0
	          c0t2d0    ONLINE       0     0     0
	          c0t3d0    ONLINE       0     0     0
	          c0t4d0    ONLINE       0     0     0
	          c0t5d0    ONLINE       0     0     0
	          c0t6d0    ONLINE       0     0     0
	          c2t0d0    ONLINE       0     0     0

	errors: No known data errors

	  pool: ldifpool
	 state: ONLINE
	 scrub: none requested

	        NAME        STATE     READ WRITE CKSUM
	        ldifpool    ONLINE       0     0     0
	          c0t7d0    ONLINE       0     0     0
	          c2t3d0    ONLINE       0     0     0

	errors: No known data errors

	  pool: rpool
	 state: ONLINE
	 scrub: none requested

	        NAME        STATE     READ WRITE CKSUM
	        rpool       ONLINE       0     0     0
	          c0t0d0s0  ONLINE       0     0     0

	errors: No known data errors

	ds@dsee1$ du -h telco_*
	  48G   telco_10M.ldif
	  14G   telco_3M.ldif

	ds@dsee1$ grep cache dse.ldif | grep size
	nsslapd-dn-cachememsize: 104857600
	nsslapd-dbcachesize: 104857600
	nsslapd-import-cachesize: 2147483648
	nsslapd-cachesize: -1
	nsslapd-cachememsize: 1073741824

Monday Nov 23, 2009

SAMLv2 Account Mapping with OpenSSO and Transient Federation

By default OpenSSO uses Persistent Federation for account linking between an IDP and an SP when SAMLv2 is used. This means two things from the point of view of an LDAP administrator:

1. Ideally, the data stores on both the IDP and the SP should have the OpenSSO schema.
2. The user entry should also be writable by the Bind DN defined in the Data Store.

To recap: for persistent federation, OpenSSO writes two attributes, sun-fm-saml2-nameid-infokey and sun-fm-saml2-nameid-info, to the user's entry. The sun-fm-saml2-nameid-infokey acts as the opaque handle: it holds a uniquely generated random key that is common between the IDP and SP so that the two accounts can be linked. Instead of using these two attributes, you can also specify your own, under Configuration->Global->SAMLv2 Service Configuration.
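As an illustration, a federated user entry ends up carrying values along these lines. The DN and key below are made up, and the exact value syntax may vary by build; roughly, the infokey combines the IDP entity ID, the SP entity ID and the random key:

	dn: uid=jdoe,ou=people,dc=example,dc=com
	sun-fm-saml2-nameid-infokey: http://idp.example.com:80/opensso|http://sp.example.com:80/opensso|C8aDvvoli3
	sun-fm-saml2-nameid-info: http://idp.example.com:80/opensso|http://sp.example.com:80/opensso|C8aDvvoli3|...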

To achieve this linking, a first-time user first authenticates to the IDP and then to the SP; this way the user manually provides the link between the two accounts. Once the link is established (by writing the above two attributes to the user's entry in each repository), the user no longer has to provide credentials at the SP. This is what federating an account is all about.

There are, however, scenarios where one or both of the (LDAP) repositories that hold the user entry are read-only and/or no schema modification is allowed. This mandates the use of Transient Federation, which does not write anything back to the user repositories, eliminating the need to add custom schema and allowing the use of a read-only repository.

To use transient federation, all you have to do is pass NameIDFormat=transient as a query parameter to the federation (SOAP) end point servlets. For example:
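For instance, an SP-initiated URL would look something like this (hostnames and entity IDs are hypothetical; the idpssoinit servlet takes the same parameter for the IDP-initiated case):

	http://sp.example.com:8080/opensso/spssoinit?metaAlias=/sp&idpEntityID=http://idp.example.com:8080/opensso&NameIDFormat=transient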


However, by default, transient federation account mapping on the SP side maps to the anonymous user, as OpenSSO needs a physical object to create a session (this is not entirely true, but that is a topic for another day). That means there is a many-to-one mapping from the IDP to the SP. If you are passing attributes or other per-user information, this is not very desirable.

To overcome the anonymous mapping, you need an alternate way to link the two disparate accounts together, which is the responsibility of the Account Mapper. The OpenSSO engineers have already thought about these scenarios and added out-of-the-box functionality to the account mapper to support them.

Below are two ways of doing it without any customization to the account mapper. Both require user repositories and, obviously, a common attribute (and value) that links the accounts together (we would like to read your mind and provide a mind mapper, but that is not possible with today's technology). Both methods utilize transient federation, so nothing is written to the data store (user repository).

Method 1

On the hosted IDP

1. Click on Federation->IDP name->Assertion Content
2. Modify (delete and re-add) the "transient" NameID format entry so that it maps to a user attribute, as follows.

For example: urn:oasis:names:tc:SAML:2.0:nameid-format:transient=uid

On the hosted SP

1. Click on Federation->SP name->Assertion Processing
2. On the account mapper check "Use name ID as user ID"

*** Note: the above method requires Express Build 8 or later on the SP side.

Method 2

On the hosted IDP

1. Click on Federation->IDP name->Assertion Processing
2. In the Attribute Map, add idpattribute=spattribute. For example: uid=uid.

On the hosted SP

1. Click on Federation->SP name->Assertion Processing
2. Check Auto Federation and provide the attribute name specified in step 2 above, for example uid.

*** Make sure that NameIDFormat=transient is used as a query parameter to either the idpssoinit or spssoinit servlet.

Friday Oct 30, 2009

Installing Oracle WebLogic 10.3.1 (11gR1) on Mac OS X

Yes, there are quite a few blogs on this, but none of them is as complete as I would like, so I am documenting my experience here.

1) First download the bits from Oracle. The key is to download the "Generic" Package Installer from here.

2) Before installing, you have to trick the installer into thinking that the local JDK is the generic Sun JDK. If you skip this step, the installer will not accept the default Mac OS X JDK and will complain that it is invalid.

$ cd /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
$ sudo mkdir -p jre/lib
$ cd jre/lib
$ sudo touch rt.jar
$ sudo touch core.jar

3) Now install WLS using the following command. The installation is pretty straightforward.

$ java -Xmx1024m -Dos.name=unix -jar wls1031_generic.jar

4) The next challenge is to overcome the java.lang.OutOfMemoryError that occurs when you try to access the console at http://localhost:7001/console, which causes the server to hang.

To recover from this error you actually have to kill the JVM. To prevent it from recurring, edit the user_projects/domains/mydomain/bin/setDomainEnv.sh script and change the line "if [ "${JAVA_VENDOR}" = "Unknown" ] ; then" to "if [ "${JAVA_VENDOR}" = "Sun" ] ; then".
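If you prefer to script that edit, a sed one-liner such as the following works (it keeps a .bak backup; the path assumes the default "mydomain" domain):

$ cd user_projects/domains/mydomain/bin
$ sed -i.bak 's/"${JAVA_VENDOR}" = "Unknown"/"${JAVA_VENDOR}" = "Sun"/' setDomainEnv.sh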

5) It is also recommended to set USER_MEM_ARGS="-Xms256m -Xmx512m -XX:MaxPermSize=128m" in the startWebLogic.sh script. I added this as the first line.

6) Finally start the server

$ cd user_projects/domains/mydomain && ./startWebLogic.sh

Wednesday Sep 16, 2009

OpenSSO 8 & SAML v2 AttributeStatement

A very useful and essential feature of OpenSSO is attribute mapping, which enables you to send additional attributes in the SAMLv2 assertion/response to the Service Provider. Once the attribute mapping is defined (either from the GUI under the entity's "Assertion Processing" tab or in the metadata itself), the map is sent as a name-value pair to the Service Provider. Keep in mind that the mapping can, and should, be defined on the remote service provider, so that if your hosted IDP is shared among multiple SPs, each can have its own mapping. For example, here the map was defined from the GUI as USERID=employeeNumber for one of the remote SPs.

<saml:AttributeStatement>
  <saml:Attribute Name="USERID">
    <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">121898</saml:AttributeValue>
    <saml:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="xs:string">007</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>

Once the Service Provider receives the assertion and has been configured to look for the attribute named USERID, it will grab the value and do whatever it needs to. One real-life example is the SalesForce.com CRM. OpenSSO 8 Express Build 8 includes a wizard that makes configuring federation with SalesForce.com easy and creates the map definition automatically.

One problem I ran into (not related to the product, phew...) was that no matter how many maps I defined, I could not see them in the assertion; as a matter of fact, I could not even see the AttributeStatement tag. It turned out that I had earlier changed the Authentication->Core setting from Profile=required to Profile=ignored. Reverting to Profile=required fixed the issue, and the assertion started to carry the attributes.

Monday Aug 17, 2009

OpenSSO Agent 3.0 and SSL Termination

While working with a customer whose J2EE 3.0 agent (deployed on Tomcat 6 in non-SSL mode) sat behind an SSL-enabled load balancer (i.e., SSL was terminated at the load balancer), we had to set these two properties in the centralized configuration of the Agent, under the "Advanced" tab, to make it work properly; otherwise the agent was redirecting everything to http and port 80, which the load balancer was blocking. Also note that we created the policies with a resource of "https" (as desired).

Sunday Feb 08, 2009

Identity Suites Essentials OpenSSO Tutorials

We have just made public an internal suite of tutorials at http://wikis.sun.com/display/ISE.

These tutorials include step by step installation and configuration information both for OpenSSO and Identity Manager. They also include common post installation use cases that we have seen deployed at various customers.

I intend to continue contributing additional tutorials/use cases to the OpenSSO Tutorial so keep watching it.

Wednesday Oct 15, 2008

How to Federate with Google Apps using OpenSSO as the Identity Provider

My colleague Pat Patterson had written a howto on federating with Google Apps using a much older build of OpenSSO. At that time he had to use a custom account mapper (i.e., write code) to map the NameID in the SAML v2 response. With the latest OpenSSO builds, a custom account mapper is no longer required, as account mapping for SAML v2 is supported out of the box through OpenSSO's administrative console, thanks to Heng-Ming Hsu (another colleague) who recently added this functionality to OpenSSO's DefaultIDPAccountMapper.

Here is a simple writeup on how to federate with Google Apps using OpenSSO. I did this in less than 10 minutes, but I already had Glassfish installed and opensso.war deployed. Note that this writeup uses non-SSL connections; in production it is recommended to use SSL-enabled web servers.


OpenSSO latest nightly build
* http://download.java.net/general/opensso/nightly/latest/opensso/opensso.zip (I used the Oct 6th build)

Any OpenSSO supported container
* http://opensso.dev.java.net/public/use/docs/fampdf/rn.pdf
* I used Glassfish V2R2 http://glassfish.dev.java.net

A Premier Account for Google Applications
* http://google.com/a


OpenSSO is the Identity Provider (IDP) and Google Apps is the Service Provider (SP). We will use SAML v2 as the Single Sign-On (SSO) protocol between the two and create a Circle Of Trust (COT) on the IDP.

Note: your browser will need the QuickTime(TM) plugin to view the videos.

1. Deploy opensso.war on your container

Download opensso.zip, extract opensso.war and deploy it on your container. For Glassfish this is very simple, via the "asadmin deploy" command (for the faint-hearted, the Glassfish administrative console can also be used to deploy the war file).

Carefully read the release notes to see if your container requires any pre-deployment tasks, such as modifying your container's server.policy file.

-bash-3.00# ./asadmin deploy --user admin --passwordfile /var/tmp/asadmin_passwd --port 4848 --enabled=true --contextroot /opensso /var/tmp/opensso/deployable-war/opensso.war
Command deploy executed successfully.
-bash-3.00# ./asadmin stop-domain
Domain idp stopped.
-bash-3.00# ./asadmin start-domain

Starting Domain idp, please wait.Log redirected to /var/opt/glassfish/domains/idp/logs/server.log.
Redirecting output to /var/opt/glassfish/domains/idp/logs/server.log
Domain domain1 is ready to receive client requests. Additional services are being started in background.
Domain [idp] is running [Sun Java System Application Server 9.1_02 (build b04-fcs)] with its configuration and logs at: [/var/opt/glassfish/domains].
Admin Console is available at [http://localhost:4848].
Use the same port [4848] for "asadmin" commands.
User web applications are available at these URLs:
[http://localhost:8080 https://localhost:8181 ].
Following web-contexts are available:
[/web1 /__wstx-services /opensso ].
Standard JMX Clients (like JConsole) can connect to JMXServiceURL:
[service:jmx:rmi:///jndi/rmi://utopia:8686/jmxrmi] for domain management purposes.
Domain listens on at least following ports for connections:
[8080 8181 4848 3700 3820 3920 8686 ].
Domain does not support application server clusters and other standalone instances.

2. Configure OpenSSO after deploying to your container

Run through the OpenSSO configuration wizard by pointing your browser at the container's URL and opensso context; in my case it is http://idp.unopass.net/opensso

* To download this video, click here

3. Configure IDP on OpenSSO via the Workflow Wizard

One of the defining features of OpenSSO is its workflow wizards, which help you create a hosted or remote IDP/SP very quickly without the need to create metadata files and import them manually.

* To download this video, click here

4. Configure SP on Google Apps

* To download this video, click here

5. Define Name Identifier (NameID) mapping

Google Apps requires that the userid be sent back in the SAML response. OpenSSO does not do this by default, but now provides a very simple way of mapping the NameID to any attribute in the user's profile (in the LDAP directory).

* To download this video, click here

6. SSO into Google Apps using your new OpenSSO IDP

Finally, test SSO by trying to access http://mail.google.com/a/<your domain>. You should NOT be prompted with the traditional Google login screen; rather, you should be redirected to the IDP's (OpenSSO) login page. Log into OpenSSO with the same userid, though the password can be different (hint: you need to create this user if it does not already exist in OpenSSO).

* To download this video, click here

After watching the videos, keep in mind that SSO demos are never impressive unless you show what is happening behind the scenes. One way to do so is to show the SAML v2 protocol (SOAP) messages.

The good news is that they can be grabbed from the OpenSSO debug logs. You first have to enable "message" level debugging from the OpenSSO Administrative Console under Configuration->Sites. You will then be able to see the AuthN requests and SAML assertions in the debug log called "Federation". For example:

AuthN Request

<?xml version="1.0" encoding="UTF-8"?>
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="glcmfhikbbhohichialilnnpjakbeljekmkhppkb" Version="2.0"
    IssueInstant="2008-10-14T00:57:14Z"
    ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
    ProviderName="google.com" IsPassive="false"
    AssertionConsumerServiceURL="https://www.google.com/a/unopass.net/acs">
  <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">google.com</saml:Issuer>
  <samlp:NameIDPolicy AllowCreate="true" Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" />
</samlp:AuthnRequest>


IDPSSOUtil.sendResponse: SAML Response content :
<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="s247893b2ec90665dfd5d9bd4a092f5e3a7194fef4"
    InResponseTo="hkcmljnccpheoobdofbjcngjbadmgcfhaapdbnni" Version="2.0"
    IssueInstant="2008-10-15T17:24:46Z"
    Destination="https://www.google.com/a/unopass.net/acs">
  <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">http://idp.unopass.net:80/opensso</saml:Issuer>
  <samlp:Status [...]
    <samlp:StatusCode xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" [...]
  </samlp:Status>
  <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
      ID="s295c56ccd7872209ae336b934d1eed5be52a8e6ec" IssueInstant="2008-10-15T17:24:46Z" [...]
    <saml:Issuer>http://idp.unopass.net:80/opensso</saml:Issuer>
    <Signature xmlns="http:[...]
      <CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
      <Reference URI="#s295c56ccd7872209ae336b934d1eed5be52a8e6ec">
        <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
        <Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
        <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
    [...]
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQua[...]
    [...]tion Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
      <saml:SubjectConfirmationData InResponseTo="hkcmljnccpheoobdofbjcngjbadmgcfhaapdbnni"
          NotOnOrAfter="2008-10-15T17:34:46Z" Recipient="https://www.google.com/a/unopass.[...]
    </saml:Subject>
    <saml:Conditions NotBefore="2008-10-15T17:14:46Z" NotOnOrAfter="2008[...]
    <saml:AuthnStatement AuthnInstant="2008-10-15T17:24:46Z" SessionIndex="s2bb816b5a88[...]

Thursday Sep 04, 2008

OpenSSO custom auth module for RSA Access Manager (formerly ClearTrust)

I have just checked code into the OpenSSO CVS repository for a custom auth module that validates an RSA ClearTrust SSOToken (CTSESSION). This was the result of being inspired by the much more feature-rich SiteMinder module.

Just like the SiteMinder auth module, the two obvious use cases of this module are:
  • Co-existence with OpenSSO while authenticating to ClearTrust
  • Migrating to OpenSSO in a ClearTrust environment

This module has been deployed in production, and a colleague of mine has polished the code a bit more (my Java skills are, ahem... let's say they require a jump start most of the time). I'll be checking in the polished code later next week.

For details on deploying this module refer to the README.txt file in the following directory


Wednesday Feb 28, 2007

OpenFM Quick Installation Notes

  • Grab openfm.war from opensso.dev.java.net
  • In WS 7.0 create a new listener called fm, or use a web container of your choice.
  • Deploy openfm.war to the container.
  • Hit the deployed URL with a web browser; it takes you to configurator.jsp.
  • Fill in the information, hit save... configuration failed.
  • Looked at the web server error log; apparently it was trying to create a directory /AccessManager.
  • Realized that the user under which the web server runs is webservd, whose home directory is "/" by default.
  • Changed the home directory of webservd to /home/webservd and restarted the container.
  • Hit the configurator again; this time, success!

Friday Aug 11, 2006

Run Java Enterprise System on Slackware Linux

These notes are for Java ES 3 (2005Q1).

Friday Sep 30, 2005

MIME Types Mess!

The reason for my first blog post is that I just spent a couple of hours trying to figure out why my Mozilla was opening *.sxw documents with StarOffice 7, whereas everywhere I looked it was associated with StarOffice 8. Since I use KDE, I was looking through all the places where KDE determines its MIME types and associated applications. Then I thought that since Mozilla is built on GTK+, perhaps it has an affinity for GNOME, so I started looking under ~/.gnome, /usr/share/mime-info, etc.

Finally, through an strace, I saw that ~/.mailcap was being used. Once I moved/deleted this file, Mozilla automatically picked up the MIME type defined in KDE. But wait: I later found out that it did not pick it up from KDE at all. It actually picked it up from /etc/mailcap, which was added by the StarOffice 8 installation. So, back to square one. Why can't there be a standard for this? Unless, as usual, I am ignorant and there actually is one. BTW, I also thought that using gtk-qt-engine would help, but it didn't. It did, however, help greatly in unifying my GTK+ look and feel with KDE's.

To me this is all a big mess. Every desktop application, depending on what toolkit it is based on (or not based on), seems to have its own affinity for where to look for these MIME types! Ah well, I have vented my frustration. I feel better now!


