Friday Jan 23, 2009

Rollercoaster week & last post

Copied from what is now my only blog:

It was great at Sun - good luck to you all!


Wow - what a week this was... I have been through quite a few ups and downs, and that is not even mentioning the fact that the U.S. got a new administration.

Bad news first: not only did I have a mild form of food poisoning (not that there was anything 'mild' about it, but I hear it can be much worse), but I am also affected by the workforce reduction at Sun. Yes, that's right... after a meager 11+ years I am on to new adventures elsewhere. To all those that I have been working with: it was a very interesting and mostly fun ride. I really had a sense of being able to work on something big and accomplish a lot, and the energy and creativity at Sun were very inspiring. I met a lot of smart people there, and I hope that I will have the chance to continue working with them, one way or another.

Going forward, I see myself continuing on the themes that I have been dealing with for a while now: interoperability, web-centric (now cloud) computing, and the related identity and security aspects. There is a lot of work ahead, and I am quite determined to continue contributing. 

Since my age-old email at Sun will cease to work soon, you will now be able to reach me through an interim alias:[1]. I am also on Facebook and LinkedIn, so please feel free to connect with me:

With more time on my hands for now, I will also start spamming your RSS readers... just kidding - but I will write more here now, so stay tuned.

But now for the good news: yesterday my application to become a U.S. citizen was approved and - assuming all goes well - I will take my Oath in early March. Contrary to its horrible reputation, my experience with USCIS (formerly INS) was actually quite good: yes, they are bureaucratic (you should have seen the piles of files they had on me), but overall the process was quite efficient and fast: it will have taken less than 6 months from sending in the application to my Oath ceremony.

Interestingly enough, my becoming a U.S. citizen will also open new doors on the job market: as of March I will be able to get a security clearance, work on certain government contracts, etc. The timing could not have been better.


[1]Sorry for putting the "removethispart" subdomain in - obviously it is only after the @ sign.

Thursday Jan 22, 2009

Onto new adventures

After a little more than 11 years, I will be leaving Sun this spring. It was a very interesting time, and I wish the people and the company all the best. 

In case you are still reading this blog, please change your bookmarks to point to my personal blog:

If you would like to get in touch with me, please feel free to connect with me on LinkedIn or Facebook:

Tuesday Sep 02, 2008

Google's Path to World Domination

No, I am not talking about Google Chrome (yet). But it is related: if you look at

it seems that Germany has already conquered Denmark, Benelux, Switzerland, and Austria-Hungary. It could also be the EUSSR with its capital in Brussels...

Or maybe this is a completely new country called "Googleland", where every citizen deposits all their data in a safe datacenter, identified by a unique id. "Information Self-Determination" is a basic human right, and any data merchant will get shot on sight.

  The only exception is the operator of the datacenter (that would be Google, being compensated for their services by an unalienable right to use any of the data for targeted advertising campaigns), or any data thief that offers information on citizens suspected of being involved in terrorism, sedition, or tax evasion.

Tuesday Aug 12, 2008

Freedom of Press? In Massachusetts? Nah ...

  This is just another installment of how the freedom of expression and scientific research is being sacrificed on the altar of "public safety" and "property rights". From the CNET article:

"A federal judge on Saturday granted the Massachusetts transit authority's request for an injunction preventing three MIT students from giving a presentation about hacking smartcards used in the Boston subway system."

To summarize this incident: a couple of students find a giant security hole in a publicly financed payment system. They inform the authorities and involved parties to give them a chance to work on the situation. The faceless bureaucrats respond the way any large (and thus inefficient) organization responds: with ignorance and disbelief. The students follow the time-honored tradition of publicizing their results, and suddenly the gears spring into action: federal courts, the FBI, and preliminary injunctions appear. The official reason is "public safety", but everyone involved knows that this is just a very lame excuse. In truth, it is the desire of an inadequate yet powerful state-sponsored enterprise to hide its incompetence and silence its "subjects".

That this can even be done is due to the availability of unconstitutional (at least in spirit) laws like the DMCA and similar utterly meritless legislation. Coming from Europe, I am used to the frequent oppression of freedoms, even today. So far, the U.S. has been setting an example of how e.g. the freedom of expression should be interpreted. This gag order by a federal judge in Boston (sic!) is an untenable limitation of this right. It goes against some of the most fundamental principles enshrined in the Constitution and the Bill of Rights.

  For more information, check the EFF website


Monday Aug 11, 2008

Giving credit where credit is due ...

I just laughed out loud:

Go King Homer I. of Spain!


Securing OpenID@Work - Again

Last year we announced an experiment at Sun: in order to gather more information about the operational characteristics of "user-centric" identity technologies, we decided to roll out an OpenID provider for Sun employees. This OpenID provider was intended to be used by Sun employees for personal use at the various OpenID-enabled sites that have been popping up.

  This experiment involved various parts of the company, including field people, products folks, the security team, and our Chief Privacy Officer. We negotiated a number of requirements for our experiment: employee privacy must be maintained at all times, the system must not interfere with any other Sun authentication or business system, etc. All this was quite achievable and we passed the--albeit lax, since it was an experiment--security and privacy reviews, the focus of which was the protection of Sun employees and property.

The weakness in Debian-generated certificates and the recent DNS cache poisoning attack vector resulted in a triple whammy: weak certificates, broken DNS, weak protocol. After Ben's report last week, we revisited the current design of the service and came up with a few recommendations. Mark Wilcox notes that my list could be hard to follow, especially for non-technical users. To some extent[1] I do agree with him, so we decided to take additional steps on our end to improve security:

The very core of these changes lies in the idea that we are introducing HTTPS-based OpenIDs for our users. In OpenID 1 and 2, an RP normalizes any identifier without a scheme prefix into an unsecured HTTP-based identifier. Only prefixing the OpenID identifier with the https:// scheme will result in discovery over a TLS-secured transport channel. This looks a lot "geekier" than the somewhat more appealing "naked" OpenID identifiers, but as a result of this change, the lookup will now be handled completely over server-authenticated channels.
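To make the normalization rule above concrete, here is a minimal sketch (my own illustration, not code from our service) of how an RP might turn user input into a claimed identifier; it condenses the OpenID 2.0 rules and omits XRIs, fragments, and trailing-slash handling. Note how a bare identifier silently defaults to the insecure http scheme:

```java
// Simplified OpenID identifier normalization (illustrative only).
public class OpenIdNormalizer {

    public static String normalize(String userInput) {
        String id = userInput.trim();
        // A bare identifier defaults to the *insecure* http scheme;
        // only an explicit https:// prefix yields TLS-protected discovery.
        if (!id.startsWith("http://") && !id.startsWith("https://")) {
            id = "http://" + id;
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(normalize("example.com/user"));         // http://example.com/user
        System.out.println(normalize("https://example.com/user")); // stays https
    }
}
```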

In order to make this approach useful, we would need the cooperation of OpenID RPs: in the current specs, the HTTP and HTTPS forms of an identifier are two separate entities, which--in my opinion--makes no sense. If RPs started recognizing these two identifiers as the same entity, it would improve the security of the OpenID protocol in a quite significant way, since users could easily migrate from the broken, insecure HTTP discovery protocol over to the somewhat more secure HTTPS transport.

Obviously, this cannot address the key-randomness weakness, but then ... only time can. Meanwhile we have to rely on CRLs and OCSP checking for certificate revocation.

UPDATE: The approach outlined above might be criticized for equating two URIs that are not equivalent (https://... and http://...). I appreciate this from a principled point of view, but extreme times require extreme measures ;-).

A reasonable way to address the potential security implications for current https://... identifiers would be for the RP to perform a one-time security upgrade: assuming that the RP recognizes a particular claimed_id, whenever there is a login with the same identifier over HTTPS, the RP can 'upgrade' the account to an HTTPS-only account.
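As a sketch of this one-time upgrade policy (hypothetical code illustrating the idea, not any shipping RP implementation):

```java
import java.util.HashMap;
import java.util.Map;

// RP-side account store with the one-time HTTPS upgrade policy: once a
// user authenticates with the HTTPS form of a claimed_id, the account is
// pinned to HTTPS, and the plain-HTTP variant is rejected from then on.
public class RpAccountStore {

    private final Map<String, Boolean> httpsOnly = new HashMap<String, Boolean>();

    // Strip the scheme so http://x and https://x map to the same account.
    private static String accountKey(String claimedId) {
        return claimedId.replaceFirst("^https?://", "");
    }

    /** Returns true if the login is acceptable under the upgrade policy. */
    public boolean login(String claimedId) {
        String key = accountKey(claimedId);
        if (claimedId.startsWith("https://")) {
            httpsOnly.put(key, Boolean.TRUE); // one-time upgrade
            return true;
        }
        Boolean pinned = httpsOnly.get(key);
        // Reject plain-HTTP logins for accounts that were already upgraded.
        return pinned == null || !pinned.booleanValue();
    }
}
```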

On the OP side, any account for https://x.y.z should trigger a complete block of any http://x.y.z ids.

  Thanks to John Bradley for some stimulating discussions on these issues.


[1] Mark mentions OAAM's strong authentication and risk-based authentication technologies as potential solutions for OpenID's weaknesses. Maybe it is because I am not familiar with this product, but I cannot see how OAAM can help with the weaknesses that occur through using HTTP (as opposed to HTTPS) for discovery. Likewise, socially engineered attacks can be effective in circumventing stronger authentication mechanisms (such as pictures or virtual keyboards) for less technology-sophisticated users: instead of their image, the rogue OP displays an error message along the lines of "Sorry, your personal image is currently unavailable due to maintenance. Please use standard authentication in the meantime." This might not work with all users, but I know enough folks who would log in with the "standard authentication" scheme anyway.

Thursday Aug 07, 2008

Some security advice for our OpenID users

With the recent news about the DNS cache vulnerability, users are more exposed than ever to potential security attacks, including phishing or pharming attacks, that apply to OpenID as well as other network systems. For example, the ability to redirect DNS requests through cache poisoning opens the door to a significant OpenID security risk: if the OpenID provider is not employing TLS with server-side authentication — preferably mutual authentication — any affected DNS server could redirect the client to a pharming site that looks like the user's real OP, but is not. If OpenID required transport and authentication over HTTPS, this would be less of a problem.

In order to limit the risk, we are advising the users of our OpenID@Work provider to make sure that they follow these guidelines, which might be useful for others, as well:
  • Make sure that your systems are fully patched.
  • Verify that the DNS server you use (usually provided by your ISP) is patched and not subject to DNS cache poisoning. You can verify this at Dan Kaminsky's web site. If you find that your ISP has not done their job, complain. Loudly.
  • Use certificate revocation lists. These lists contain the serial numbers of revoked certificates, and they can be easily consumed by most modern browsers. For the SunPKI list, just point your browser to and make sure that your browser refreshes it regularly. Other companies have their own CRLs (e.g. Verisign's are here).
  • Be extra careful when accessing your authentication web site: can easily be mistaken for or
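For completeness, here is how a Java client application (as opposed to a browser) can turn on the same revocation checks; the two property names are standard JDK security settings, not anything specific to our service:

```java
import java.security.Security;

// Enable CRL and OCSP revocation checking for PKIX certificate path
// validation in a Java client (sketch; a real application would set
// these before opening any TLS connections).
public class RevocationConfig {

    public static void enable() {
        // Fetch CRLs from the distribution points embedded in certificates...
        System.setProperty("com.sun.security.enableCRLDP", "true");
        // ...and consult OCSP responders where the certificate names one.
        Security.setProperty("ocsp.enable", "true");
    }

    public static void main(String[] args) {
        enable();
        System.out.println("Revocation checking enabled");
    }
}
```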

In addition, we recommend that Sun employees use the corporate VPN for all sensitive corporate business, and — obviously — not use the experimental OpenID@Work authentication service, or any OpenID authentication service, for anything of value.

UPDATE: The Sun PKI CRL is also here, which is the official distribution point for Sun/Verisign-issued certificates. In addition, these certificates support OCSP verification at

Ben Laurie and Robin Wilton also published information relating to the weaknesses of OpenID.


Thursday Jul 24, 2008

A patently good idea

The U.S. Patent and Trademark Office (USPTO) is considering invalidating many (if not most) software patents and significantly restricting the issuance of new process patents. No doubt, intellectual property deserves decent protection, and I think that this move by the USPTO will in fact result in better protection of property: copyright law provides ample protection against IPR theft while not getting in the way of real innovation.

To draw a technical comparison, process patent law protects the API, while copyright law protects the implementation. Although it takes a lot of thought to come up with a good API, it should be the implementation that is at the heart of the competition, so as not to harm the end user.

In this sense, the new direction of the USPTO will benefit the end users (consumers as well as application developers) by allowing the concrete implementations of ideas to compete while keeping interoperability at the idea level intact. In the end, the entire market, including the vendors, will benefit from a significantly lower barrier to interoperability.


Friday Jul 18, 2008

Using Abdera and the Jersey Client API

Marc recently published a short tutorial on how to use Apache Abdera with our reference implementation of JAX-RS, Jersey. His code is server-side, i.e. it explains using Jersey and Abdera for creating RESTful web services with Atom payloads[1]. In this article I will give an example of how the Jersey client API can be used to consume such a service with relative ease.

It is hopefully known that Jersey contains a very simple, yet effective HTTP client API. Core to it is the heavy use of the builder pattern for creating and configuring requests. For our example, I start with creating the client:

  Client c = Client.create();

  WebResource r = c.resource(new URI(someLocation));

We can now get the InputStream from the WebResource to read the Atom feed into an Abdera Feed:

  // The cast is unnecessary: get() already returns the requested type.
  InputStream is = r.get(InputStream.class);

  Document<Feed> doc = Abdera.getNewParser().parse(is);
  Feed feed = doc.getRoot();

  for (Entry entry : feed.getEntries()) {
      System.out.println(entry.getTitle());
  }


Now let's say we want to post an entry to the resource in Marc's article. In this case we would also have to use his AbderaSupport class, which implements the proper MessageBodyReader and MessageBodyWriter interfaces for the Abdera objects. On the server side providing these interfaces is enough, but on the client side we need to configure the Jersey client. The following code does this:

  public static class AbderaClientConfig extends DefaultClientConfig {

      public Set<Class<?>> getProviderClasses() {
          Set<Class<?>> classes = new HashSet<Class<?>>();
          // register Marc's AbderaSupport provider so the client can
          // (de)serialize Abdera objects
          classes.add(AbderaSupport.class);
          return classes;
      }
  }

Thus completing our sample app: 
  ClientConfig cf = new AbderaClientConfig();

  Client c = Client.create(cf);

  WebResource r = c.resource(new URI(someLocation));
  Entry entry = AbderaSupport.getAbdera().newEntry(); 
  ClientResponse cr = r.type(MediaType.APPLICATION_XML).put(ClientResponse.class, entry);  


[1] Tim pointed out that this style should properly be called "AtomPub", and not APP, AtomPub/Sub or similar.

VRM Workshop 2008

This week's VRM Workshop at the Berkman Center in Cambridge, MA was quite interesting. It helped me quite a bit to sort out how Identity Management and VRM intersect, but also differ in some respects. To put it in a nutshell, I believe that the biggest difference between the two is that they are essentially two different ways of looking at the same problem. "Traditional" identity management has been focusing largely on the various subjects (and objects), their characterization, and how they can be mapped to digital artifacts. VRM seems to be taking a more procedural approach by focusing more on the processes and interactions of these subjects, objects, and digital artifacts. In this sense, VRM and identity management are very much complementary.

You can find more information about Doc Searls' ideas about VRM on the Berkman wiki and on his blog.


OpenSocial and Liberty

OpenSocial and Liberty People Services are really tackling the same problem - only from two opposing sides. While the LAP PS has established a solid foundation for secure identity management, OpenSocial has set out to define an API for allowing social networking (but also other) software from different sources to be run in one (or more than one) container. Ideally, these containers abstract away the underlying platform and thus enable application portability. This becomes quite useful, since Facebook, MySpace, Orkut, etc. have extremely similar types of applications (friends/relationships, photo sharing, contact management, and many more). Having these applications be portable across different containers allows application developers to focus more on added functionality and less on platform plumbing.

Liberty - on the other hand - has created the necessary infrastructure to enable individuals to set preferences and share information about themselves with others in a secure and privacy-preserving way. The protocols used in ID-WSF and People Service (and not APIs) can enable containers to communicate with each other based on users' policies and requests.

tags: libertyalliance, opensocial

Monday Jun 30, 2008

High Potential?

The current economic situation is not exactly ideal: amongst many significant issues, one of the most concrete and pressing problems of today is the highly volatile energy market. Many current problems in the world (such as clean water, food, housing) could be solved almost completely, provided there is sufficient energy at hand[1].

Electric energy generation has seen a variety of approaches: some of them are quite childish, while others lack public acceptance. Ultimately, only a sound mix of nuclear fusion and a select number of reasonable renewables such as solar or geothermal energy sources (where available) will make sense.

However, electricity is not particularly easy to store, making it far less attractive for any type of transport, especially individual transport. No technology that has been available so far has created a reasonable alternative to fossil hydrocarbon fuels: they have a sufficient energy density, are easy to handle, and the technology is very well understood. Alternatives such as canola-based diesel or ethanol-enriched gasoline are mostly carbon-ineffective ways of wasting money and feeding lobbies.

Now, a new genetics-based approach is making the rounds in various news outlets: LS9 is a South San Francisco company that has succeeded in creating microorganisms that can produce hydrocarbons from renewable sugar sources. In other words, it will soon be possible to replace the back-yard compost heap with a small LS9 reactor that produces gasoline instead of dirt.

It will be interesting to see if this technology can actually scale to a level where a large (and energy-hungry) economy such as the U.S., China, or the E.U. can rely on this renewable fuel for a significant portion of its needs. But even if this approach is not fit for mass energy production, it still guarantees the availability of hydrocarbon-based products (i.e. plastics) in the post-fossil age.

[1] Obviously, in today's world there is also in many cases a lack of political will, but that is - at least to some extent - again a result of scarce energy.

Friday Jun 27, 2008

Review Friday: Sony XDR-F1HD HD-Radio Component Tuner

Today, I would like to take a peek at a technology that has been living in the shadows for some time. While HDTV and digital over-the-air broadcasts have been getting some attention lately (especially with the February 17, 2009 deadline looming), digital radio broadcasts have not been getting any significant media attention in the U.S.A.
One of the reasons for the lack of attention might be that the digital radio standard chosen by the FCC has been met with some serious criticism. The two criticisms most prominent in my mind are sound quality and proprietariness.
Nevertheless, since I listen to a lot of radio during the day, I have decided to give this broadcast system a try. For receiving, I chose the Sony XDR-F1HD component tuner, which allows easy integration with a standard stereo system. Connections are made simply through RCA-style component wires. The system comes with AM and FM antenna cables, but standard connections (e.g. to your home TV antenna) are available. The unit is very simple to configure and has - in addition to the radio program information - a large clock. The display is illuminated.
Reception of FM HD radio stations is - overall - pretty good, even under adverse conditions. My antenna is set up inside the Sun office, which is a steel-reinforced concrete building with excellent radio shielding qualities (sigh!). In addition, the indoor antenna cable is close to two CRT monitors and a variety of transformers. Most strong stations (such as WGBH) are readily available with little or no reception problems. However, AM reception is rather spotty, and so far I have only been able to receive WBZ when holding the antenna at 83 degrees North-North-West about 3'7" above my desk.
The sound quality is acceptable most of the time. The radio signal codec is a proprietary version of AAC, encoded at 36 kbit/sec. This is far from CD quality, but it does remove the noise floor of the FM signal to a large extent.
Overall, I would probably recommend this setup, as long as the broadcasting community is dedicated to continuing to use this system.

TechEd Online Panel on Web Services Interoperability

During TechEd 2008, I participated in a Panel discussion on Web Services Interoperability. Microsoft just put up the tape on their TechNet Library site. They also have a WMV video feed, and a MP3 audio-only feed.

Thursday Jun 26, 2008

Along those lines ...

In my earlier article today I pointed out a rather significant security blunder in Germany, where a number of municipal IT departments failed to secure their systems. This led to the exposure of at least 500,000 personal data records to the internet - so far I have not heard that any affected person was informed about their involuntary exposure to identity thieves.

In this context it seems a little untimely to publicly announce a new electronic signature program that will start in 2012. Under this program, anyone claiming any benefits from any public source (unemployment, social security, etc.) will be required to use a smart card with a personal key. In addition, employers will have to submit all salary and compensation information to a federal, centralized database that will be fully accessible to all participating government agencies on the federal, state, and local level. Contained in this database are obviously all employer records, but - in all likelihood - also all data records of current or past applications for government benefits. Employees are expected to pay for these new services themselves, with private sector financial institutions or government agencies playing the role of the trust broker.

This program is sold to the public in two ways: on the one hand, it is supposed to save the employers and the government agencies a lot of money by streamlining reporting and decision making processes. On the other hand, in its centralized form it is expected to help limit welfare fraud, which is quite common in Germany. 

In and by itself, such a database seems harmless enough: it has some tangible benefits, including significant savings for the private and public sector. However, this effort does not stand by itself. Over the past couple of years, privacy from prying government eyes has been under the most severe attack imaginable: A comprehensive tax ID that is coma



