Polishing and Tuning
By Jacob Kessler on Sep 09, 2008
It seems that the closer a software product gets to release, the more time is spent making little adjustments to it and the less time is spent adding new features. This is probably out of some sort of fear about breaking things right before release and having to stay up all night trying to fix the code.
So, since GlassFish is preparing for its next V3 release, focus has shifted to providing good defaults, protecting users from their own errors, integrating the dynamic pool with the rest of the code, and making sure that it all works correctly. Unfortunately, that does not make for interesting blogging material ("The default number of runtimes is now 1!").
The major change to the dynamic pool, then, was the introduction of much stricter thread safety, since race conditions are a bad thing. Some safety checks were also added (Is your hard maximum greater than or equal to your hard minimum? Is your starting number of runtimes between your hard minimum and hard maximum?), along with sensible reactions when end users make mistakes like that. The pool also became better-mannered in its logging, announcing at the beginning what bounds it is using and, if requested, logging every transaction from the pool, its thoughts about it, and the pool's current state. Hopefully, this will make the transition to an intelligent pool as painless (and hopefully unnoticeable) as possible.
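To give a flavor of what those safety checks look like, here is a minimal sketch in Java. The class and field names (`DynamicPoolConfig`, `hardMin`, `hardMax`, `initialRuntimes`) are illustrative, not GlassFish's actual code; the point is simply to validate the bounds, react sensibly instead of failing, and announce the bounds in the log as described above.

```java
import java.util.logging.Logger;

// Hypothetical sketch of bounds checking for a dynamic pool's configuration.
// Names are illustrative assumptions, not the real GlassFish implementation.
public class DynamicPoolConfig {
    private static final Logger LOG =
            Logger.getLogger(DynamicPoolConfig.class.getName());

    private final int hardMin;
    private final int hardMax;
    private final int initialRuntimes;

    public DynamicPoolConfig(int hardMin, int hardMax, int initialRuntimes) {
        // Sensible reaction to a user mistake: swap inverted bounds
        // rather than refusing to start.
        if (hardMax < hardMin) {
            LOG.warning("hard maximum " + hardMax + " is below hard minimum "
                    + hardMin + "; swapping them");
            int tmp = hardMin;
            hardMin = hardMax;
            hardMax = tmp;
        }
        // Clamp the starting number of runtimes into [hardMin, hardMax].
        if (initialRuntimes < hardMin || initialRuntimes > hardMax) {
            int clamped = Math.max(hardMin, Math.min(hardMax, initialRuntimes));
            LOG.warning("initial runtime count " + initialRuntimes
                    + " is outside [" + hardMin + ", " + hardMax
                    + "]; using " + clamped);
            initialRuntimes = clamped;
        }
        this.hardMin = hardMin;
        this.hardMax = hardMax;
        this.initialRuntimes = initialRuntimes;
        // Announce the bounds in use at startup, as the post describes.
        LOG.info("dynamic pool bounds: min=" + hardMin + ", max=" + hardMax
                + ", initial=" + initialRuntimes);
    }

    public int getHardMin() { return hardMin; }
    public int getHardMax() { return hardMax; }
    public int getInitialRuntimes() { return initialRuntimes; }
}
```

Making the fields `final` and doing all the fixing-up in the constructor keeps the validated configuration immutable, which also helps with the stricter thread safety mentioned above: immutable state can be shared across threads without locking.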
Speaking of intelligent pools, the other thing that happened this week was that the people developing Grizzly mentioned that they might be interested in using the dynamic pool within their project. It always feels good when someone else asks to use your code in their own project, and I always like it when things move towards more intelligent, dynamic systems as opposed to static ones.
With the approaching release, there has also been a new focus on bug hunting. While the one that I've gone and hunted down turned out not to be an actual code problem, I think that I can expect the next bit to focus on wading through code and figuring out why it's not working the way we want it to. Having code work the way you intended it to is always a good thing, since it reduces the number of surprises that the users need to deal with. If only I could write an AI that would run through the code and ask me questions like "Did you really mean to do that?" or "Have you thought about what will happen when..." or "That won't work, because...". Unfortunately, I think that Alan Turing would end up disagreeing with me if I tried to do it generally.