JVM Language Summit
By Jacob Kessler on Sep 30, 2008
Last week, I attended the JVM Language Summit, which was quite interesting even if I didn't understand all of it, because I don't have my full programming language nerd credentials... yet.
It was essentially three days of talks about the Java Virtual Machine, the improvements that are being made to it, and the different kinds of languages that people are trying to run on it, from the "quick and easy" languages like Ruby and Python to intensely mathematical languages like Fortress.
Learning about the internals of the JVM and the improvements that have been (and are being) made to it, from the new InvokeDynamic instruction to the runtime JIT optimizations, was extremely interesting. In particular, learning a bit about the kinds of runtime code analysis the compiler can do was interesting as an AI/theorem-proving problem, since that's essentially what it is: deciding when something is worth running through the JIT compiler, and proving that, for example, this bit of code will never be reached and can simply be removed. It also adds to my continued amazement at the amount of help we've managed to program for ourselves to make programming easier, from optimizing compilers to IDEs to frameworks like Rails. Gone are the days (I think) when the super-coder who writes everything in hand-optimized assembly could produce code faster than a compiler's. Gone are the days when you had to learn how to implement your own heapsort: it's a library function now. The tools get better, the programming gets easier, better tools get written, and I can only see that cycle continuing.
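To make the "proving code unreachable" idea concrete, here's a minimal sketch of my own (not an example from the talks): a branch guarded by a condition the optimizer can prove false. Even javac handles the constant-false case, and the JIT does the same kind of elimination at runtime using profiling data, speculating that a branch it has never seen taken is dead. The class and method names here are just illustrative.

```java
// Sketch of a branch that static/runtime analysis can prove dead.
public class DeadCode {
    // A compile-time constant: the condition below is provably false,
    // so the whole guarded block is unreachable and can be removed.
    static final boolean DEBUG = false;

    static int compute(int x) {
        if (DEBUG) {
            // Never reached: the optimizer can eliminate this entirely,
            // so the logging costs nothing when DEBUG is off.
            System.out.println("computing " + x);
        }
        return x * 2;
    }

    public static void main(String[] args) {
        System.out.println(compute(21)); // prints 42
    }
}
```

The JIT goes further than this: it can speculatively remove branches that are merely *unlikely* based on observed behavior, and deoptimize back to the interpreter if the assumption ever turns out wrong.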
The other side of the improvements to the JVM was what people are doing with it, in terms of running languages other than Java on it. I know that, theoretically, any Turing-complete language (which the JVM implements) should be able to represent a program that will do anything else, including computing the output of any other program, regardless of language. But the idea that a) people are actually doing that and b) combined with the extra optimizations that can be done, it actually ends up working at around the same speed as non-JVM implementations, is really cool. It's one more step away from the intricacies of the hardware and onto a platform that handles all of that stuff for you, especially since (as was mentioned at the conference), when Intel introduces a new chip with a new instruction on it, the JVM can adapt itself and all of the old programs will be able to take advantage of it, something that doesn't happen with programs that aren't running on a VM. That, in turn, frees the chip designers to work on those kinds of instructions, and progress accelerates. Progress is good.