For the record, I posted my talk about Java 8.
[Update 4/06] The videos are already coming out from Channel 9, including my talk on Java 8 (mostly about closures) and an update from Jeroen Frijters on hosting the JVM on .NET.
I recommend Martin Odersky’s keynote about new facilities for expression reflection in Scala. As befits a typeful language, the notation for what Lisp-ers think of as “backquote-comma” and F#-ers call code quotation is based on the type system. The amazing part (at least to me) is that there is no modification to the language parser: no special pseudo-operators or new brackets. I guess this is a generalization of C#’s special treatment of anonymous lambda expressions, added for LINQ.
Here is an example based on the talk, showing a macro definition and its use of a typeful expression template:
def assert(cond: Boolean, msg: Any) = macro Asserts.assertImpl

object Asserts {
  def raise(msg: Any) = throw new AssertionError(msg)
  def assertImpl(c: Context)
                (cond: c.Expr[Boolean], msg: c.Expr[Any]): c.Expr[Unit] =
    c.reify( if (!cond.eval) raise(msg.eval) )
}
"if (!$cond) raise($msg)"
or a Lisp backquote `(if (not ,cond) (raise ,msg))
, but the Scala version is hygenically parsed, scoped, and typed. Note also the crucial use of path-dependent types (c.Expr
) to allow compilers freedom to switch up their representations.Speaking of C#, Mads Torgersen talked about new features in C# for asynchronous programming, the async and await keywords which go with the Task
type pattern (which includes the varying notions of promise and future). You use async/await in nested pairs to shift code styles between coroutined (async) and blocking (awaited). The source notation is the same, but coroutined code is compiled using state machines. It is similar (but internally distinct) from the older generator/yield notation. To me it looks like a fruitful instance of the backquote-comma notational pattern.
Coroutines are a good way to break up small computations across multiple cores, so it is not surprising that they are a hot topic. Robert Griesemer’s talk on Go was a tour de force of iterative interactive development, in which he started with a serialized Mandelbrot image server and added 25% performance (on two cores: 200 ms to 160 ms) by small code changes to partition the request into coroutined tasks. Each task generated a line of the result image, so the adjustment was simple and natural. At a cost to interoperability, a segmented stack design allows tasks to be scheduled cheaply; fresh stacks start at 4 kilobytes. The use of CSP and a built-in channel type appear to guide the programmer around pitfalls associated with concurrent access to data structures. This is good, since Go data structures are low-level and C-like, allowing many potential race conditions.
Go includes a structural (non-nominal) interface type system which makes it simple to connect disparate data structures; the trick is that an interface-bearing data structure is accompanied by a locally created vtable. Such fine-grained interfacing reminds me of an oldie-but-goodie, the Russell programming language.
During Q/A I noted that their language design includes the “DOTIMES bug”, and their demo code required a non-obvious workaround to fix it. This was answered in the usual circular way, to the effect that the language spec implies the broken behavior, so users just have to live with it. (IMO, it is like a small land mine in the living room.) Happily for other programmers, C# is fixing the problem, and Java never had the problem, because of the final variable capture rule. Really, language designers, what is so hard about defining that each loop iteration gets a distinct binding of the loop variable? Or at least emitting a diagnostic when a loop variable gets captured?
(By the way, the Java community is interested in coroutines also, and Lukas Stadler has built a prototype for us to experiment with. It seems to me that there is a sweet spot in there somewhere, with user-visible async evaluation modes and JVM-mediated transforms, that can get us to a similar place. As a bonus, I would hope that the evaluation modes would also scale down to generators and up to heterogeneous processing arrays; is that too much to ask? Perhaps a Scala-like DSL facility is the right Archimedean lever to solve these problems.)
Walter Bright and Andrei Alexandrescu presented cool features of the D language. A key problem in managing basic blocks is reliably pairing setup and cleanup actions, such as opening a file and closing it. C++ solves this with stack objects equipped with destructors, a pattern which is now called RAII (resource acquisition is initialization). As of Java 7, Java finally (as it were) has a similar mechanism, although it must be requested via a special syntax (try-with-resources). Similarly, D has a special syntax (scope(exit)) for declaring cleanup statements inline, immediately next to the associated setups. This is intriguingly similar to Go's defer keyword, except that Go defers dynamically pile up on the enclosing call frame, while the D construct is strictly lexical. Also, D has two extra flavors of cleanup (scope(failure) and scope(success)), which apply only to abnormal or normal exits. The great benefit of such things is being able to bundle setups and cleanups adjacently, without many additional layers of block nesting. D also ensures pointer safety using 2-word fat pointers. (This reminded me of Sam Kendall’s early work with Bounds Check C, which used 3-word fat pointers. D is a good venue for such ideas.)
In keeping with the keynote theme of quasi-reflective computation at compile time, D has a macro facility introduced with the (confusingly reused) keyword mixin, together with “CTFE” (compile-time function execution). Essentially, D code is executed by the compiler to extrude more D code (as flat text), which the compiler then ingests. The coolest part of all this is the pure attribute in the D type system, which more or less reliably marks functions which are safe to execute at compile time. There is also an immutable attribute for marking data which is safe for global and compile-time consumption.
Here are some other tidbits gleaned from my notes:
If you mix a Rational with a Complex[Int], do you get a Complex[Rational], and if so, why? What happens when you replace Complex and Rational by types defined by non-communicating users? Apparently their framework can handle such things.