Notes on an Architecture for Dynamic Invocation
By John.Rose-Oracle on Dec 13, 2007
In a previous post I enumerated the various parts that go into a call site in the JVM. In order to support the program structure of other languages, especially dynamic languages, the JVM must extend the present schemes of method definition, invocation, and linkage. We don’t want to create completely new mechanisms, since we want to reuse the highly developed infrastructure of mature JVM implementations.
The JSR 292 Expert Group, which I chair, is working on a specification to address this problem in detail, via carefully chosen changes to the bytecode architecture. The first EDR (early draft review) will be available in a few weeks. Meanwhile, I would like to point out some design requirements for method calling in a multi-language world. The key new requirements are dynamic relinking, programmable linkage, programmable guards, descriptor polymorphism, and method handles.
- dynamic relinking allows dynamic linkage decisions to be revoked
- programmable linkage allows dynamic linkage decisions to be made reflectively by “bootstrap methods”
- programmable guards allow call sites to select among overloaded methods according to language-specific rules
- descriptor polymorphism allows call sites and adapters to be managed and connected generically
- method handles let bootstrap code and call sites refer directly to bytecoded methods
In Java, as with many languages and systems, it is convenient for each call site to be dynamically linked. That is, a call’s meaning is not fully determined until it is first executed. At that point, various symbols might be looked up and matched together, and as a result the JVM will decide where the call gets directed, and how the type of the receiver might affect the selection of that method.
In many languages, a call site’s meaning is always partially provisional, in that future events can change the effect that a call will have. Although Java does not allow the redefinition of classes and methods, some languages do, and this means a call site might occasionally be “relinked”, so that a method previously called by it is no longer a target. Many JVMs already do something like this in response to class loading, if a previously monomorphic method becomes polymorphic. (Hotspot has an operation called “deoptimization” which can have this effect.) The new requirement is that call sites can somehow be edited under the control of language-specific runtimes.
A simple way to satisfy this requirement is to allow call sites to be cleared, thus forcing them to be relinked, but with a new chance to take into account changes in program structure (such as method loading or unloading). The very simplest would be to allow all call sites everywhere to be cleared by some sort of privileged operation. A very complex way would be to allow call sites to be reflected and edited in detail, thereby allowing the dynamic runtime to look over the JVM’s shoulder at all times.
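A minimal sketch of the simplest scheme in plain Java (class and method names are hypothetical; an ordinary object stands in for a real JVM call site): clearing the site forces the next call to run linkage again.

```java
import java.util.function.Function;

// Hypothetical sketch: a call site that caches its linked target and can be
// cleared, revoking the linkage decision until the next call relinks it.
public class ClearableCallSite {
    private volatile Function<Object, Object> target;   // cached target, or null if cleared
    private final Function<Object, Object> linker;      // stand-in for language-specific linkage
    int linkCount = 0;                                  // how many times linkage has run

    ClearableCallSite(Function<Object, Object> linker) {
        this.linker = linker;
    }

    Object call(Object arg) {
        Function<Object, Object> t = target;
        if (t == null) {                // site is unlinked: run linkage (again)
            linkCount++;
            t = linker;
            target = t;
        }
        return t.apply(arg);
    }

    void clear() { target = null; }     // the privileged "clear" operation

    public static void main(String[] args) {
        ClearableCallSite site = new ClearableCallSite(x -> "linked:" + x);
        site.call("a");                     // first call links the site
        site.call("b");                     // cached target is reused
        site.clear();                       // e.g. after a method was redefined
        site.call("c");                     // forces relinking
        System.out.println(site.linkCount); // prints 2
    }
}
```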
The JVM performs dynamic linkage according to fixed rules. These rules are designed with Java in mind. For example, the symbolic method requested by the call site must exactly match the resolved method identified after linkage, and the actual method eventually invoked. The method names and descriptors must exactly match among all the methods. Only the classes (or interfaces) of these methods are allowed to vary, along type hierarchy relations. The set of methods that can be overloaded on a call site (at run-time) is limited to actual methods defined in subtypes of the resolved class (or interface).
However, it is common, especially in dynamic languages, for different callers to call the same method in different ways. This means that not all call sites (in a multi-language JVM) will match the actual methods called. There will be mismatches in descriptors and in method names. Also, the actual method eventually called may or may not have a type hierarchy relationship to the classes (or interfaces) mentioned in the call’s symbolic type or symbolic descriptor.
It is quite possible that a call which is apparently directed at a receiver argument’s class is first routed to a generic adapter method. Adapter methods perform some sort of language-specific bookkeeping or argument transformation, before (or after) calling the ultimate target method. Or perhaps the call jumps into a language-specific interpreter which emulates the intended method, which has yet to be committed to bytecodes.
The upshot of all this is that dynamic linkage must become programmable. Parallel to cases where the JVM runs hard-coded algorithms to link Java call sites, it must also be possible for the JVM to defer to a language-specific bootstrap method to decide how to link a language-specific call site. This bootstrap method will be given information about the call site and actual arguments, and decide how to route the call.
It must then return to the JVM an indication of how to handle similar calls in the future. At this point, the call site will be linked. Or, I should say, it will be more linked than before, because it is often possible that fundamentally new types will occur later at the same call site, as the program executes further. Such new types will trigger new calls to the bootstrap method.
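The link-on-first-sight-of-a-new-type behavior can be sketched like this (hypothetical names throughout; a HashMap keyed by class stands in for whatever caching a real JVM would do):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: each time an unseen argument class reaches the site,
// the bootstrap method is consulted and its answer is cached, so the site
// becomes "more linked than before" after every new type.
public class BootstrapSite {
    private final Map<Class<?>, Function<Object, String>> linked = new HashMap<>();
    int bootstrapCalls = 0;   // counts trips to the bootstrap method

    // Stand-in for the language-specific bootstrap method: given a new type,
    // decide how to route the call and return a target for the site to cache.
    private Function<Object, String> bootstrap(Class<?> c) {
        bootstrapCalls++;
        if (c == Integer.class) return x -> "int:" + x;
        return x -> "obj:" + x;
    }

    String call(Object arg) {
        Function<Object, String> t = linked.get(arg.getClass());
        if (t == null) {                        // new type at this site: link it
            t = bootstrap(arg.getClass());
            linked.put(arg.getClass(), t);
        }
        return t.apply(arg);
    }

    public static void main(String[] args) {
        BootstrapSite site = new BootstrapSite();
        site.call(1);
        site.call(2);    // Integer is already linked; bootstrap not called again
        site.call("s");  // a fundamentally new type triggers a new bootstrap call
        System.out.println(site.bootstrapCalls); // prints 2
    }
}
```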
This leads us to the next problem, of determining applicability. After a bootstrap method has linked an actual method to a call site, it may still be necessary to check that this method is still applicable to each new set of actual arguments, or whether the linkage process is required to begin again.
(In Java, after a call site has been linked, there is no second chance for relinking if the method is not applicable to an unexpected argument. An error must be raised.)
In addition, if a call site is overloaded (as some must be, in most dynamic languages), there may be several actual methods at a call site, each applicable to a different set of arguments.
The crucial idea here is applicability, which is a condition that is checked on every call, and is expected to be usually true for at least one previously linked method.
It follows that we need an idea of a guard, which is a bit of logic which classifies the arguments of a call site and determines whether an actual method should be called. Or, which of several methods should be called. In Java call sites, guarding is performed by examining the receiver object’s class, and picking the most specific matching method defined within that class.
Here are some classification schemes used to dynamically classify arguments:
- symbolic type specifiers may help choose among alternate methods (as in Lisp)
- structure-template matching is used on tagged tree-like structures (e.g., Haskell)
- unification matching with backtracking determines applicability in Prolog
- equality checks are suitable for a dynamic switch (as shown with PyPy)
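Several of these schemes reduce to a guard predicate attached to each candidate method; for instance, class checks and equality checks can both be written that way. A hypothetical sketch (not a proposed API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical sketch: a call site holding (guard, method) pairs; each call
// runs the guards in order and invokes the first applicable method.
public class GuardedSite {
    static final class Entry {
        final Predicate<Object> guard;
        final Function<Object, String> method;
        Entry(Predicate<Object> g, Function<Object, String> m) { guard = g; method = m; }
    }

    private final List<Entry> entries = new ArrayList<>();

    void add(Predicate<Object> guard, Function<Object, String> method) {
        entries.add(new Entry(guard, method));
    }

    String call(Object arg) {
        for (Entry e : entries)
            if (e.guard.test(arg))          // applicability check on every call
                return e.method.apply(arg);
        throw new IllegalStateException("no applicable method; relink needed");
    }

    public static void main(String[] args) {
        GuardedSite site = new GuardedSite();
        site.add(x -> x instanceof Integer, x -> "number " + x);  // class-based guard
        site.add(x -> "quit".equals(x),     x -> "switch hit");   // equality guard (dynamic switch)
        System.out.println(site.call(42));      // prints "number 42"
        System.out.println(site.call("quit"));  // prints "switch hit"
    }
}
```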
A language may also look at arguments beyond the receiver; this is called multiple dispatch. It is useful for operations which have algebraic symmetries, such as arithmetic, or for operations with many ad hoc optimizations on argument restrictions.
A guard is inseparably linked to a method it guards. In fact, it is better to think of a guard as a prologue to a method, rather than some sort of “mini-method” attached to it. The guard prologue of a method sniffs at the incoming arguments, and if something is wrong, backs up the invocation forcing the call site to select another method, relink, or throw an error.
There must be a specific code sequence for “backing out” of a method call when a guard fails. Rather than invent a new bytecode for this, the JVM can use a system-defined exception type to express guard failure. This need not be expensive.
The HotSpot JVM uses a technique like this to speed some calls. If you look at the source code, you’ll find a distinction between “verified” and “unverified” entry points in methods. The former is a short prefix of code which guards the method to determine applicability. If the guard fails, the system either throws an error or edits the calling instruction to perform a slower but more powerful (vtable-based) calling sequence. Dynamic languages deserve a similar mechanism.
The possibility of stacking adapters onto target methods adds interest to the design of guards. A guarding adapter would perform checks on incoming arguments, throwing control back to the caller on guard failure, and passing normal control to the ultimate target on success. The target may in turn apply further guards, or may be a further adapter with some other function.
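Here is one way the back-out-and-fall-through behavior could look, with an ordinary exception type standing in for the system-defined guard-failure signal (all names hypothetical):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a guarding adapter checks the incoming argument and
// either passes control to its target or throws control back to the caller,
// which then tries the next candidate (or, ultimately, relinks).
public class StackedGuards {
    static final class GuardFailed extends RuntimeException {}   // stand-in for the system exception

    // Wrap a target in a guard prologue that sniffs the argument's class.
    static Function<Object, String> guarded(Class<?> expected, Function<Object, String> target) {
        return arg -> {
            if (!expected.isInstance(arg)) throw new GuardFailed();  // back out
            return target.apply(arg);                                // pass control onward
        };
    }

    static String call(Object arg, List<Function<Object, String>> targets) {
        for (Function<Object, String> t : targets) {
            try {
                return t.apply(arg);
            } catch (GuardFailed e) {
                // guard failed: fall through to the next candidate
            }
        }
        throw new IllegalStateException("no applicable method; relink needed");
    }

    public static void main(String[] args) {
        List<Function<Object, String>> targets = Arrays.asList(
            guarded(Integer.class, x -> "int " + x),
            guarded(String.class,  x -> "str " + x));
        System.out.println(call("hi", targets)); // prints "str hi"
    }
}
```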
The previously described mechanisms could be easily simulated in Java code. Call sites could be represented as invocations on associated stateful objects. Language-specific logic could be described as a concrete subclass of an abstract class with operations like “link this call site”. What would methods look like? The interface java.util.concurrent.Callable provides a hint: The actual method called by a call site must be something like an interface object with a single method.
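A minimal sketch of that simulation (class names hypothetical; Callable stands in for the single-method target):

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: a call site as a stateful object, with "link this
// call site" supplied by a language-specific subclass.
public abstract class SimCallSite {
    private Callable<Object> target;   // the linked method object, or null

    // Language-specific linkage logic, run on the first call.
    protected abstract Callable<Object> link(Object... args);

    public Object call(Object... args) {
        try {
            if (target == null) target = link(args);   // link lazily
            return target.call();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Notice the strain already: Callable.call() accepts no arguments, so the linked target must capture or box whatever it needs, which is precisely the kind of interface restriction discussed next.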
The problem with interfaces
The problem with such interfaces is that they can only provide access to a narrowly restricted set of methods, ones which satisfy these restrictions:
- the receiver type explicitly implements the interface and the method
- the method name is determined by the interface
- the descriptor (type signature) is also determined by the interface
By contrast, dynamic languages often need to break out of these restrictions:
- processing values over which they have little definitional control (platform types like boxed integers or collections)
- calling methods of a variety of names, and even of computed names
- invoking a method according to a descriptor chosen by the caller based on incomplete knowledge of the callee
Put another way, it is impractical to define a customized interface for every distinct shape of call. And, in different ways, it is impractical to force a single interface to handle all calls. Proposals to put function types into Java try to steer a middle ground, but for the language implementor, the problem is that interfaces contain too much type information, without the right kind of genericity.
What about reflection?
Taking a different tack, java.lang.reflect.Method provides a way to invoke any imaginable method, but with two serious problems. First, there is no way to express general adapters as reflective methods, since an adapter is implemented (usually) as a multi-use method partially controlled by data, which indirectly calls its ultimate target method. But reflective methods do not have a data component; they are good only for a single use.
The second problem with reflective methods is that they make one descriptor (type signature) serve for all, with a large simulation overhead. Primitive types must be boxed, as must return values, thrown exceptions, and the argument list as a whole. Reflective methods trade away the JVM’s native invocation sequence in exchange for genericity over calling sequences.
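To make the overhead concrete, here is the reflective calling sequence using the real java.lang.reflect API; both int arguments and the int result cross the boundary boxed (the helper name is mine):

```java
import java.lang.reflect.Method;

// The one-descriptor-serves-all calling sequence: arguments arrive boxed in
// an Object[], and the primitive result comes back boxed too.
public class ReflectiveCall {
    public static int add(int a, int b) { return a + b; }   // the real target

    static Object reflectiveAdd(Object a, Object b) {
        try {
            Method m = ReflectiveCall.class.getMethod("add", int.class, int.class);
            // The ints are boxed to Integer and packed into a varargs Object[];
            // the int result is returned boxed as an Object.
            return m.invoke(null, a, b);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(reflectiveAdd(1, 2)); // prints 3
    }
}
```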
Polymorphism without simulation
These unpleasant alternatives point to an as-yet unexploited middle ground in the JVM, of true polymorphism over calling sequences, which execute natively and without simulation overhead. Consider a hypothetical invokesignature instruction, like an invokeinterface but without an associated interface. The adjusted restrictions on the receiver would be:
- the receiver implements the desired method
- the method has the name and descriptor specified by the instruction
This hypothetical instruction (by itself) does not meet the other requirements in this note, especially those pertaining to linkage. In order to link such a call programmatically, it must be possible to create target methods for the call to branch to, and these will have to be objects in their own right. (Else adapters become impossible.) In a true dynamic invocation scenario, there are two receivers, the nominal receiver of the call (which is probably just a distinguished first argument), and the target method whose guards have been satisfied (if any). As noted above, the target method may be an adapter, a direct reference to a real bytecoded method, or something complex like a closure over a tree-walking interpreter.
The method name in the invoke instruction is significant to the linkage process, but the actual invocation of the target method should use a system-wide name (say, invoke) so that we don’t have to rebuild an adapter architecture for every distinct method name. The adjusted restrictions on the call are therefore:
- the nominal receiver has no restrictions
- the target method implements a method named invoke
- the target method accepts the nominal receiver and other arguments according to the instruction’s descriptor, perhaps with small variations
Since descriptors do not play a central role in linking such calls, it is reasonable to relax the requirement that caller and callee descriptors match exactly. We must preserve both the (likely) intent of the caller and the type-integrity of the system, as checked by the verifier. But, just as the verifier allows some shift between the stacked types in a frame type-state and the types in a call descriptor, the JVM can allow certain safe shifts, such as:
- boxing and unboxing
- conversion of a reference to one of its supertypes
- casting of a reference to a subtype
That last is slightly doubtful, since it could fail. But it is a convenient feature in many languages, and would ease the bootstrap method’s task of creating adapters, since they could then be simplified by type erasure. It is a requirement if target methods are to “play nicely” with Java’s type erasure rules.
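These shifts are exactly what an erased adapter does in ordinary Java; a small sketch (hypothetical names):

```java
import java.util.function.Function;

// Sketch of the "safe shifts": an erased adapter accepts Object, casts down
// to the type the target expects (a checked cast, so it can fail safely),
// and relies on auto-boxing for the primitive return value.
public class ErasedAdapter {
    static int length(String s) { return s.length(); }   // the typed target

    static Function<Object, Object> erased() {
        return arg -> length((String) arg);   // cast to subtype + boxed int return
    }

    public static void main(String[] args) {
        Function<Object, Object> f = erased();
        System.out.println(f.apply("hello"));  // prints 5
        try {
            f.apply(42);                       // cast fails: the shift stays type-safe
        } catch (ClassCastException e) {
            System.out.println("cast to subtype failed");
        }
    }
}
```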
There are only a few more bits to look at before we can begin to imagine how a bootstrap method might link target methods into a dynamic call site, and those pertain to method handles.
So, a bootstrap method is asked to provide a target method for a dynamic invocation site. What happens next? In the case where the dynamic language is making a Java-like call (perhaps to a system-level object like an OutputStream), all we need is to emulate both the compile-time and run-time lookups to find the desired method. The bootstrap method should then link that method into the call site.
This linkage could be done with reflective methods, but this is a very limited solution. It is better to define a new kind of reference, a method handle, which refers as directly and nimbly as possible to the desired bytecoded method. What interface should it implement? Answer: Not much of one, since interfaces are not generic enough across signatures. The method handle object must behave as if it contains a method named invoke (or whatever the standard name is), with a descriptor which exactly matches the bytecoded method. The bootstrap method can return this method handle, which (as sketched above) will then allow the call to complete in the desired manner.
This is very odd, since the JVM does not anywhere else have instance-specific methods. But descriptor polymorphism, if applied to arbitrary methods, requires such odd objects. In effect, the descriptor itself serves in place of the object’s interface. Indeed, the JVM may implement method handles with such interfaces under the covers, though that may be inefficient. It is more likely that the JVM will implement calls to such things with a quick object-specific descriptor check, followed by a direct branch to the bytecoded method.
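A sketch of the descriptor-check-then-branch idea, with a plain string standing in for a real JVM descriptor (all names hypothetical, and a Java interface standing in for the bytecoded target):

```java
// Hypothetical sketch: a "method handle" carrying its own descriptor; a call
// first makes a quick object-specific descriptor check, then branches
// directly to the underlying code.
public class SimHandle {
    interface Target { Object run(Object[] args); }   // stand-in for bytecoded method

    final String descriptor;   // e.g. "(I)I" in JVM descriptor notation
    final Target code;

    SimHandle(String descriptor, Target code) {
        this.descriptor = descriptor;
        this.code = code;
    }

    Object invoke(String callerDescriptor, Object... args) {
        if (!descriptor.equals(callerDescriptor))  // quick descriptor check
            throw new WrongMethodException();      // hypothetical linkage error
        return code.run(args);                     // direct branch to the method
    }

    static class WrongMethodException extends RuntimeException {}

    public static void main(String[] args) {
        SimHandle h = new SimHandle("(I)I", a -> (Integer) a[0] + 1);
        System.out.println(h.invoke("(I)I", 41)); // prints 42
    }
}
```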
The interesting operations on method handles are the same as with any system of functional types:
- creating one from an isolated method
- creating one from a bit of code, with associated data values
- invoking one on some arguments
- creating an adapter around one
- asking one for its argument and return types
It is a problem of detailed design, for another day, to explain how each of these operations can be expressed in JVM bytecode without further innovations in the JVM’s type structure. Preferably, they should be expressible in standard Java APIs. Certainly, inner classes or closures can provide a plentiful source of invoke method receivers, to use as adapters or even interpreter entry points. Also, the JVM can provide higher-order functions to perform currying, varargs management, and other similar transformations. These functions can be controlled via a reflective-style API that simulates argument lists with boxing, but the JVM can implement the opaque method handles themselves without the boxing.
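As a taste of such higher-order transformations, currying can already be written with ordinary closures standing in for method handles (a sketch, not the eventual API):

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch: curry a two-argument "method" into a chain of one-argument
// invokers; each closure captures the argument bound so far.
public class CurryDemo {
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        BiFunction<Integer, Integer, Integer> add = (a, b) -> a + b;
        Function<Integer, Integer> add5 = curry(add).apply(5);  // bind the first argument
        System.out.println(add5.apply(37)); // prints 42
    }
}
```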
Just as normal Java calls are checked for legal access at linkage time, dynamic calls must also be checked. When the details are worked out, we find that access checks must be done (by the bootstrap method, on behalf of the caller) when a bytecoded method handle is created. Once the handle is created, no more access checks are needed. (Access checking on every call is an additional major overhead with reflective methods.) The privilege to access a private method is given to anyone who receives a handle to that method. Is this less secure than the present system? No, because the module which created the handle must have had access to that method in the first place, and could therefore have wrapped another kind of handle around the same method, and passed it around just as freely. Method handles are just as safe as, and inherently faster than, reflective methods.
I hope the logic of this design persuades you that dynamic languages need these features. They are not a cheap weekend hack, but they will pay for themselves as the JVM expands its usefulness with the new languages that are arising today.
- dynamic relinking lets languages change their application structure dynamically
- programmable linkage gives full authority over method linkage to the language
- programmable guards are needed to express post-linkage, per-call type checks
- descriptor polymorphism lets calls run at native JVM speed, without simulation overheads
- method handles give the language the full run of all bytecoded methods, at native JVM speeds