The JVM Language Summit 2010


I’ve just come back from three days in Santa Clara, spending time with some of the brightest people in the Java world – the JVM language summit is truly a fantastic collection of great people. And I was there too…

The goal of the JVM Language Summit is to gather the people who work with languages on the JVM, have them share their projects, experiences and networks – and let them connect with the people in charge of implementing the JVMs at different companies. This year a lot of the discussion revolved around JSR 292 and Project Lambda. The presence of hardware and VM people was also more pronounced. I counted principals for at least six different virtual machines in the audience or presenting (HotSpot, JRockit, J9, Azul, Maxine, and Monty).

Among the experienced platform and language people there, some of the notables included Kresten Krab Thorup, Joshua Bloch, Bob Lee, Neal Gafter, John Rose, Brian Goetz, Alex Buckley, Rich Hickey, Charles Nutter, Cliff Click, Doug Lea, Per Bothner and many more. A great collection of people.

As an example of the kind of happenstance that occurs in this collection of people: I was sitting rebinding my Java implementations for Mac OS X – I had removed lots of links in /usr/bin. A few minutes later the person next to me started asking questions about my experience with Java on the Mac – and it turned out he’s the manager of the Apple JVM team. Or at one point Rich Hickey reported on a quite puzzling problem that causes bad semantics when iterating over data that doesn’t fit in memory – and Cliff Click immediately opened his laptop and said “give me half an hour and I’ll see what I can do”.

Another funny anecdote was Doug Lea pointing out that if you use fibonacci to benchmark against yourself or others, it’s important that the implementations actually agree on the first values of fib. Funnily enough, I saw three implementations of the base case of fib during the summit – all of them different (if n < 2 return 1; if n <= 2 return n; if n < 2 return n).
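To make the disagreement concrete, here is a quick Java sketch of my own (not code from the summit) showing the three base-case conventions side by side:

public class FibVariants {
    // Base case "n < 2 return 1": yields 1, 1, 2, 3, 5, ...
    static long fibA(int n) { return n < 2 ? 1 : fibA(n - 1) + fibA(n - 2); }

    // Base case "n <= 2 return n": yields 1, 2, 3, 5, 8, ... and bottoms out one level earlier
    static long fibB(int n) { return n <= 2 ? n : fibB(n - 1) + fibB(n - 2); }

    // Base case "n < 2 return n": the textbook sequence 0, 1, 1, 2, 3, ...
    static long fibC(int n) { return n < 2 ? n : fibC(n - 1) + fibC(n - 2); }

    public static void main(String[] args) {
        // Same argument, different amounts of recursion and (for fibC) different results
        System.out.println(fibA(30) + " " + fibB(30) + " " + fibC(30));
    }
}

The first two agree on the values from n = 1 upward but do different amounts of recursive work, and the third computes a different sequence entirely – so fib(30) timings from the three aren’t directly comparable.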

There were way too many interesting presentations and discussions for me to be able to talk about all of them – instead I just wanted to give some highlights.

Charles Nutter

Charles gave a quick introduction to JRuby and Mirah, and to the kinds of optimizations JRuby is currently doing. He also talked about how far he’s gotten with inlining invokedynamic calls inside JRuby (and he’s gotten very far – it’s really cool).

Fredrik Öhrström

Fredrik is the JRockit representative on JSR 292, and way too smart. He presented a solution for how method handles, integrated with function types, can solve many of the current problems in Project Lambda. A very powerful and interesting presentation.

Doug Lea

Doug spent his keynote trying (quite successfully) to convince the room that fork/join should be the dominant approach to concurrency problems. A very good and thought-provoking keynote.
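As a rough illustration of the model he was advocating, here is a minimal fork/join sketch of mine, assuming the java.util.concurrent classes that ship with Java 7 (at the time of the summit the same classes lived in the jsr166y package):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums an array by recursively splitting it until the chunks are small
// enough to compute sequentially, then joining the partial results.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10000;
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {            // small enough: sum directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                             // schedule the left half asynchronously
        long rightResult = right.compute();      // compute the right half in this thread
        return left.join() + rightResult;        // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] data = new long[1000000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum);                 // 499999500000
    }
}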

Josh Bloch

Last year at the JVM Language Summit, Josh talked about what he called “the Semantic Gap”. This year, after being beaten up by some linguists, he changed the name of the concept to “Performance Anxiety”. The basic idea is that in our current infrastructure we have traded predictability for performance. Two examples from his talk about where this happens in Java were pretty interesting. He had one benchmark that consistently showed about the same numbers within a single JVM run, but differed between JVM runs. There was no nondeterminism in the benchmark itself, yet the benchmark times kept oscillating between 0.7 and 0.85 depending on the JVM run. Cliff Click’s explanation is that it is probably the compilation planner, which is a separate thread. Depending on when that thread runs, the compilation strategy will be different, which makes a difference in the timings. And it’s really hard for the programmer to take this difference into account.

The other example is simpler (and don’t change your code because of this). In some circumstances it turns out that & is faster than && in Java, because && short-circuits, which means it has to branch. The single ampersand always evaluates both sides, which means the CPU can pipeline them and execute both at the same time.
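A sketch of the kind of code the observation applies to (my example, not Josh’s): the two methods below are equivalent for side-effect-free operands, but they compile down differently.

public class RangeCheck {
    // && short-circuits: the second comparison is skipped (a branch) when the first fails.
    static boolean inRangeShortCircuit(int x, int low, int high) {
        return x >= low && x <= high;
    }

    // & evaluates both comparisons unconditionally, so there is no branch between them
    // and the CPU may be able to execute them in parallel.
    static boolean inRangeEager(int x, int low, int high) {
        return x >= low & x <= high;
    }
}

Again, the point isn’t to rewrite your code – it’s that the faster variant isn’t the one intuition suggests.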

All the examples he showed come down to the same thing – we can’t really reason intuitively about the performance of our language constructs anymore. Our systems have become too complex in order to support better performance, and we give up predictability to get that performance. And at the end of the day it doesn’t even matter if you go down to C or assembler – you still cannot control exactly what the CPU is doing anymore.

Kresten Krab Thorup

Kresten is the CTO of Trifork, and one of the main organizers of several of my favorite conferences (like JAOO and QCon). For the last nine months he has been working on Erjang, an Erlang implementation for the JVM, which he talked about. It seems to be a very good implementation, and he’s getting surprisingly good performance and context-switching numbers. In fact, several of the ideas in Seph will be stolen from Erjang.

Rémi Forax

Rémi showed off his PHP.reboot project, implemented using JSR 292 and getting quite good performance. His JSR 292 backport seems to be really useful, and I think I’ll use it to make sure Seph can run on pre-Java 7 machines. Good stuff.

Rich Hickey

Rich spent some time collecting comments from the people in the room about what is problematic with the JVM in its current incarnation. To start us off, he showed one hilarious/horrible piece of code from Clojure’s Java implementation. Anyone want to guess what it does?

static public Object ret1(Object ret, Object nil) {
    return ret;
}

public static int count(Object o){
    if(o instanceof Counted)
        return ((Counted) o).count();
    return countFrom(Util.ret1(o, o = null));
}

We then went on to a few other things (which you can find on the JVM Language Summit wiki). The consensus seemed to be that tail calls are really important. Last year they didn’t seem as crucial, but now that we see how powerful method handles and lambdas will be, tail calls turn out to be very nice to have. Hopefully we can make that happen.
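For anyone who hasn’t run into the limitation, here is a small sketch of mine of why it matters: the recursive call below is in tail position, but the JVM still allocates a stack frame for every level, so the deep call overflows instead of running in constant stack space as it would with tail-call elimination.

public class TailCalls {
    // Logically a loop, written as a tail-recursive method.
    static long countDown(long n, long acc) {
        if (n == 0) return acc;
        return countDown(n - 1, acc + 1);   // tail call, but still consumes a frame
    }

    public static void main(String[] args) {
        System.out.println(countDown(1000, 0));      // fine
        System.out.println(countDown(10000000L, 0)); // StackOverflowError with a default stack size
    }
}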

JSR 292

The JSR 292 expert group got lots of chances to work on ideas and designs for the future, and lots of interesting results came out of these discussions. Some of the more notable ones are sketches of how method handles and function types can work together, and of how invokedynamic and bootstrap methods can be used to implement defender methods, along with several other interesting ideas.

All in all it has been a fun few days, going far out in language and implementation geekiness. I hope to come back to this next year.



Questioning the reality of generics


I’ve been meaning to write about this for a while, since I keep saying this and people keep getting surprised. Now maybe I’m totally wrong here, and if that’s the case it would be nice to hear some good arguments for that. Here’s my current point of view on the subject anyway.

A specter is haunting the Java community – the specter of generics.

Java introduced a feature called generics in Java 5 (the feature is generally known as parametric polymorphism in the literature). Before Java 5 it wasn’t possible to create a reusable collection that would ensure, at compile time, the type safety of what you put into it. You could create a collection of, say, Strings and have that work correctly, but if you wanted a collection of anything, as long as that anything was all the same type, you were restricted to doing runtime checks, or just having good tests.

Java 5 made it possible to add type parameters to any type, which means you can create more specific collections. There are still problems with these – they interact badly with native arrays, for example, and wildcards (Java’s way of implementing co- and contravariance) have ended up being very hard for Java developers to use correctly.
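A sketch of both pain points (a hypothetical example of mine, using nothing beyond the standard collections):

import java.util.ArrayList;
import java.util.List;

public class GenericsPainPoints {
    public static void main(String[] args) {
        // Generic array creation is rejected: arrays are reified and covariant,
        // generics are erased, and mixing the two would be unsound.
        // List<String>[] lists = new List<String>[10];   // does not compile

        List<Integer> ints = new ArrayList<Integer>();
        ints.add(1);

        // Covariance via a wildcard: fine for reading...
        List<? extends Number> numbers = ints;
        Number n = numbers.get(0);

        // ...but not for writing.
        // numbers.add(Integer.valueOf(2));               // does not compile

        System.out.println(n);
    }
}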

Java and C# both added generic types at roughly the same time. The C# version of generics differed in a few crucial ways, though. The most important difference in implementation is that C# generics are reified, while Java generics use type erasure. And this is really the gist of this blog post. Over and over I hear people lament the lack of reified generics in Java, citing how fortunate C# and the CLR are to have this feature. But is that really the case? Are reified generics a good thing? Of course, that always depends on who is asking the question. Reified generics might well be good for one person but not another. Here you will hear my view.

Reified? Huh?

So what do reified generics mean, anyway? It is probably easiest to explain in contrast to the Java implementation, which uses type erasure. Slightly simplified: in Java, generics don’t exist at runtime. They are purely a fiction the compiler uses to handle type checking and make sure you don’t do anything bad with your collection. After the generics have been type checked, they are used to generate casts and type checks in the code that uses them, some metadata is inserted into the class file, and then the generic information is thrown away.
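A small sketch of what erasure looks like from the program’s point of view:

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();

        // At runtime both are just ArrayList; the type arguments are gone.
        System.out.println(strings.getClass() == ints.getClass());   // true

        Object o = strings;
        System.out.println(o instanceof List);                       // fine
        // System.out.println(o instanceof List<String>);            // does not compile: erased
    }
}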

In contrast, on the CLR generic classes exist as specific versions of their class. The same class with different generic type arguments really is a set of different classes. There are no casts happening at the implementation level, and the CLR will as a result generate more specific code for the generic code. Reflection and dynamic type checks against type arguments are also possible on the CLR. Having reified generics basically means that they exist at runtime, and that the virtual machine knows about them and handles them correctly.

Multi-language virtual machines

Over the last twenty years something interesting has happened. Both our hardware and our software have gotten mature enough that a new generation of virtual machines has entered the market. Traditionally, virtual machines were made for specific languages, such as Pascal, Lisp and Smalltalk, and with the possible exceptions of SECD and the Warren machine, there haven’t really been any virtual machines optimized for running more than one language well. The JVM didn’t start that way either, but it turned out to be better suited for it than expected, and there are lots of efforts to make it an even better platform. The CLR, Parrot, LLVM and Rubinius are other examples of things that seem to be becoming environments rather than just implementation strategies for single languages.

This is very exciting, and I think it’s a really good thing. We are solving very complex problems where the component problems are best solved in different ways. It seems like a weird assumption that one programming language is the best way of solving all problems. But there is also a cost associated with using more than one language. So having virtual machines act as platforms, where a shared chunk of libraries is available and the cost of implementing a language is low, makes a lot of sense.

In summary, I feel that the JVM was the first step towards a real viable multi-language virtual machine, and we are currently in the middle of the evolution towards that point.

Solving the problems

So why not add reified generics to the JVM at this point? It could definitely be done, and using an approach similar to the CLR’s, where libraries are divided into pre- and post-reification versions, makes the path quite simple from an implementation standpoint. On the user side there would be a new proliferation of libraries to learn – but maybe that’s a good thing. There is a lot of cruft in the Java standard libraries that could be cleaned up. There are some sticky details, like how to handle the APIs that were designed for erased generics, but those problems could definitely be solved. It would also solve some other problems, such as making it possible for Scala to pattern match on type parameters, and it would address part of the problem of abstracting over primitive types. It would probably make Java a better language.

But is it the only solution? At this point, making this kind of change would complicate the APIs to a large degree. The reflection libraries would have to be completely redesigned (but still kept around for backwards compatibility). The most probable result would be a parallel hierarchy of classes and interfaces, just like in the CLR.

Reified generics are generally proposed in discussions about three different things: first, performance; second, making it easier to implement some features of Scala and other statically typed languages on the JVM; and third, handling primitives and primitive arrays a bit better. Of these, the first is the least common argument, and the least interesting by far – JVM performance is already nothing short of amazing. The second point I’ll come back to in the last section. The third point is the most interesting, since there are other solutions here, including unifying primitives with objects inside the JVM by creating value types. That would solve many other problems for language implementors on the JVM, and enable lots of interesting features.
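To make the third point concrete, here is a small sketch of mine of the boxing cost that value types (or generics specialized over primitives) would address:

import java.util.ArrayList;
import java.util.List;

public class BoxingCost {
    public static void main(String[] args) {
        int size = 1000000;

        // Primitive array: the values are stored flat, one slot each.
        int[] flat = new int[size];

        // Generic collection: type parameters only range over reference types,
        // so every element is a boxed Integer object behind a pointer.
        List<Integer> boxed = new ArrayList<Integer>(size);

        for (int i = 0; i < size; i++) {
            flat[i] = i;
            boxed.add(i);   // autoboxing allocates (or reuses a cached) Integer
        }

        System.out.println(flat[42] + " " + boxed.get(42));
    }
}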

The short stick

I believe in a multi-language future, and I believe that the JVM will be a core part of that future. Interoperability is just too expensive across OS boundaries – you want to be on the same platform if possible. But for the JVM to be a good environment for more than one language, it’s really important that decisions are made with that in mind. The last few years of fantastic progress from languages like Rhino, Jython, JRuby, Groovy, Scala, Fantom and Clojure have shown that it’s not only possible, but beneficial for everyone involved, to focus on JVM languages. JSR 223, JSR 292 and several others also mean that the JVM is more and more being viewed as a platform. This is good.

Generics are a complicated language feature. They become even more complicated when added to an existing language that already has subtyping. These two features don’t play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language – and that language uses the same kind of generics. But generics aren’t a solved problem. It isn’t completely understood how to handle them correctly, and new breakthroughs are still happening (Scala is a good example of this). At this point, generics can’t be considered “done right”. There isn’t only one kind of generics – they vary in implementation strategy, features and corner cases.

What this all means is that if you want to add reified generics to the JVM, you should be very certain that the implementation can encompass both all the static languages that want to innovate in their own version of generics, and all the dynamic languages that want a good implementation and a nice facility for interfacing with Java libraries. Because if you add reified generics that don’t fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi-language VM.

I’m increasingly coming to the conclusion that multi-language VMs benefit from being as dynamic as possible. Runtime properties can be extracted to get performance, while static properties can be used to prove interesting things about the static pieces of a language.

Just let generics be a compile-time feature. If you don’t, there are two alternatives: either you are an egoist who only cares about the needs of your own language, or you think you have a generic type system that can express all other generic type systems. I know which one I think is more likely.



Video of my geek night “Creating a language on the JVM” presentation


Ten days ago, the day before QCon, I gave a presentation at the ThoughtWorks Geek Night about creating languages for the JVM. It ended up being two hours long, and it is now available from SkillsMatter, here: http://skillsmatter.com/podcast/java-jee/language-on-jvm.



The ThoughtWorks Geek Night


Last Tuesday ThoughtWorks hosted a Geek Night where I was the headliner. My presentation was about creating languages for the JVM, which as it turns out is quite a broad topic. The London office does these kinds of geek nights now and again, and this time we ended up at about full capacity for what the office can hold, which is about 50-60 people. It was really crowded, actually. That is a good problem to have, though. I am glad so many turned up to see me do a two-hour rambling presentation that ended up being a non-stop fire hose of information.

I’m not going to try to summarize the actual presentation – there was way too much information in it for that. Luckily, SkillsMatter had a camera there, and I’ve been told the video should be up any time now. Once that happens I will post about it here. In the meantime, the slides can be found here: http://olabini.com/presentations/CreatingALanguageForTheJVM.pdf.

It was an interesting subject to present, though. I think this material could easily fill a one-day tutorial. I wonder if anyone would be interested in having such a tutorial at their conference? And whether anyone would show up for such a subject…

It was fun, though. I had a great time, and I hope the people who showed up didn’t end up being disappointed with the material.



Language explorations


I blogged about looking at languages a while back. At that point I didn’t know what my next language to explore would be, and I got lots of excellent suggestions. In the end I decided to try OCaml, but gave that up quickly when I found out that half of the type system exists to cover up deficiencies in the other half. So I went back and decided to learn Scala. I haven’t really had time to start with it, though. Until now, that is.

So let’s get back to the motivation here. Why do I want to learn another language? Aren’t I happy with Ruby? Well, yes and no. But that’s not really the point. You can always point to the Prags’ one-language-a-year advice, but that’s not it either. I mean, it’s really good advice, but there is a more urgent reason for me to learn Scala.

I know many people have said this before, but it bears repeating. Not everyone shares this opinion, but I have a firm belief that the end of big languages is very close. There won’t be a next big language. There might be some that are more popular than others, but development will be much more a matter of using different languages in the same project, where the different languages are suited to different things. This is the whole polyglot idea. And my take on it is this: the JVM is the best polyglot platform there is, and I think we will see three language layers emerge in larger applications. Now, the languages won’t necessarily be built on top of each other, but they will all run on the JVM.

The first layer is what I call the stable layer. It’s not a very large part of the application in terms of functionality, but it’s the part that everything else builds on top of, and as such a very important part of it. This is the layer where static type safety will really help. Currently, Java is really the only choice for this layer. More about that later, though.

The second layer is the dynamic layer. This is where maybe half the application code resides. The languages here are predominantly dynamic, strongly typed languages running on the JVM, like JRuby, Rhino and Jython. This is also the layer where I have spent most of my time lately, with JRuby and so on. It’s a nice and productive place to be, and obviously, with my fascination for JVM languages, I believe that it’s the interplay between this layer and the stable layer that is really powerful.

The third layer is the domain layer. It should be implemented in DSLs, one or many depending on the needs of the system. In most cases it’s probably enough to implement it as an internal DSL within the dynamic layer, and in those cases the second and third layers are not as easily distinguishable. But in some cases it’s warranted to have an external DSL that can be interacted with. A typical example might be something like a rules engine (like Drools).

I think I realized a long time ago that Java is not a good enough language to implement applications in. So I came up with the idea that a dynamic language on top of Java might be enough. But I’m starting to see that Java is not good enough for the stable layer either. In fact, I’m not sure if Java the language is good enough for anything anymore. So that’s what my language exploration is about. I have a suspicion that Scala might be a good language for the stable layer, but at this point the problem is that there aren’t any other potential languages for that layer. So what I’m doing is investigating whether Scala is good enough for that.

But I need to make one thing clear – I don’t believe there will be a winner at any of these layers. In fact, I think it would be a clearly bad thing if any one language won at any layer. That means I’m seeing a future where we have Jython and JRuby and Rhino and several other languages coexisting at the same layer. There doesn’t need to be any rivalry or language wars. Similarly, I see even less point in Scala and Ruby being viewed as competing. From my point of view they aren’t even on the same continent. And even if they were, I see no point in competing.

I got accused of being “religious” about languages yesterday. That was an interesting way of putting it, since I have always been incredibly motivated to see lots of languages coexisting, but coexisting on the JVM in a productive way.



The new base


I have argued for a while that the JVM should become more like an operating system, and Java the language for OS development, with other languages running on top of the JVM for applications. It seems I’m not alone in this line of thought. Robert Varttinen wrote a bit about it here. To me it seems like a compelling future. But the next logical step for enterprise applications would be further virtualization. I would, for example, like my beans implemented in Ruby. But not only that, I would want all the business logic to be OS-agnostic. I would like my Ruby logic to live on top of a JVM J2EE server, but that logic should be able to move, transparently, to a .NET server and provide the same business logic there. What would be even better is if I didn’t have to deploy it manually everywhere – the logic would just move to the places where it’s needed. Will we see that anytime soon?