Seph – A Hard Language to Compile


I have recently started work on Seph again. I preannounced it last summer (here), then promptly became extremely busy at work. Busy enough that I didn’t really have any energy to work on this project for a while. Sadly, I’m still as busy, but I’ve managed to find some small slivers of time to start working on the compiler parts of the implementation. This has been made much easier and more fun since JSR 292 is getting near completion, and an ASM 4 branch is available that makes it easier to compile Java bytecode with support for invokedynamic built in.

So that means the current code in the repository actually goes a fair bit of the way toward where I want it to be. Specifically, the compiler compiles most code, except for abstractions that create abstractions and calls that take keyword arguments. Assignment is not supported right now either. I don’t expect any of these features to be very tricky to implement, so I’m holding off on them and working on other, more complicated things.

This blog post is meant to serve two purposes. The first one is to just tell the world that Seph as an idea and project actually is alive and being worked on – and what progress has been made. The other aspect of this post is to talk about some of the things that make Seph a quite tricky language to compile. I will also include some thoughts I have on how to solve these problems – and suggestions are very welcome if you know of a better approach.

To recap, the constraints Seph is working under are these: it has to run on Java 7. It has to be fully compiled (in fact, I haven’t decided if I’ll keep the interpreter at all after the compiler is working). And it has to be fast. Ish. I’m aiming for Ruby 1.8 speed at least. I don’t think that’s unreasonable, considering the dimensions of flexibility Seph will have to allow.

So let’s dive in. These are the major pain points right now – and they are in some cases quite interconnected…

Tail recursion

All Seph code has to be tail recursive, which means a tail call should never grow the stack. In order to make this happen on the JVM, you need to save away information somewhere about where to continue the call. Then anyone using a value has to check for a tail marker token, and if one is found, that caller has to repeatedly call the current tail until a real value is produced. All the information necessary for the tail also has to be saved away somewhere.

The approach I’m currently taking is fairly similar to Erjang’s. I have an SThread object that all Seph calls have to pass along – this will act as a thread context as soon as I add lightweight threads to Seph. But it also serves as a good place to save away information about where to go next. My current encoding of the tail is simply a MethodHandle that takes no arguments. So the only thing you need to do to pump the tail call is to repeatedly check for the token and invoke the tail method handle. Still, doing this all over the place might not be that performant. At the moment the code is not looking up a MethodHandle from scratch in the hot path, but it does have to bind several arguments in order to create the tail method handle. I’m unsure what the performance implications of that will be.
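To make that concrete, here is a minimal sketch of what the pumping loop could look like. This is not the actual Seph runtime code (SephObject, the tail field on SThread and the TAIL_MARKER token are just stand-ins for whatever the real implementation uses), but it shows the shape of the trampoline:

import java.lang.invoke.MethodHandle;

// Stand-in types; the real runtime has its own SThread and SephObject.
class SephObject {}
class SThread {
    MethodHandle tail; // a ()SephObject handle describing where to continue
}

public final class TailPump {
    // Token returned in place of a real value while a tail call is pending.
    static final SephObject TAIL_MARKER = new SephObject();

    // Repeatedly invoke the saved tail until a real value is produced.
    static SephObject pump(SThread thread, SephObject value) throws Throwable {
        while (value == TAIL_MARKER) {
            MethodHandle tail = thread.tail;
            thread.tail = null;
            value = (SephObject) tail.invokeExact();
        }
        return value;
    }
}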

Argument evaluation from the callee

One aspect of Seph that works the same as in Ioke is that a method invocation will never evaluate the arguments. The responsibility of evaluating arguments will be in the receiving code, not the calling code. And since we don’t know whether something will do a regular evaluation or do something macro-like, it’s impossible to actually pre-evaluate the arguments and push them on the stack.

The approach Ioke and the Seph interpreter take is to just send in the Message object and allow the callee to evaluate it. But that’s exactly what I want to avoid with Seph – everything should be possible to compile and should be able to run hot if possible. So sending Messages around defeats the purpose.

I’ve found an approach to compiling this that actually works quite well. It also reduces code bloat in most circumstances. Basically, every piece of code that is part of a message send will be compiled to a separate method. So if you have something like foo(bar baz, qux), that will compile into the main activation method and two argument methods. This approach is recursive, of course. What this gives me is a protocol where I can take method handles to the argument methods, push them on the stack, and then allow the callee to evaluate them however it wants. I can provide a standard evaluation path that just calls each of the method handles in turn to generate the values. But it also becomes very easy for me to send them in unevaluated. As an example, this is almost exactly what the current implementation of the built-in “if” method looks like (it’s not exactly like this right now, because of transitional interpreter details):

public final static SephObject _if(SThread thread, LexicalScope scope,
        MethodHandle condition, MethodHandle then, MethodHandle _else) throws Throwable {
    // The condition arrives unevaluated, as a method handle; evaluating it
    // is just a matter of invoking the handle.
    SephObject result = (SephObject)condition.invokeExact(thread, scope,
                                                          true, true);

    if(result.isTrue()) {
        // Only the branch that is actually taken ever gets evaluated.
        if(null != then) {
            return (SephObject)then.invokeExact(thread, scope,
                                                true, true);
        } else {
            return Runtime.NIL;
        }
    } else {
        if(null != _else) {
            return (SephObject)_else.invokeExact(thread, scope,
                                                 true, true);
        } else {
            return Runtime.NIL;
        }
    }
}

Of course, this approach is not perfect. There is still a lot of code bloat, I can’t use the stack to pass things to the argument evaluation, and the code to bind the argument method handles takes up most of the generated code at the moment. Still, it seems to work and gives a lot of flexibility. And compiling regular method evaluations will make it possible to bind these argument method handles straight into an invokedynamic call site, which could improve performance substantially when evaluating arguments (something that will probably happen quite often in real world code… =).
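For the common case where the receiver just wants plain values, the standard evaluation path can be as simple as invoking each handle in order. The following is only a sketch (the name evaluateAll is made up; it reuses the types and call shape from the “if” example above):

import java.lang.invoke.MethodHandle;

public final class ArgumentEvaluation {
    // Evaluate every compiled argument by invoking its method handle,
    // using the same call shape as condition/then/_else in the "if" method.
    static SephObject[] evaluateAll(SThread thread, LexicalScope scope,
                                    MethodHandle[] arguments) throws Throwable {
        SephObject[] values = new SephObject[arguments.length];
        for (int i = 0; i < arguments.length; i++) {
            values[i] = (SephObject) arguments[i].invokeExact(thread, scope,
                                                              true, true);
        }
        return values;
    }
}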

Intrinsics are just regular messages

Many of the things that are syntax elements in other languages are just messages in Seph. Things like “nil”, “true”, “false”, “if” and many others work exactly the same way as a regular message send to something you have defined yourself. In many cases this is totally unnecessary though, and knowing the implementation at the call site allows you to improve things substantially. I think it’s going to be fairly uncommon to override any of those standard names, but I still want to make it possible to do so. And I’m fine with programs that do so taking a performance hit from it. So the approach I’ve come up with (but not implemented yet) is this: I will special-case the compilation of every message send that has the same name as one of the intrinsics. This special casing will bind to a different bootstrap method than regular Seph methods.

As a running example, let’s consider compiling a piece of code with “true” in it. This will generate a message send that will be taken care of by a sephTrueBootstrapMethod. We still have to send in all the regular method activation arguments, though. What this bootstrap method will do is set up a call site that points to a very special method handle. This method handle will be a guardWithTest created through a SwitchPoint specific to the true value. The first path of that GWT (guardWithTest) will just return the true value directly, without any checks whatsoever. The else path of the GWT will fall back to a regular Seph fallback method that does inline caching and regular lookup. The magic happens with the SwitchPoint: the places that create new bindings will check for these intrinsic names, and if one of those names is rebound anywhere in the client code, the SwitchPoint will be flipped over to the slow path.
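To illustrate, here is a rough sketch of what such a bootstrap method could look like, built only from the standard java.lang.invoke machinery. Apart from sephTrueBootstrapMethod, the names, the assumed call site signature and Runtime.TRUE are made up for the example; the real call sites carry the full set of activation arguments:

import java.lang.invoke.*;
import static java.lang.invoke.MethodHandles.*;

public final class TrueIntrinsic {
    // One SwitchPoint per intrinsic name; flipped if "true" is ever rebound.
    static final SwitchPoint TRUE_UNCHANGED = new SwitchPoint();

    public static CallSite sephTrueBootstrapMethod(MethodHandles.Lookup lookup,
                                                   String name,
                                                   MethodType type) throws Exception {
        // Fast path: ignore the activation arguments and return the canonical
        // true value directly. Runtime.TRUE is assumed to exist for this sketch.
        MethodHandle fast = dropArguments(constant(SephObject.class, Runtime.TRUE),
                                          0, type.parameterList());
        // Slow path: the regular lookup and inline caching machinery.
        MethodHandle slow = lookup.findStatic(TrueIntrinsic.class, "fallback", type);
        // The SwitchPoint guard costs essentially nothing until it is invalidated.
        return new ConstantCallSite(TRUE_UNCHANGED.guardWithTest(fast, slow));
    }

    // Stand-in for the regular Seph message send fallback, assuming a call site
    // shape of (SThread, LexicalScope, SephObject receiver) -> SephObject.
    public static SephObject fallback(SThread thread, LexicalScope scope,
                                      SephObject receiver) {
        return null; // the real version does inline caching and lookup
    }

    // Called by the code that creates new bindings when it sees "true" redefined
    // anywhere; from then on every "true" call site takes the slow path.
    public static void trueWasRebound() {
        SwitchPoint.invalidateAll(new SwitchPoint[] { TRUE_UNCHANGED });
    }
}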

In summary, I think a fast path should be possible for many of these things in most programs. Overriding “if” will still behave as expected, but doing so will make that program globally slower for the rest of its execution.

When do lexical scopes escape?

Seph has mutable lexical scopes. But it’s impossible to know which names will escape and which won’t – so as far as I can see, I can’t use the Java stack to represent variables, except in a small number of very degenerate cases. I’m not sure it’s worth having that code path yet, so I haven’t thought much about it.

Class based PICs aren’t a good fit

One of the standard optimizations that object-oriented languages use is something called a polymorphic inline cache (PIC). The basic idea is that looking up a method is the really slow operation, so if you can save away the result of doing that, guarded by a very cheap test, you can streamline the most common cases. Now, that cheap test is usually a check against the class: as long as you send in an instance with the same class, a new method lookup doesn’t have to happen. Doing a getClass and then an identity comparison on the result is usually fairly fast (a pointer comparison on most architectures), so you can build PICs that don’t actually spend much time in the guard.
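For reference, a single class-guarded cache entry built on method handles could look roughly like the sketch below. The names are made up; the point is that the test is just a class identity check, which is what makes the guard cheap (and what does not translate well to Seph’s prototype world):

import java.lang.invoke.MethodHandle;
import static java.lang.invoke.MethodHandles.*;
import static java.lang.invoke.MethodType.methodType;

public final class ClassGuardedCache {
    // The cheap test: is the receiver's class still the class we cached for?
    public static boolean classMatches(Class<?> expected, Object receiver) {
        return receiver.getClass() == expected;
    }

    // Wrap an already resolved target so it is only used while the receiver's
    // class matches; otherwise fall through to the slow lookup path. Assumes
    // the target and slow path both take the receiver (typed Object) first.
    public static MethodHandle cacheEntry(Class<?> cachedClass,
                                          MethodHandle resolvedTarget,
                                          MethodHandle slowPath) throws Exception {
        MethodHandle test = lookup()
                .findStatic(ClassGuardedCache.class, "classMatches",
                            methodType(boolean.class, Class.class, Object.class))
                .bindTo(cachedClass);
        return guardWithTest(test, resolvedTarget, slowPath);
    }
}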

But Seph is a prototype-based language. Any object in the system can have different methods or values associated with a name, and there is no clear delineation between objects that introduce new names and values and those that don’t. Especially since Seph objects are immutable, every new object will most likely carry a new set of values. So saving away objects and dispatching on them becomes much less effective, since a call site will basically never see the same object twice. Now, there are solutions to this, but most of them are tailored for languages where you usually follow a class-based pattern. V8 uses an approach called hidden classes to figure out things like this. I’m considering implementing something similar, but I’m a bit worried that the usage patterns of Seph will be far enough away from the class-based world that it might not work well.

Summary

So, Seph is not terribly easy to compile, and I don’t have a good feeling for how fast it can actually be made. I guess we’ll have to wait and see. But it’s also an interesting challenge, coming up with solutions to these problems. I think I might also have to go on a new research binge, investigating how Self and NewtonScript did things.


2 Comments

  1. Thanks for the update.

    Do you know that the Self mailing list has been active recently? You might be able to get some help there. http://selflanguage.org/discuss/index.html

    March 14th, 2011

  2. Arkady Rost

    Hi! What are you going to do with parentheses, commas, dots and colons? Will they be just regular messages, or will they remain syntax elements?

    April 15th, 2011
