Static type thinking in dynamically typed languages


A few days back I said something on Twitter that caused some discussion. I thought I’d spend some time explaining a bit more what I meant here. The originating tweet came from Debasish Ghosh, who wrote this:

“greenspunning typechecking into ruby code” .. isn’t that what u expect when u implement a big project in a dynamically typed language ?

My answer was:

@debasishg really don’t agree about that. if you handle a dynamically typed project correctly, you will not end up greenspunning types.

Let's take this from the beginning. The whole point of duck typing as a philosophy when writing Ruby code is that you shouldn't care about the actual class of an object you get as an argument. You should expect the operations you need to actually be there, and assume they are. That means anyone can send in any kind of object, as long as it fulfills the expected protocol.
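As a minimal sketch of what this looks like in practice (the method and names here are hypothetical, purely for illustration), here is a method that depends only on the protocol it uses – in this case, that its argument responds to `<<`:

```ruby
# A hypothetical logging helper: it never asks what class its
# argument is -- it only requires that the sink responds to #<<.
def write_log(sink, message)
  sink << "LOG: #{message}\n"
end

# Any object fulfilling that protocol works: a String, an Array,
# an open File, or a custom recorder object.
buffer = ""
write_log(buffer, "starting up")   # buffer is now "LOG: starting up\n"

events = []
write_log(events, "starting up")   # events is now ["LOG: starting up\n"]
```

The implied type of `sink` is nothing more than "responds to `<<`" – no class hierarchy is needed to express it.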

One of the problems in these kinds of discussions is that people conflate classes with types in dynamic languages. In well-written Ruby code you will usually end up with a type for every argument – that type is a set of constraints and protocols that will vary depending on what you do with the objects. But the point is that equating classes with these types will generally make things more complicated, and you will end up designing classes without any real purpose. Since you don't have static checking, you don't need overarching classes that act as type specifiers. The types will instead be implied in the contract of a method.

So far so good. But should you consciously think about these types when designing a method? In most cases, no – if you do, I tend to believe you will end up conflating classes and types. That's what I've seen on several projects, at least. The first warning sign is generally kind_of? checks. Of course, you can do things this way, but it restricts quite a lot of the power of dynamic typing. One of the key benefits of a dynamically typed language is the added flexibility. If you end up greenspunning a type system, you have negated a large part of the benefit of your language.
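To make the warning sign concrete, here is a hedged sketch (the classes are made up for illustration). The kind_of? version shuts out objects that would have worked fine:

```ruby
# Hypothetical classes for illustration.
class Document
  def to_html; "<p>doc</p>"; end
end

class Report  # not a Document, but fulfills the same protocol
  def to_html; "<p>report</p>"; end
end

# Warning sign: the kind_of? check greenspuns a type system into
# the method, rejecting Report even though it would work fine.
def render_checked(doc)
  raise TypeError, "expected a Document" unless doc.kind_of?(Document)
  doc.to_html
end

# Duck-typed version: the implied type is simply
# "responds to #to_html".
def render(doc)
  doc.to_html
end
```

`render` accepts both `Document.new` and `Report.new`; `render_checked` raises a TypeError for the latter, for no real gain.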

The types in a well defined system will be implicitly defined by what the method actually does – and specified by the tests for that method. But if you try to design explicit types, you will end up writing a static type system into your tests, which is not the best way to develop in these languages.



Bottom Types in Dynamic Languages


On the Friday of QCon, Sir Tony Hoare gave a presentation about the null reference. It was interesting stuff, and it tied into something I've been thinking about for a while: the similarities and differences in the role of null/nil in static and dynamic languages.

On the surface, a null reference in a statically typed language looks like a value of a bottom type. But since null is still a value, that cannot be quite right – by the formal definition, a bottom type can have no values at all.

To take a typical example of where a bottom type is needed, look at what happens in Scala when you have a method that always throws an exception. That method can never return – so what is the type of its return value? That type is Nothing, the bottom type in Scala. There are no values of type Nothing in Scala, and there never can be.

The Nothing type is really useful in other circumstances too. Lists in Scala are functional linked lists, where each cons cell is immutable. Every cons cell has a type – the most general type of the value the cell points to, combined with the type of the cons cell it points to. Recursively, this means the full list will have a generic type that is the least general type that can encompass every element of the list. But how does this work for the end element? Every list needs a final element that terminates it, and this end element generally doesn't contain a value. So what is its type? List[Nothing]. The reason is that Nothing can be viewed as a subtype of every other type in the system, which means it is always more specific than any other type. So when you create a new cons cell that points to this end value, the full list will always get the type of the new cons cell.

That explains bottom types, but we are no closer to understanding null references. In a statically typed language, a null reference acts like a value of a bottom type: you can assign null to a variable of any type, and that is generally only true for values whose type is a subtype of the type being assigned to. What is interesting about null in C-like languages is that there is no type that actually corresponds to null. In languages with null references like this, it seems the domain of every other type in the system is extended to include null as a value. I haven't found any reference describing how this works from a type-checking perspective, but null does behave like the bottom type – except that there is no type associated with it. So from now on I will just call null a bottom value. One of the interesting aspects of including a bottom value in your language is that it opens the door to runtime errors: if you don't check every place that uses a reference for the bottom value before using it, you have the potential for a runtime error.

The reason I started thinking about this was to contrast the way null works in a statically typed language with how it works in a dynamically typed language. But the points above make it more and more obvious that null is actually a runtime feature even in static languages: the type system bypasses null to allow more dynamism at runtime. Now take a look at a dynamically typed language. Of course you won't have any bottom types, since you don't have a type system that needs to check types. Another feature of most dynamically typed languages is that everything is an expression. In these languages, nil is usually used as a sentinel value to mark that an operation didn't actually result in any real value. How that nil should be interpreted depends on both the language and the specific API of the operation in question.
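A few examples from Ruby's core library show this – nil is the sentinel in each case, but what it signifies depends entirely on the API:

```ruby
# nil as a sentinel for "no real value" -- its meaning depends
# on the specific operation.
h = { "a" => 1 }
h["missing"]                    # => nil (no entry under that key)

"hello" =~ /xyz/                # => nil (the pattern did not match)

[1, 2, 3].find { |n| n > 10 }   # => nil (no element satisfied the block)
```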

The crucial difference between how nil is handled in a dynamically typed language and how a null reference works in a statically typed language is one of high-level approach. In particular, in a dynamically typed language you don't know what kind of value you will get back from an operation anyway. In Ruby you might get back something that is of the expected class, but you can also get something that just looks like that class – or something totally different, like a recorder object. All of this makes nil in dynamically typed languages much less of a problem, since the way you develop in these languages tends to be quite different. In contrast, a null reference in Java is a loophole that basically means anything can return either null or something of the specified type. It's not exactly a lie, but it is an implicit effect that generally comes as a surprise in a statically typed language.

There is a proposal to add a @NotNull annotation to Java, so you can mark references that shouldn't allow null. This is not good enough, and will probably not help much at this stage. The reason is that you are penalized for avoiding null – not the other way around. It also makes code that tries to be correct _more_ verbose, instead of less. Because of backwards compatibility there is really nothing to be done about this in Java, but I recommend that language designers think carefully before adding a Java-style null reference.

Some people ask what the alternative is. What stronger type systems generally give you is an Option (or Maybe) type, parameterized by the expected result type.

Say you define a method in Java called get() that will return either a String or null. If you instead had an Option class in Java, you could define the method as “Option<String> get()”. What is neat about Option is that it is abstract and has two concrete subtypes: Some and None. This makes it explicit when a return value can be absent, and at the same time forces the user of the method to actively handle that case. The default case becomes references that are always valid. This is a good way to handle the problem in a type system.
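Ruby has no Option type in its core library, but the shape of the idea can be sketched there too (all names below are hypothetical, just to illustrate the Some/None split):

```ruby
# A minimal, hypothetical Option sketch: Some wraps a value,
# None represents absence, and callers must handle both.
class Option; end

class Some < Option
  attr_reader :value
  def initialize(value)
    @value = value
  end
  def get_or_else(_default)
    @value
  end
end

class None < Option
  def get_or_else(default)
    default
  end
end

# A lookup that makes the "maybe absent" case explicit in its
# result, instead of silently returning nil.
def get(store, key)
  store.key?(key) ? Some.new(store[key]) : None.new
end

store = { "name" => "Ola" }
get(store, "name").get_or_else("unknown")     # => "Ola"
get(store, "missing").get_or_else("unknown")  # => "unknown"
```

The point is not the implementation but the contract: a caller of `get` can never forget that the value might be missing.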



The Maintenance myth


Update: I’ve used the words “static” and “dynamic” a bit loosely with regard to languages and typing in this post. If this is something that upsets you, feel free to read “static” as “Java-like” and “dynamic” as “Ruby-like”. And yes, I know this is not entirely correct, but just as mangling the language to remove all gender bias makes it highly inconvenient to write, I find it easier to write this way when the post is aimed at people in these camps.

Being a language geek, I tend to get into lots of discussions about the differences between languages – what’s good and what’s bad. And being a Ruby guy who hangs out in Java crowds, I end up having the static-vs-dynamic conversation way too often. And it’s interesting: the number one question everyone from the static “camp” has, the one thing that worries them the most, is maintenance.

The question is basically: without types at compile time, won’t it be really hard to maintain your system when it grows to a few million lines of code? Don’t you need the static type hierarchy to organize your project? Don’t you need an IDE that can use the static information to give you IntelliSense? All of these questions, and many more, boil down to the same basic idea: that dynamic languages aren’t as maintainable as static ones.

And what’s even more curious, in these kinds of discussions I find that people in the dynamic camp generally agree that yes, maintenance can be a problem. I’ve found myself doing the same thing, because it’s such a well-established fact that maintenance suffers in a dynamic system. Or wait… is it that well established?

I’ve asked some people about this lately, and most of the answers invariably begin with “but obviously it’s harder to maintain a dynamic system”. Things that are “obvious” like that really worry me.

Now, Java systems can be hard to maintain. We know that. There is lots of documentation and talk about hard-to-maintain systems with millions of lines of code. But I really can’t come up with anything I’ve read from people using dynamic languages about what a maintenance nightmare their projects are. I know several people who are responsible for quite large code bases written in Ruby and Python (a very large code base is 50K-100K lines of code in these languages), and they are not talking about how they wish they had static typing. Not at all. Of course, this is totally anecdotal, and maybe these people are above-average developers. But in that case, shouldn’t we hear these rumblings from all those Java developers who switched to Ruby? I haven’t heard anyone say they wish they had static typing in Ruby – and not all of those who migrated can have been better than average.

So where does that leave us? With a big “I don’t know”. Thinking about this issue some more, I came up with two examples of someone leaving a dynamic language over issues like this – and I’m not sure how closely tied they are to maintenance problems, but these were the only ones I could come up with: Reddit and CDBaby. Reddit switched from Lisp to Python, and CDBaby switched from Ruby to PHP. Funny: they switched away from a dynamic language – but not to a static language. Instead they switched to another dynamic language, so the problem was probably not something static typing would have solved (at least not in the eyes of the teams responsible for these switches).

I’m not saying I know this is true, because I have no real, hard evidence one way or the other, but to me the “obvious” claim that dynamic languages are harder to maintain smells a bit fishy. I’m going to work under the hypothesis that this claim is mostly myth. And if it’s not a myth, it’s still a red herring – it takes the focus away from more important concerns in the difference between static and dynamic typing.

I did a quick round of shouted questions to some of the colleagues at ThoughtWorks I know and respect – and who were online on IM at the time. The general message was that it depends on the team. The people writing the code, and how they write it, matter much more than static or dynamic typing. If you assume the team is good and the code is treated well from day 0, static versus dynamic typing doesn’t make a difference for maintainability.

Rebecca Parsons, our CTO, said this:

I think right now the tooling is still better in static languages. I think the code is shorter generally speaking in dynamic languages which makes it easier to support.

I think maintenance is improved when the cognitive distance between the language and the app is reduced, which is often easier in dynamic languages.

In the end, I’m just worried that everyone seems to take the maintainability story as fact. Has there been any research done in this area? Smalltalk and Lisp have been around forever; there should be something out there about how good or bad maintenance of these systems has been. There are three possible reasons I haven’t seen it:

  • It’s out there, but I haven’t looked in the right places.
  • There are maintenance problems in all of these languages, but people using dynamic languages aren’t as keen on whining as Java developers.
  • There are no real maintenance problems with dynamic languages.

There is a distinct possibility I’ll get lots of anecdotal evidence in the comments on this post. I would definitely prefer facts, if there are any to be had.



A New Hope: Polyglotism


OK, so this isn’t necessarily anything new, but I had to go with the running joke of the two blog posts this one is more or less a follow-up to. If you haven’t already read them, go read Yegge’s Dynamic Languages Strikes Back and Beust’s Return Of The Statically Typed Languages.

So let’s see. Distilled, Steve thinks that static languages have reached the ceiling for what’s possible to do, and that dynamic languages offer more flexibility and power without actually sacrificing performance and maintainability. He backs this up with several research papers that point to very interesting runtime performance improvement techniques that really can help dynamic languages perform exceptionally well.

On the other hand, Cedric believes that Scala is bad because of implicits and pattern matching; that it’s common sense not to allow people to use the languages they like; that tools for dynamic languages will never be as good as the ones for static languages; that Java generics aren’t really a problem; that dynamic language performance will improve but that this doesn’t matter; that static languages haven’t really failed at all; and that Java is still the language of choice, and will continue to be for a long time.

Now, these two bloggers obviously have different opinions, and it’s really hard to actually see which parts are facts and which are opinions. So let me try to sort out some facts first:

Dynamic languages have been around for a long time – as long as statically typed languages, in fact. Lisp was the first one.

There have been extremely efficient dynamic language implementations. Some Common Lisp implementations are on par with C performance, and Strongtalk also achieved incredible numbers. As several commenters have noted, Strongtalk’s performance did not come from the optional type tags.

None of the dynamic languages in wide use today are even on the same map with regard to performance. There are several approaches to fixing this, but we can’t know how well they will work out in practice.

Java’s type system is not very strong, and not very static, as these definitions go. From a type-theoretic standpoint, Java offers neither static type safety nor any complete guarantees.

There is a good reason for these holes in Java. In particular, Java was created to give lots of hints to the compiler, so the compiler can catch errors where the programmer is inconsistent. This is one of the reasons you very often find yourself writing the same type name twice, including the type arguments (generics). If the programmer makes a mistake on one side, the compiler can catch the error very easily. It is a redundancy in the syntax that makes Java programs very verbose, but helps against certain kinds of mistakes.

Really strong type systems like those of Haskell and OCaml provide extremely strong compile-time guarantees: if the compiler accepts your program, you will never see a runtime error from the type system. This also allows these compilers to generate very efficient code, because they know more about the state of the application at most points in time than the compiler for Java, which knows some things, but not nearly as much as Haskell’s or OCaml’s.

The downside of really strong type systems is that they disallow some extremely common expressions – things you can intuitively imagine, but that can’t be expressed within the constraints of such a type system. One solution is to add higher kinds, but these tend to create more complexity and also suffer from some of the same problems.

So, we have three categories of languages here: the strongly statically checked ones, like Haskell; the weakly statically checked ones, like Java; and the dynamically checked ones, like Ruby. The way I look at it, they are good at very different things. They don’t even compete in the same leagues, and comparing them is not really a valid line of reasoning. The one thing I am totally sure of is that we need better tools, and the most important tool in my book is the language. It’s interesting: many Java programmers talk so much about tools, but they never seem to think of their language as a tool. For me, the language is what shapes my thinking, and thus it’s definitely much more important than which editor I’m using.

I think Cedric has a point in that dynamic language tool support will never be as good as that for statically typed languages – at least not when you define “good” as the things current Java tools are good at. Steve thinks the tools will be just as good, but different. I’m not sure. To a degree I know that no tool can ever be completely safe and complete as long as the language includes things like external configuration, reflection and so on. There is no way for a tool to cover all the dynamic aspects of Java, but sticking to the common mainstream parts of the language will give you most of them. As always, this is a tradeoff. You might get better IDE support for Java right now, but you will be able to express things in Ruby that you just can’t express in Java, because the abstractions would become too large.

This is the point where I’m going to do a cop-out. These discussions are good, to the degree that we are working on improving our languages (our tools). But there is a fuzzy line in these discussions where you end up comparing apples and oranges. These languages are all useful, for different things. A good programmer uses his common sense to provide the best value possible, and that includes choosing the best language for the job. If Ruby allows you to deliver functionality five times faster than the equivalent in Java, you need to think about whether the tradeoff is acceptable. On the one hand, Java has IDEs that make maintenance easier; on the other, the Ruby code base you end up maintaining will be a fifth the size of the Java one. Is that tradeoff acceptable? In some cases yes, in some cases no.

In many cases the best solution is a hybrid one. There is a reason Google allows more than one language (C++, Java, Python and JavaScript): the languages are good at different things. They have different characteristics, and you can get a synergistic effect by combining them. A polyglot system can be greater than the sum of its parts.

I guess that’s the message of this post. Compare languages; understand your most important tools. Have several different tools for different tasks, and understand the failings of your current tools. Reason about these failings in relation to the tasks they should do well, instead of just comparing languages to languages.

Be good polyglot programmers. The world will not have a new big language again, and you need to rewire your head to work in this environment.