Ioke is not a Lisp


I generally get lots of different kinds of reactions from people when showing them Ioke. A very common misconception about the language is that it is “just another dialect of Lisp”. That is quite an easy mistake to make, since I’ve borrowed heavily from the things I like about Lisp. But there are also several things that make Ioke fundamentally different. Ioke is Lisp turned inside out and skewed a bit.

First we have the syntax. Ioke has syntax inspired by Ruby and Self, but includes some variations that come from Common Lisp too. The main difference between how Ruby and Ioke handle syntax is that Ioke will transform everything down to the same message passing idiom. Most of the syntax is in the form of operators, which don’t get any special handling by the parser at all. But the syntax is still there, and it is also more deeply embedded than it generally is in Lisp. Ioke acknowledges the existence of different kinds of literals – and allows you to override and handle them differently if you want. One of the examples in the distribution is a small parser combinator library. It allows you to use regular text and number literals, but in the context of the parser those literals will return a new parser for their type.

Common Lisp can play these syntactic games with reader macros, but it is generally not done. Ioke embraces the use of syntax to improve readability and the creation of nice DSLs.
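To make that concrete, here is a minimal sketch of the kind of reader macro game you can play in Common Lisp – adding a square-bracket list syntax with the standard set-macro-character machinery (this is just an illustration, nothing Ioke-specific):

;; Make [a b c] read as (list a b c) -- the syntax is handled entirely at
;; read time, and the rest of the system still only ever sees lists.
(defun read-left-bracket (stream char)
  (declare (ignore char))
  (cons 'list (read-delimited-list #\] stream t)))

(set-macro-character #\[ #'read-left-bracket)
(set-macro-character #\] (get-macro-character #\)))

;; After this, [1 2 (+ 1 2)] reads as (LIST 1 2 (+ 1 2)) and evaluates to (1 2 3).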

Of course, any Lisp guy will tell you that syntax has been tried – but programmers preferred S-expressions. The latest example of this is Dylan. But I’ll have to admit that if you look at the Dylan syntax, you understand why programmers didn’t feel like bothering with it. It’s one thing to try syntax by just adding a rather clumsy Algol derivative. It is another thing entirely to actually focus on syntax.

That said, Ioke is homoiconic, and translates to a structure that could easily be represented as cons cells. It doesn’t, though, since the message chain abstraction works better.

The other thing that really makes Ioke different from Lisp – and also a reason that would make infix S-expressions extremely impractical – is that Ioke is fundamentally object oriented, based on a message passing idiom. In a Lisp – in Common Lisp, at least – all function calls are evaluated in the context of the current package. You can get different behavior if the function you call is in fact a generic function, but in reality you’re still talking about one function. If you want to chain method calls together, you have to turn them inside out. That doesn’t lend itself well to a message passing paradigm where there is an explicit receiver for everything.

Ioke, in contrast, has the message passing model at its core. Any evaluation in Ioke is a message send to some receiver. In that model, you really need to make it easy to chain messages in some way. And that’s why S-expressions would never work well for Ioke. S-expressions fundamentally don’t use the concept of a receiver at all.

All the other differences between Ioke and any Lisp could be chalked up to minor dialectical differences, but the two biggies are the above. Ioke is not a Lisp. It’s heavily inspired by Lisp, but it’s fundamentally different from it.



Reverse Cambridge Polish notation for Lisp?


One of the things that still seems unnatural sometimes with Lisp is the fact that in many cases the actual evaluation needs to start quite far into the structure – especially if you’re programming in a heavily functional style. The problem with this is that the way you read the code doesn’t go left-to-right. Instead you need to read inside out. To make this less abstract, take a look at this example Lisp code:

(defun foo (x)
  (flux
   (bar
    (conc "foo"
          (get-bar
           (something "afsdfsd")
           (if (= x "foo")
               (conc "foo" "bar")
               (conc "bar" "foo")))))))

As you can see, this is totally bogus code. The nesting makes it possible to identify the parts that will be evaluated first. Compare that to the order in which it will actually be evaluated. You can see this order by transforming it to Reverse Cambridge Polish notation. This is equivalent – although I know of no Lisp that actually does it. As you can see, the way you read it is actually the order in which it will be evaluated:

(foo (x)
  ((("foo"
     (("afsdfsd" something)
      ((x "foo" =)
       ("foo" "bar" conc)
       ("bar" "foo" conc)
       if)
      get-bar)
     conc)
    bar)
   flux)
  defun)

Well. It’s different, I grant you that. And of course, you need to keep the stack in your head while reading it. But it’s an interesting experiment.



Clojure


I know I’ve mentioned Clojure now and again in this blog, but I haven’t actually talked that much about it. I feel it’s time to change that right now – Clojure is in the air and it’s looking really interesting. More and more people are talking about it, and after the great presentation Rich gave at the JVM language summit I feel that there might be some more converts in the world.

So what is it? Well, a new Lisp dialect for the JVM. It was originally targeting both the JVM and .NET, but Rich ended up not going through with that (a decision I can understand after seeing the effort Fan has to expend to keep providing this feature).

It’s specifically not an implementation of either Common Lisp or Scheme, but instead a totally new language with some interesting features. The most striking one is the way it embraces functional programming. In comparison to Common Lisp, which I would characterize as a multiparadigm language, Clojure has a heavy bent towards functional programming. This includes a focus on immutable data structures and support for good concurrency models. He’s even got an implementation of STM (software transactional memory) in there, which is really cool.
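
To give a feel for it, here is a minimal sketch of the STM in use – ref, dosync and alter are the standard operations; the account names are of course made up:

;; Refs hold shared state; every write has to happen inside a dosync
;; transaction, which retries automatically on conflict.
(def account-a (ref 100))
(def account-b (ref 0))

(defn transfer [amount]
  (dosync
    (alter account-a - amount)
    (alter account-b + amount)))

;; (transfer 25) moves the money atomically: no reader can ever observe a
;; state where it has left account-a but not yet arrived in account-b.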

So what do I think about it? First of all, it’s definitely a very interesting language. It has also taken the ideas of Lisp and twisted them a bit, adding some new ideas and refining some old ones. If I wanted to do concurrency programming for the JVM I would probably lean more towards Clojure than Scala, for example.

All that said, I am in two minds about the language. It is definitely extremely cool and it looks very useful. The libraries specifically have a lot going for them. But the other side of it for me is the question of Lisp purity. One of the things I really like about Lisps is that they are very simple. The syntax is extremely small, and in most cases everything will just be either lists or atoms and nothing else. Common Lisp can handle other syntax with reader macros – which still end up producing only lists and atoms. This is extremely powerful. Clojure has this to a degree, but adds several basic composite data structures that are not lists: sets, vectors and maps. From a pragmatic standpoint I can understand that, but the fact that they are basic syntax instead of reader macros means that if I want to process Clojure code I will end up having to work with several kinds of composite data structures instead of just one.
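
To make the point concrete, these are the literal forms in question – each one reads as its own kind of composite structure rather than as a list:

'(1 2 3)     ; a list
[1 2 3]      ; a vector
{:a 1 :b 2}  ; a map
#{1 2 3}     ; a set

;; So anything that walks Clojure source has to branch on all of these
;; (seq?, vector?, map?, set?) instead of handling only lists.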

This might seem like a small thing, and it’s definitely not something that would stop me from using the language. But the Lisp lover in me cringes a bit at this decision.

All in all, Clojure is really cool and I recommend that people take a look at it. It’s getting lots of attention and people are writing about it. Stu Halloway is currently in the process of porting Practical Common Lisp to Clojure, and I recently saw a blog post about someone porting On Lisp to Clojure, so there is definitely an interest in it. The question is how this will continue. As I’ve started saying more and more: these are interesting times for language geeks.



How large is your .emacs?


I’ve been reading lots of blogs and opinions about Emacs the last few days. What strikes me is all these people who brag about how large their .emacs files have become. So let me make this very clear:

If your .emacs file is longer than a page YOU ARE DOING IT WRONG.

Why? Well. Unless you are a casual Emacs user, your .emacs should not be regarded as a configuration file. Rather, the .emacs file is actually the entry point to the source repository of your own version of Emacs. In effect, when you configure Emacs you create a fork, which has its own source that you need to maintain. This is not configuration. This is programming, and you should approach it like you do all programming. What does that mean? Modularization. Clean code. Code comments. Source control. Tests. But modularization and source control are the ones that matter most for my Emacs configuration. I have loads of files in ~/emacs, and every kind of extension I write has its own file or directory to put it in. The ~/emacs directory is checked out from source control, and has got customizations for different platforms. That’s why my .emacs file is 4-5 lines long: two for setting customizations that are specific to this computer, and the rest to load the stuff inside of ~/emacs. And that’s all.
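
To make that concrete, here is a minimal sketch of what such a .emacs can look like – the variable name emacs-root is the one used further down in this post, but the file names here are just placeholders:

;; ~/.emacs -- everything real lives under source control in ~/emacs
(defvar emacs-root "/home/me/"
  "Root from which all my load paths are computed (machine specific).")

;; Machine-specific tweaks go here, e.g. a font or a proxy setting.

;; Hand over to the real configuration.
(load (concat emacs-root "emacs/init.el"))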

So how do you handle modularization of Emacs Lisp code? This won’t be a tutorial, just a few pieces of advice that might make things easier.

In no specific order:

  • (load "file.el") will allow you to just load another file.
  • (require 'cl) will give you lots of nice functionality from Common Lisp
  • I recommend you have one place where you add all your load paths. Mine looks something like this:
    (labels ((add-path (p)
               (add-to-list 'load-path
                            (concat emacs-root p))))
      (add-path "emacs/jde/lisp")
      (add-path "emacs/nxml")
      (add-path "emacs/own")  ;; Personal elisp code
      (add-path "emacs/lisp") ;; Various elisp code, just dumped here
      )
  • Why do it like this? Well, it gives you an easier way to add full paths to your load path without repeating lots of stuff. This depends on you defining emacs-root somewhere – do define it, it can be highly useful.
  • Set custom-file. (The custom-file is the file where Emacs saves customizations. If you don’t set a specific file for this, you will end up getting all customizations saved into .emacs which you really don’t want.) The code for this is simple. Just do (setq custom-file "the-file-name.el")
  • Use hooks and advice liberally. They allow you to attach new functionality without monkey patching. (There is a small sketch of both right after this list.)
  • If you ever edit XML, NEVER use Emacs builtin XML editor. Instead download the excellent NXML package.
  • Learn how to use Info and customizations
  • Use Ido mode
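
As promised in the hooks-and-advice bullet, here is a minimal sketch of both mechanisms. The particular hook and the yank advice are just illustrations, not anything you have to adopt:

;; A hook: run extra setup every time a mode is turned on,
;; without redefining the mode itself.
(add-hook 'ruby-mode-hook
          (lambda ()
            (setq show-trailing-whitespace t)))

;; A piece of advice: wrap an existing command with new behavior.
;; This classic snippet reindents whatever was just yanked.
(defadvice yank (after indent-yanked-region activate)
  "Reindent the region that was just inserted."
  (indent-region (mark t) (point)))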

Feel free to add other good advice in the comments. These are just a small smattering of things I like and that help your environment quite a bit. But the most important part is the whole thing about keeping your .emacs extremely small!

Addendum: As Phil just pointed out (and as was part of my plan from the beginning), autoloads should be used as much as possible. Also, make sure to byte-compile as much as possible.
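
A quick sketch of both suggestions – the command and library names below are placeholders, and emacs-root is the variable from earlier:

;; Autoload: my-big-library.el is only loaded the first time
;; my-big-command is actually invoked.
(autoload 'my-big-command "my-big-library"
  "Run my big command." t)

;; Byte-compile everything under ~/emacs that hasn't been compiled yet.
(byte-recompile-directory (concat emacs-root "emacs") 0)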



Rubinius and Lisp


Yesterday in the #rubinius channel, Evan, MenTaLguY and I had the beginnings of a discussion about a Lisp dialect for the Rubinius VM. Since this sounds exactly like the kind of thing I’ve been thinking about for some time, I got excited. Anyway, Pat Eyler covered the issue much better and did a small interview with us that can be read here. Ruby is wonderful. Rubinius is sweet. With Lisp added to the mix, it will be even better.

I’ll probably follow this up with some more information and ideas as soon as it’s fleshed out.



An Emacs diversion: Font sizes


After a few weeks of very intense JRuby-OpenSSL hacking, I felt the need to do something different, so I’ve spent a few hours with my slightly rusty Emacs Lisp skills, trying to fix something that I really need: control over fonts and font sizes in Emacs, on Linux. I want it inside Emacs and customizable from Emacs Lisp. To my surprise I couldn’t find anything like that anywhere.

For me personally, it’s necessary when presenting: since I usually code with a small font in Emacs, the code would otherwise be totally unreadable when presenting. And since I don’t have a fancy MacBook Pro, I need to be able to zoom in and out inside Emacs.

It wasn’t easy, but presto, I’ve managed it. For some reason, font handling seems quite backward in Emacs. I had to extract the current font, split it apart, and then join the new array together again. Not neat, and my way of doing it is probably not the best. But, for your pleasure, here is the code to do it, and also some code that establishes a font ring of the standard fonts in different sizes that you can walk through:

(defun inc-font-size ()
  (interactive)
  (let* ((current-font (cdr (assoc 'font (frame-parameters))))
         (splitted (split-string current-font "-"))
         (new-size (+ (string-to-number (nth 7 splitted)) 1))
         (new-font (concat (nth 0 splitted) "-"
                           (nth 1 splitted) "-"
                           (nth 2 splitted) "-"
                           (nth 3 splitted) "-"
                           (nth 4 splitted) "-"
                           (nth 5 splitted) "-"
                           (nth 6 splitted) "-"
                           (number-to-string new-size) "-*-"
                           (nth 9 splitted) "-"
                           (nth 10 splitted) "-"
                           (nth 11 splitted) "-*-"
                           (nth 13 splitted))))
    (if (> (length splitted) 14)
        (dotimes (n (- (length splitted) 14))
          (setq new-font (concat new-font "-" (nth (+ n 14) splitted)))))
    (set-default-font new-font t)
    (set-frame-font new-font t)))

(defun dec-font-size ()
  (interactive)
  (let* ((current-font (cdr (assoc 'font (frame-parameters))))
         (splitted (split-string current-font "-"))
         (new-size (- (string-to-number (nth 7 splitted)) 1))
         (new-font (concat (nth 0 splitted) "-"
                           (nth 1 splitted) "-"
                           (nth 2 splitted) "-"
                           (nth 3 splitted) "-"
                           (nth 4 splitted) "-"
                           (nth 5 splitted) "-"
                           (nth 6 splitted) "-"
                           (number-to-string new-size) "-*-"
                           (nth 9 splitted) "-"
                           (nth 10 splitted) "-"
                           (nth 11 splitted) "-*-"
                           (nth 13 splitted))))
    (if (> (length splitted) 14)
        (dotimes (n (- (length splitted) 14))
          (setq new-font (concat new-font "-" (nth (+ n 14) splitted)))))
    (set-default-font new-font t)
    (set-frame-font new-font t)))

(defvar *current-font-index* 0)

(defconst *font-ring* '(
"-urw-nimbus mono l-regular-r-normal--15-*-88-88-p-*-iso8859-1"
"-urw-nimbus mono l-regular-r-normal--17-*-88-88-p-*-iso8859-1"
"-Adobe-Courier-Medium-R-Normal--14-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--16-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--18-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--20-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--22-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--24-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--26-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--28-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--30-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--32-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Medium-R-Normal--34-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--14-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--16-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--18-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--20-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--22-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--24-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--26-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--28-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--30-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--32-*-100-100-M-*-ISO8859-1"
"-Adobe-Courier-Bold-R-Normal--34-*-100-100-M-*-ISO8859-1"
"-Misc-Fixed-Medium-R-SemiCondensed--10-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-SemiCondensed--12-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-SemiCondensed--13-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--13-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--14-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--15-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--16-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--17-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--18-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--19-*-75-75-C-*-ISO8859-1"
"-Misc-Fixed-Medium-R-Normal--20-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--12-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--13-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--14-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--15-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--16-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--17-*-75-75-C-*-ISO8859-1"
"-Schumacher-Clean-Medium-R-Normal--18-*-75-75-C-*-ISO8859-1"
"-Sony-Fixed-Medium-R-Normal--14-*-100-100-C-*-ISO8859-1"
"-Sony-Fixed-Medium-R-Normal--16-*-100-100-C-*-ISO8859-1"
"-Sony-Fixed-Medium-R-Normal--18-*-100-100-C-*-ISO8859-1"
"-Sony-Fixed-Medium-R-Normal--20-*-100-100-C-*-ISO8859-1"
"-B&H-LucidaTypewriter-Medium-R-Normal-Sans-14-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Medium-R-Normal-Sans-16-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Medium-R-Normal-Sans-18-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Bold-R-Normal-Sans-20-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Bold-R-Normal-Sans-24-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Bold-R-Normal-Sans-30-*-100-100-M-*-ISO8859-1"
"-B&H-LucidaTypewriter-Bold-R-Normal-Sans-34-*-100-100-M-*-ISO8859-1"
))

(defun font-next ()
  (interactive)
  (let ((len (length *font-ring*))
        (next-index (+ *current-font-index* 1)))
    (if (= next-index len)
        (setq next-index 0))
    (setq *current-font-index* next-index)
    (message (concat "setting " (nth *current-font-index* *font-ring*)))
    (set-default-font (nth *current-font-index* *font-ring*) t)
    (set-frame-font (nth *current-font-index* *font-ring*) t)))

(defun font-prev ()
  (interactive)
  (let ((len (length *font-ring*))
        (next-index (- *current-font-index* 1)))
    (if (< next-index 0)                 ; wrap around below the first font
        (setq next-index (- len 1)))
    (setq *current-font-index* next-index)
    (set-default-font (nth *current-font-index* *font-ring*) t)
    (set-frame-font (nth *current-font-index* *font-ring*) t)))

(defun font-current ()
  (interactive)
  (cdr (assoc 'font (frame-parameters))))

(defun font-set (ix)
  (setq *current-font-index* ix)
  (set-default-font (nth *current-font-index* *font-ring*) t)
  (set-frame-font (nth *current-font-index* *font-ring*) t))

(provide 'fontize)

I also bound these commands to keys, like this:

(global-set-key [?\C-+] 'inc-font-size)
(global-set-key [?\C--] 'dec-font-size)
(global-set-key [?\M-+] 'font-next)
(global-set-key [?\M--] 'font-prev)

Hope this helps someone in the same situation.



Three ways to add Ruby Macros


As most of my readers have probably realized at this point, I have a few obsessions. Lisp and Ruby happen to be two of the more prominent ones. And regarding Lisp, macros are what especially interest me. I have been thinking a lot lately about how you could go about adding some kind of macro facility to Ruby, and these three options are the result.

I should begin by saying that none of these options are entirely practical right now. All of them have some serious problems which I frankly haven’t been able to come up with an answer for yet. But that doesn’t stop me from blogging about my ideas, of course. Another thing to note is that this is not about hygienic macros. This is the full-blown, full-power, blow-the-moon-away version of macros.

MacRuby – Direct defmacro in Ruby
The first approach rests on modifying the language itself. You can add a defmacro keyword which takes a name and a code block to execute. Each time the compiler/interpreter finds a macro definition, it will remember the name. Each place where that name is later found in the code will be marked. Then, before execution begins, every marked call site will be replaced by the output of calling the macro with the subnodes at that place. An example of a simple macro:

defmacro log logger, level, *messages
  if $DEBUG
    [:call, logger, level, *messages]
  else
    [:nop]
  end
end

log @l, :debug, "value is: #{very_expensive_operation()}"

What’s interesting in this case is that the messages will not be evaluated if the $DEBUG flag is not set. This is because the call will be spliced into the AST only if that flag is set; otherwise a no-op is inserted instead. Obviously, for this kind of code to work, the interpreter would need to change substantially. There is also a big problem with it, since it’s very hard to fit this model into the object-oriented system of Ruby. As I think about it now, it seems macros would be the only non-OOP feature in Ruby, if added in this way. Another big problem with this model is that it is really not that intuitive what the resulting code from the macro will be. As soon as something more advanced needs to be returned, it will be very hard to get it straight in your head. One solution to this would be to do it the standard CL way: first write the output you want from the macro for a few different inputs, then transform those examples into AST-building code with a tool that parses them, and finally turn that into the macro. This process would be helped by tools, of course.
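
For comparison, this is roughly what that workflow looks like in Common Lisp itself – a minimal sketch where log-msg and *debug-enabled* are just placeholder names: decide on the expansion you want, turn it into a backquote template, and check the result with macroexpand-1.

;; Step 1: the expansion we want for
;;   (log-msg *l* :debug (expensive-message))
;; when debugging is on is (funcall *l* :debug (expensive-message)).
(defvar *debug-enabled* nil)

;; Step 2: turn that example output into a backquote template.
(defmacro log-msg (logger level &rest messages)
  (if *debug-enabled*
      `(funcall ,logger ,level ,@messages)
      nil))

;; Step 3: verify the expansion.
;; (macroexpand-1 '(log-msg *l* :debug (expensive-message)))
;; => NIL when *debug-enabled* is NIL, so the message is never evaluated.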

Back-and-Lisp-Ruby – Write macros in Lisp, translate Ruby back and forth

Another way to achieve this power in Ruby would be to separate the macro language from the main language. In effect, the macros would be a classic preprocessor. To offer the same level of power as Lisp and others, the best way would be to write the macros themselves in a Lisp dialect, then transform Ruby in a well-defined way to Lisp and back again. (See the next version for more about this idea.) In this situation the same macro as before could look like this:

(defmacro log (logger level &rest messages)
  (if $DEBUG
      `(,level ,logger ,@messages)
      '()))

The main difference in this code is that the macro and the output from the macro are Lisp. We have gotten rid of the ugly :call and :nop return values, and to me this seems quite readable. Of course, I’m not sure everyone else feels the same way. And we still have the same problem with object orientation: it’s missing.

RoCL – Ruby over Common Lisp
The final idea is to build a Ruby runtime within Common Lisp and transform Ruby into Common Lisp before running it. The macros could be added as either Ruby code or Lisp code. Everything would be transformed into the equivalent code in Lisp, maybe using CLOS as the object system, or building something based on Ruby’s. Of course, the semantics of many things would change, and many libraries would need to be rewritten. But in the end, there would be incredible power available. Especially if we can make it go both ways, so that Common Lisp can use Ruby libraries.

An example transformation could look like this. From this Ruby:

class String
  def revert(a, *args)
    if block_given?
      yield a
    else
      args + [a]
    end
  end
end

"abc".revert "one" do |x|
  puts x
end

This is nonsense code, if you hadn’t noticed. =) It would translate into something like this Lisp:

(with-class "String" nil
  (def revert (a block &rest args)
    (if block
        (apply block a)
        (+ args [a]))))

(revert "abc" "one" #'(lambda (x)
                        (puts self x)))

Conclusions
It is very hard to actually retrofit macros into Ruby after the fact. I’m still not sure it can be done while keeping enough of Ruby’s semantics to make it meaningful. It seems that we need a new language. But if I had to choose among these approaches, the RoCL one seems the most interesting and also the most fun to implement. If I have a motto, it would have to be something along the lines of “best of all worlds”. I want the best from Ruby, Java, Lisp, Erlang and everything else I can find.



The Dark Ages of programming languages


We seem to be living in the dark ages of programming languages. I’m not saying this to bash everything; I’m actually being totally objective right now. Obviously, our situation right now is much better than it was 10 years ago. Or even 5 years ago. I would actually say that it’s really much better now than 1 year ago. But programming is still way too painful in almost all cases. We are doing so much stuff by hand that obviously should be done by computer.

I spend quite a lot of time learning new languages now and then, trying to find something that’s really good for me. So far, the best contestants are Ruby, Erlang, OCaml and Lisp, but all of those have their share of problems too. They just suck less than the alternatives.

  • Ruby… I really like Ruby. Ruby is such an improvement that I really want to do almost everything in it nowadays. I think in Ruby half the time and in Lisp the other half. But it’s not enough. It is still clunky. I want tail calls. I want real macros. I want blazing speed and complete integration with good libraries for everything and more. I’m just a sucker for power, and I want more of it in Ruby.
  • Erlang and OCaml. These languages are really great. For specific applications. Specifically, Erlang is totally superior for concurrent programming. And OCaml is incredibly fast, very typesafe and has great GUI libraries. So, if I were asked to do something massively concurrent I would probably choose Erlang, and OCaml if it was GUI programming. But otherwise… Well, Erlang does have some neat functional properties, but no nice macro support. It doesn’t have a central code repository or many of the other things you expect from a general purpose language. OCaml suffers from the same things.
  • Lisp is the love of my life. But as so many people before me have noted, all the implementations are bad in some way or another. Scheme is lovely – for research. Common Lisp is so powerful, but it needs users. Lots of them, creating libraries for every little data format there can be, and creating competing implementations of particularly important APIs, like database access.

Conclusion: nothing is good enough right now. I see two paths ahead. Two ways that could actually end in the “100-year language”.

The first path is one new language. This language will be based on all the best features of all current languages, plus a good amount of research output. I have a small list of what this language would need to be successful as the next big one:

  • It needs to be multiparadigm. I’m not saying it can’t choose one paradigm as its base, but it should be possible to program in it in functional, OO, AOP and imperative styles. It should be possible to build a declarative library so you can do logic programming without leaving the language.
  • It should have static type inference where possible. It should also allow optional type hints. This is so important for creating great implementations. It can also increase readability in some cases.
  • It needs all the trappings of functional languages: closures, first-class functions and lambdas. This is essential, to avoid locking the language into an evolutionary corner.
  • It needs garbage collection. Possibly several competing implementations of GCs, running evolutionary algorithms to find out which one is best suited for the long-running processes of the program in question.
  • A JIT VM. It seems almost a given right now that Virtual Machines are a big win. They can also be made incredibly fast.
  • Another JIT VM.
  • A non-VM implementation. Several competing implementations for different purposes are important, to allow competition and experimentation with new implementation features.
  • Great integration with legacy languages (Java, Ruby (note, I’m counting on all Rubyists moving to this new language when it gets out, making Ruby legacy), Cobol). This is obvious. There are too many things lying around, bitrotting, that we will never get rid of.
  • The language and at least one production quality implementation needs to be totally open-source. No lock-in of the language should be possible.
  • Likewise, good company support is essential. A language needs money to be developed.
  • A centralized code/library repository. This is one of Java’s biggest failings. Installing a new library in Java is painful. We need something like CPAN, ASDF, RubyGems.
  • The language needs great, small and very orthogonal libraries. The libraries included with the language need to be great, since they have to be small but still pack all the most needed punch.
  • Concurrency must be a breeze. There should be facilities in the language itself for making this obvious. (Like Erlang or Gambit Scheme).
  • It should be natural to do meta-programming in it (in the manner of Ruby).
  • It should be natural to solve problems bottom-up, by implementing DSLs inside or outside the language.
  • The language needs a powerful macro facility that isn’t too hard to use.
  • Importantly, for the macro facility, the language needs to have a well-defined syntax tree of the simplest possible kind, but it also needs to have optional syntax.

So, that’s what I deem necessary (but maybe not sufficient) for a really useful, good, long-term programming language. When I read this list, it doesn’t seem that probable that this language will show up any time soon, though. Actually, it seems kind of unrealistic.

So maybe the other way ahead is the right one? The other way I envision is that languages become easier and easier to create, and different languages have their strengths in different places. Along this path I envision the descendants of Ruby and Erlang exploiting what they’re good at and eschewing everything else. But for this strategy to work, the first thing implemented in each language needs to be a seamless way to integrate with other languages. Maybe an extremely good glue language will come along (not like Perl or Ruby, but a language that will only serve as glue between programming languages), and all languages will implement good support for it. For example, you could build a base Erlang concurrency framework that uses G (the glue language) to implement some enterprise functionality in Java sandboxes, with some places where Ruby, through G, implements a DSL, which in turn has subparts where Ruby uses G to run Prolog knowledge engines.

If I had to choose between the two futures, I am frankly more inclined towards the one-language one. But the multi-language way seems much more probable. And since I’m trying to choose a way now, I’m placing my bets on the second option. We are not ready to implement G yet, but I do think that as many programming-language people as possible should do their best to learn how languages can cooperate in different ways, to prepare for this project.