Some results from the Ioke experiment


It’s been a bit over 18 months since I first released Ioke into the wild. During this time I’ve always been clear that Ioke is first and foremost a language experiment. I changed many things to see what would work and what would not. I thought I’d take stock and look at a few of these decisions and how I feel about them now. Ioke never got a huge user base, of course, so most of these impressions are based on my continued work on the language, and on experiences trying to explain its features.

This does not mean in any way that the Ioke experiment is over. I will continue working on Ioke and see what other interesting things come out of it.

White space separation for method calls

I adapted Io’s, Self’s and Smalltalk’s syntax for Ioke. This meant I could use periods to end expressions instead (taking the role that semicolons play in most other languages). Personally I like this a lot. Readability improves substantially by using white space for method calls. The only thing that makes it a bit tricky is the interaction with regular expression syntax. I ended up adding an initial character to regular expressions to make them easily distinguishable. I thought I would dislike that more than I do. So white space is definitely a win.

Keyword syntax in method calls

Having methods take keyword arguments and positional arguments baked into the language was also a big win. There is a huge difference between this approach and something like Ruby – having it first class means it is easy to do things like collecting all keyword arguments, providing default values and so on. It also makes introspection and documentation much better. Finally, the duality of dictionary creation with keywords and regular method invocation ended up being very pleasing. Another clear win. Languages should have keyword arguments.

Nontraditional naming

I wanted to see what would happen if I stayed away from the traditional names in the object oriented languages of today. So I didn’t use Object, String, prototype, slot, clone or property. The most obvious place for this is in the core concepts of the language. The place where user code starts is called Origin. I don’t miss Object as such, but I’m not sure Origin is the clearest way of talking about this object. My current thoughts are going in the direction of something like “Vanilla” (from Flavors), or “Something”. Another problematic renaming was to talk about the act of creating a new object as “mimicking”, and to call the parents of an object “mimics”. It ended up being very confusing, both from a verb/noun standpoint, but also from simply being too opaque. So that’s a definite failure. I’m still comfortable with “cell” instead of “slot” or “property”. I’m also happy with “Ground”, “Base” and “DefaultBehavior”. All of these communicate clearly what they should. I’m also happy about the renaming of “String” to “Text”. I don’t use the type name much in Ioke code, but when I do, “Text” feels much better.

Numerical tower, and no real numbers

I’ve always liked numerical towers in programming languages, and it feels good to have one in Ioke. Ratios are also necessary as first class concepts. I also decided not to have real numbers, only the equivalent of BigDecimals. That was probably a good decision for Ioke, and I still feel real numbers are problematic – though I don’t think removing them from a language is the right solution in general.

Condition system instead of exceptions

The decision to adapt and include a condition system based on Common Lisp was definitely a success. I like the programming model and it makes code much more flexible and expressive. Clear win.

No global scope

This is also a clear win. It’s a tricky one to do in many languages. You have to unify things to a high level to make it possible to get away from global state. But I think the benefits far outweigh the cost.

Specialized forms of code

Ioke has quite a few variations of runnable code. The main distinction is between things that are lexically scoped and things that are object scoped. Methods, Macros and Syntax are object scoped, while Blocks and Lecros are lexically scoped. This seemed like a good idea at the time, but if I were to do it again, I would try to unify several of those – at the cost of making the evaluation rules slightly more complicated. Especially having methods that aren’t lexical closures still surprises me regularly. I wonder if I was influenced by the way Ruby works when designing these parts.

Prototype OO

I’ve always maintained that properly implemented prototype based object orientation is both conceptually simpler and more powerful than class based object orientation. I still believe this to be true, and I still think prototype based is better than class based. However, there are some places where the model breaks down. There are some situations where it just makes sense to have a class that describes objects. Take numbers for example. In a prototype based scheme, what is the parent of the number 2? Is it the number 1? Not really. In Ioke I added a singleton object Number that is the parent of all numbers. But that still becomes weird, since you could write code like “Number + 9” and expect that to work. I’m not sure how to solve this problem. Of course, prototypes can represent classes without problem, but my concern is mostly what makes sense intuitively.

Ruby-like load/require system

For a language with a global scope and/or total mutability, it works quite well to just have things represented as scripts that modify a shared environment when loaded. However, some things become cumbersome – parameterization of modules/files becomes very ad hoc, and there is a real risk of conflicting names. If I were to redo this part I would probably opt for something slightly less convenient but more powerful, that allows you to work with software modules in a better way: exporting parts and keeping other parts private, parameterizing modules, binding modules under different names, and so on.



Should languages be multi-lingual?


I’m currently sitting in the Beijing ThoughtWorks office, and for some reason language is on my mind… =)

One of the discussions related to DDD that has turned up several times at conferences over the last few months is how you handle a ubiquitous language when your domain is not in English. Since most programming languages are based on English, you end up mixing English and Swedish, for example, if you are working with a Swedish domain. Of course, the benefits of working with these concepts in Swedish are very hard to argue against. But the dichotomy between the programming language and the domain language is definitely something that hurts my eyes, so I’m generally not very fond of that approach.

In fact, I haven’t heard anyone come up with a good solution to this problem, and this post is not really a solution either.

One of the things I’ve proposed to make this situation better is to create an external DSL that is fully in the domain language. That DSL can then be implemented in English. The main benefit is that there is a clear separation between the domain language and the programming language. On the other hand, the overhead of creating the DSL, and the complexities involved in translating the domain concepts into programming language concepts, can become problematic too.

One interesting feature of Cucumber is that you can easily add new natural languages to write the features in. When it comes to user stories at the level of testing that Cucumber provides, it’s really important to use the right language. So it got me thinking: could you use the same kind of approach in a general programming language too?

As an experiment I took a small example program for Ioke and translated it into Mandarin, with simplified Chinese characters. Of course I used Google Translate for this, so the translation is probably not very good, but the end result is still interesting. I’m not going to try to get this into my blog, so take a look at the file at github instead: http://github.com/olabini/ioke/blob/master/examples/chinese/account.ik. As you can see there is nothing in there that even reeks of English. If you don’t understand Chinese characters it is probably hard to see what’s happening here. Basically an Account object is created, with a “transfer” method and a “print” method. Further down, two instances of this Account object are created, some transfers are made, and then the objects are printed. But provided my translation is not too crappy, this code should make sense to someone reading Chinese.

Now, this is actually extremely simple to implement in Ioke, since it relies on several of the features Ioke handles very easily. That everything is a message really helps, and having everything be first class means I can alias methods and things like that without any worry. Obviously your language also needs to handle non-ASCII identifiers correctly, but that should be standard in this day and age.
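
The Ioke example lives in the repository, but the basic aliasing idea can be sketched in Ruby too. This is just my own illustration, not code from the examples, and the Chinese method name is my own rough translation:

# encoding: utf-8
# A rough Ruby sketch of the same idea: define the behaviour once and expose it
# under a translated name. The identifiers below are only illustrative.
class Account
  attr_reader :balance

  def initialize(balance = 0)
    @balance = balance
  end

  def transfer(amount)
    @balance += amount
  end

  # expose the same method under a Chinese name
  alias_method :转账, :transfer
end

account = Account.new(100)
account.转账(50)        # same as account.transfer(50)
puts account.balance    # => 150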

When thinking about it, something similar can be done in languages like Lisp, Smalltalk, Factor, Io and Haskell – but most other languages would struggle. If you have keywords in your language, it’s really a killer – you would need to branch your parser to make it happen.

Of course, this approach only works when you can simply translate from one word to another. If the writing system is right to left, or top to bottom, it’s much more tricky to create a good translation.

I’m also not sure if this is actually a good idea or not. It might be. The other thing I’ve been thinking about is how to handle multilingual editing. What if you want to be able to switch back and forth between languages? How do you handle identifiers with more than one name? Would you want to?

Lots of unanswered questions here. But it’s still fun to think about. Communication is the main goal, as usual.



NewSpeak at JavaZone


The best presentation at JavaZone so far was Gilad Bracha’s talk about NewSpeak. I might be one of a small number of people who think this, of course, since for most Java developers NewSpeak is quite out there. For language geeks, though, it’s a gold mine of interesting ideas, realized in a very nice way.

So, what is it? Well, NewSpeak is a new language created by Gilad Bracha (who used to work as Language Theologist for Sun – meaning he was one of the theory geeks for Java). NewSpeak actually doesn’t have anything to do with Java. It doesn’t run on the JVM. It’s not written in Java. It doesn’t look like Java. In fact, its closest relatives are Smalltalk, Self and Beta. It runs on top of Squeak, but there are plans to make it run outside too – probably targeting V8 for this.

If you were just to glance at the language, it looks a lot like Smalltalk. Gilad has based the syntax on Smalltalk, but has no problem adding or removing things to make code more readable, more accessible, and so on. As it turns out, though, many of the choices in Smalltalk are there for a reason, and keeping them gives lots of benefits.

So what are the nice features of it (except that it’s based on Smalltalk)? Well, these are the things that I took notice of:

  • No global state. Really. No global state at all. So how does it work? How do you create a library, for example? Well, a library is actually a method. That method will be called with something called a platform. This platform gives you access to common resources, but note that the platform is also an instance – there is no global state there. It also means that you can inject any kind of platform into a specific library. What really makes this powerful is that it allows security by capabilities: since there is no global state, a library can’t get access to something unless you inject it into that library. Gilad uses the example of the File object. If a File class hasn’t been injected into your library, you can’t actually use any files. Or if someone injects something that looks like a File object but only allows reading, the library can only read from files. Neat. Security just comes as a side effect of this design choice (see the sketch after this list). And by the way, dependency injection frameworks don’t exist in NewSpeak, since everything is injected in the language in the normal course of coding in it.
  • Scoped injected super classes. This one is a bit complicated to understand, but it’s really powerful. In fact, it looks a bit like categories. An example, you say? Well, say a library has a Foo class, a Bar that extends Foo, and a Baz that extends Foo. Then a piece of code that uses this library has a Foo2 class that is a subclass of Foo. But the kicker is that in this piece of code Foo2 is called Foo. In NewSpeak this means that, in reality, the super class of Bar and Baz will actually be Foo2 – but only inside the piece of code where Foo has been reset to be Foo2. This is a side effect of using interfaces for everything in the system.
  • Mirror reflection. One of the problems with building a secure system that still has powerful reflection capabilities is that it’s quite hard to scope this functionality in a secure way. So NewSpeak solves this by doing the same kind of injection for reflection as it does for everything else. That means you can’t do reflection on a specific class by itself – you have to get a Mirror object for that class, and the way you get a mirror is by calling a method on a Mirror class. This means that to use mirrors you need to inject a mirror class. And since you can inject a ReadableMirror, for example, this means you can handle security for reflection as you do everything else in NewSpeak. Really cool.
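
To make the capability idea a bit more concrete, here is a minimal sketch of the same pattern in plain Ruby. This is not NewSpeak code or syntax, and all the names (Platform, ReadOnlyFile, make_library) are mine:

# A library is just a factory that receives its platform; it can only use
# whatever capabilities that platform exposes.
class ReadOnlyFile
  def initialize(path)
    @path = path
  end

  def read
    File.read(@path)
  end
  # no write method: this capability can only read
end

class Platform
  def initialize(file_class)
    @file_class = file_class
  end

  def file(path)
    @file_class.new(path)
  end
end

def make_library(platform)
  # the "library": it never reaches for global state, only for the platform
  lambda { |path| platform.file(path).read }
end

reader = make_library(Platform.new(ReadOnlyFile))
# reader.call("some_file.txt") can read files, but has no way to write anything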

In conclusion, I really like what I see in NewSpeak. I look forward to when it’s released (it will be open sourced under Apache 2.0). It’s got some fresh new ideas about how to approach language design.

Read more at Gilad’s blog: http://gbracha.blogspot.com/, or at http://newspeaklanguage.org/.



Language generation


This post is the first in a series of posts about PAIPr. Read here for more info about the concept.

Today I would like to start by taking a look at Chapter 2. You can find the code in lib/ch02 in the repository.

Chapter 2 introduces Common Lisp through the creation of a series of ways of doing generation of language sentences in English, based on simple grammars. It’s an interesting chapter to start with since the code is simple and makes it easy to compare the Ruby and Common Lisp versions.

The first piece to take a look at is the file common.rb, which contains two methods we’ll need later on:

require 'pp'

def one_of(set)
  [set.random_elt]
end

class Array
  def random_elt
    self[rand(self.length)]
  end
end

As you can see I’ve also required pp, to make it easier to print structures later on.

Both one_of and Array#random_elt are extremely simple methods, but it’s still nice to have the abstraction there. I’m retaining the naming from the book for these two methods.

The first real example defines a grammar by directly using methods. (From simple.rb):

require 'common'

def sentence; noun_phrase + verb_phrase; end
def noun_phrase; article + noun; end
def verb_phrase; verb + noun_phrase; end
def article; one_of %w(the a); end
def noun; one_of %w(man ball woman table); end
def verb; one_of %w(hit took saw liked); end

As you can see, all the methods just define their structure by combining the result of more basic methods. A noun phrase is an article, then a noun. An article is either ‘the’ or ‘a’, and a noun can be ‘man’, ‘ball’, ‘woman’ or ‘table’. If you run sentence a few times you will see that you sometimes get back quite sensible sentences, like ["a", "ball", "hit", "the", "table"]. But you will also get less interesting things, such as ["a", "ball", "hit", "a", "ball"]. At this stage the space for variation is quite limited, but you can still see a simplified structure of the English language in these methods.

To create an example that involves some more interesting structures, we can introduce adjectives and prepositions. Since these can be repeated zero or more times, we’ll use productions called PP* and Adj* (pp_star and adj_star in the code). This is from simple2.rb:

require 'simple'

def adj_star
  return [] if rand(2) == 0
  adj + adj_star
end

def pp_star
  return [] if rand(2) == 0
  pp + pp_star
end

def noun_phrase; article + adj_star + noun + pp_star; end
def pp; prep + noun_phrase; end
def adj; one_of %w(big little blue green adiabatic); end
def prep; one_of %w(to in by with on); end

Nothing really changes here, except that both of the optional productions randomly return an empty array 50% of the time; otherwise they call themselves recursively. The noun phrase production also changes a bit, and adj and prep give us the two new terminals needed. If you try this one, you might get some more interesting results, such as: ["a", "table", "took", "a", "big", "adiabatic", "man"]. It’s still nonsensical of course. And it seems that this approach with randomness generates quite large output in some cases. To make it really nice there should probably be a diminishing bias in the adjectives and prepositions based on the length of the already generated string.
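
One possible way to get such a diminishing bias – this is my own sketch, not code from the book or the repository – is to thread a depth argument through the recursive productions and make the stop probability grow with it:

# The deeper the recursion, the more likely we are to stop adding adjectives.
def adj_star(depth = 0)
  return [] if rand(depth + 2) > 0   # stop with probability (depth+1)/(depth+2)
  adj + adj_star(depth + 1)
end

pp_star could be biased in the same way.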

Another problem with this approach is that it’s kinda unwieldy. Using methods for the grammar is probably not the right choice long term. More specifically, we are tied to this implementation by having the grammar be represented as methods.

A viable alternative is to represent everything as a grammar definition – using a rule based solution. The first part of rule_based.rb looks like this:

require 'common'

# A grammar for a trivial subset of English
$simple_grammar = {
  :sentence => [[:noun_phrase, :verb_phrase]],
  :noun_phrase => [[:Article, :Noun]],
  :verb_phrase => [[:Verb, :noun_phrase]],
  :Article => %w(the a),
  :Noun => %w(man ball woman table),
  :Verb => %w(hit took saw liked)}

# The grammar used by generate. Initially this is $simple_grammar, but
# we can switch to other grammars
$grammar = $simple_grammar

Note that I’m using double arrays for the productions that aren’t terminal. There is a reason for this that will become more pronounced in the later grammars based on this one. But right now it’s easy to see that a production is either a list of words, or a list of lists of productions. Production names beginning with a capital letter are terminals – this is a convention in most grammars. I didn’t use capital letters for the terminals when using methods, because Ruby methods named like that cause additional trouble when calling them.

Now that we have the actual grammar we also need a helper method. PAIP defines rule-lhs, rule-rhs and rewrites, but the only one we actually need here is rewrites. (From rule_based.rb):

def rewrites(category)
  $grammar[category]
end

And actually, we could do away with it too, but it reads better than an index access.

The final thing we need is the method that actually creates a sentence from the grammar. It looks like this:

def generate(phrase)
  case phrase
  when Array
    phrase.inject([]) { |sum, elt|  sum + generate(elt) }
  when Symbol
    generate(rewrites(phrase).random_elt)
  else
    [phrase]
  end
end

If what we’re asked to generate is an array, we generate everything inside that array and combine the results. If it’s a symbol we know it’s a production, so we get all the possible rewrites and take a random element from them. Currently every non-terminal production has only one rewrite, so for those the random_elt isn’t strictly necessary – but as you’ll see later it’s quite nice. And finally, if phrase is not an Array or Symbol, we just return the phrase as the generated element.

I especially like the use of inject as a more general version of (mappend #'generate phrase). Of course, for readability it would have been possible to implement mappend too:

def mappend(sym, list)
  list.inject([]) do |sum, elt|
    sum + self.send(sym, elt)
  end
end

But I chose to use inject directly instead, since it’s more idiomatic. Note that this version of mappend doesn’t work exactly the same as Common Lisp’s mappend, since it takes a method name rather than a lambda.
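
If you wanted something closer to the Lisp version, a variant that takes a block instead of a method name would do it. This is just a sketch of my own, not something used in the rest of the code:

# mappend that yields each element to a block and concatenates the results
def mappend(list)
  list.inject([]) { |sum, elt| sum + yield(elt) }
end

# used like: mappend(phrase) { |elt| generate(elt) }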

Getting back to the generate method: if you were to run generate(:sentence), you would get the same kind of output as with the method based version – with the difference that changing the rules is much simpler now.
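
For example (the exact output will of course vary, since the choices are random):

generate(:sentence)
# => ["the", "woman", "liked", "a", "table"]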

So for example, you can use this code from bigger_grammar.rb, which creates a larger grammar definition and then sets the default grammar to use it:

require 'rule_based'

$bigger_grammar = {
  :sentence => [[:noun_phrase, :verb_phrase]],
  :noun_phrase => [[:Article, :'Adj*', :Noun, :'PP*'], [:Name],
                   [:Pronoun]],
  :verb_phrase => [[:Verb, :noun_phrase, :'PP*']],
  :'PP*' => [[], [:PP, :'PP*']],
  :'Adj*' => [[], [:Adj, :'Adj*']],
  :PP => [[:Prep, :noun_phrase]],
  :Prep => %w(to in by with on),
  :Adj => %w(big little blue green adiabatic),
  :Article => %w(the a),
  :Name => %w(Pat Kim Lee Terry Robin),
  :Noun => %w(man ball woman table),
  :Verb => %w(hit took saw liked),
  :Pronoun => %w(he she it these those that)}

$grammar = $bigger_grammar

This grammar includes some more elements that make the output a bit better. For example, we have names here, and also pronouns. One of the reasons this grammar is easier to use is that we can define alternative versions of the productions. So, for example, a noun phrase can be the same as we defined earlier, but it can also be a single name, or a single pronoun. We use the same mechanism to handle the recursive PP* and Adj* productions. You can also see why the productions are defined with an array inside an array: it is what allows choices in this grammar.

A typical sentence from this grammar (calling generate(:sentence)) gives ["Terry", "saw", "that"], or ["Lee", "took", "the", "blue", "big", "woman"].

So it’s easier to change these rules. I also believe the rules are easier to read and understand here. But one of the more important changes with the data driven approach is that you can use the same rules for different purposes. Say that we want to generate a sentence tree, which includes the name of the production used for each part of the tree. That’s as simple as defining a new generate method (in generate_tree.rb):

require 'bigger_grammar'

def generate_tree(phrase)
  case phrase
  when Array
    phrase.map { |elt| generate_tree(elt) }
  when Symbol
    [phrase] + generate_tree(rewrites(phrase).random_elt)
  else
    [phrase]
  end
end

This code follows the same pattern as generate, with a few small changes. You can see that instead of appending the results from the Array together, we instead just map every element. This is because we need more sub arrays to create a tree. In the same manner, when we get a symbol we prepend it to the generated array. And actually, at this point it’s kinda interesting to take a look at the Lisp version of this code:

(defun generate-tree (phrase)
  (cond ((listp phrase)
         (mapcar #'generate-tree phrase))
        ((rewrites phrase)
         (cons phrase
               (generate-tree (random-elt (rewrites phrase)))))
        (t (list phrase))))

As you can see, the structure is mostly the same. I made a few different choices in representation, which means I’m checking if the phrase is a symbol instead of seeing if the rewrites for a symbol are non-nil. The call to mapcar is equivalent to the Ruby map call.

What does it generate then? Calling it with "pp generate_tree(:sentence)" I get something like this:

[:sentence,
 [:noun_phrase, [:Name, "Lee"]],
 [:verb_phrase,
  [:Verb, "saw"],
  [:noun_phrase,
   [:Article, "the"],
   [:"Adj*",
    [:Adj, "green"],
    [:"Adj*"]],
   [:Noun, "table"],
   [:"PP*"]],
  [:"PP*"]]]

which maps neatly back to our grammar. For a grammar without recursion, we can also generate all possible sentences, using the same data driven approach.

The code for that can be found in generate_all.rb:

require 'rule_based'

def generate_all(phrase)
  case phrase
  when []
    [[]]
  when Array
    combine_all(generate_all(phrase[0]),
                generate_all(phrase[1..-1]))
  when Symbol
    rewrites(phrase).inject([]) { |sum, elt|  sum + generate_all(elt) }
  else
    [[phrase]]
  end
end

def combine_all(xlist, ylist)
  ylist.inject([]) do |sum, y|
    sum + xlist.map { |x| x+y }
  end
end

If you run generate_all(:sentence) you will get back a list of all 256 possible sentences from this simple grammar. In this case the algorithm is a bit more complicated. It’s also using the common Lisp idiom of working on the first element of a list and then recursing on the rest of it. This makes it possible to combine everything together. I assume that it should be possible to devise something suitably clever based on the new Array#permutation, or possibly Enumerable#group_by or zip.
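
To see what combine_all does, here is a small worked example of my own, just to illustrate the behaviour of the method above:

combine_all([["the"], ["a"]], [["man"], ["ball"]])
# => [["the", "man"], ["a", "man"], ["the", "ball"], ["a", "ball"]]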

It’s interesting how well the usage of mappend and mapcar maps to uses of inject and map in this code.

Note that I’ve been using globals for the grammars in this implementation. An alternative that is probably better is to pass along an optional parameter to the methods. If no grammar is supplied, just use the default constant instead.
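
A minimal sketch of that variation (my own, assuming we keep $simple_grammar around as the default – it could just as well be a proper constant):

# Thread the grammar through as an optional parameter instead of using $grammar.
def rewrites(category, grammar = $simple_grammar)
  grammar[category]
end

def generate(phrase, grammar = $simple_grammar)
  case phrase
  when Array
    phrase.inject([]) { |sum, elt| sum + generate(elt, grammar) }
  when Symbol
    generate(rewrites(phrase, grammar).random_elt, grammar)
  else
    [phrase]
  end
end

# generate(:sentence, $bigger_grammar)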

Anyway, the code for this chapter is in the repository. Play around with it and see if you can find anything interesting. This code is definitely more of an introduction to Lisp than a serious AI program – although it does show the kind of approaches that have been used for primitive code generation.

The next chapter will talk about the General Problem Solver. Until then.