An Ioke spelling corrector


A while back Peter Norvig (of AI, Lisp and Google fame) published a small entry on how spelling correctors work. He included some code in Python to illustrate the concept, and this code has ended up being a very popular example to reimplement when showing off a programming language.

It would be ridiculous to suggest that I generally like to follow tradition, but in this case I think I will. This post will take a look at the version implemented in Ioke, and use that to highlight some of the interesting aspects of Ioke. The code itself is quite simple, and doesn’t use any of the object-oriented features of Ioke. Neither does it use macros directly, although some of the features used are based on macros, so I will get to explain a bit of that.

For those who haven’t seen my Ioke posts before, you can find out more at http://ioke.org. The features used in this example are not yet released, so to follow along you’ll have to download and build Ioke yourself. Ioke S should be out in about one to two weeks though, and at that point this blog post will describe released software.

First, for reference, Norvig’s original corrector (and his corpora) can be found at http://norvig.com/spell-correct.html. Go read it! It’s a very good article.

This code is available in the Ioke repository in examples/spelling/spelling.ik. I’m adding a few line breaks here to make it fit the blog width – except for that, everything is the same.

Let’s begin with the method “words”:

words = method(text,
  #/[a-z]+/ allMatches(text lower))

This method takes a text argument and calls the method “lower” on it, which returns a new text that is the original converted to lower case. A regular expression that matches one or more alphabetical characters is used, and the method allMatches is called on it. This method returns a list of the texts for all the places the regular expression matches in the input.
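
To make that concrete, a call could look something like this:

words("The Quick brown Fox") ; => ["the", "quick", "brown", "fox"]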

The next method is called “train”:

train = method(features,
  features fold({} withDefault(1), model, f,
    model[f] ++
    model))

The argument “features” should be a list of texts. The method calls fold on this list (you might know fold as reduce or inject; those names would have been fine too). The first argument to fold is the start value – here a dict with a default value of 1. The second argument is the name that will be used to refer to the accumulated value, and the third argument is the name to use for the current feature. The last argument is the actual code to execute. This code takes the feature (which is a text), indexes into the dict and increments the number there. It then returns the dict, since that will be the model argument on the next iteration.
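
A contrived example (not part of the actual corrector) shows the effect. Since counts start from the default of 1, a word seen twice ends up at 3, and a word never seen still reads as 1 – the same smoothing behavior Norvig’s Python version gets from its defaultdict:

model = train(["apple", "apple", "pie"])
model["apple"] ; => 3
model["pie"]   ; => 2
model["cake"]  ; => 1 (never trained on, so the default applies)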

The next piece of code uses the former methods:

NWORDS = train(words(FileSystem readFully("small.txt")))

alphabet = "abcdefghijklmnopqrstuvwxyz" chars

As you can see, we define a variable called NWORDS that contains the result of first reading a text file, then extracting all the words from that text, and finally using those words to train on. The next assignment gets a list of all the characters (each as a one-character text) by calling “chars” on a text. I could have just written [“a”, “b”, “c”, …] instead, but I’m a bit lazy.
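
In other words:

"abc" chars ; => ["a", "b", "c"]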

OK, now we come to the meat of the code. For an explanation of why this code does what it does, refer to Norvig’s article:

edits1 = method(word,
  s = for(i <- 0..(word length + 1),
    [word[0...i], word[i..-1]])

  set(
    *for(ab <- s,
      ab[0] + ab[1][1..-1]), ;deletes
    *for(ab <- s[0..-2],
      ab[0] + ab[1][1..1] + ab[1][0..0] + ab[1][2..-1]), ;transposes
    *for(ab <- s, c <- alphabet,
      ab[0] + c + ab[1][1..-1]), ;replaces
    *for(ab <- s, c <- alphabet,
      ab[0] + c + ab[1]))) ;inserts

The mechanics of it are as follows. We create a method assigned to the name edits1. This method takes one argument called “word”. We then create a local variable called “s”, which contains the result of executing a for comprehension. There are several things going on here. The first part of the comprehension is a generator (that’s the part with the <-). The thing on the right is what to iterate over, and the thing on the left is the name it is given on each iteration. Basically, this comprehension goes from 0 to the length of the word plus 1. (Two dots denote an inclusive Range.) The second argument to “for” is what to actually return. In this case we create a new list with two elements – the word split into a head and a tail at position i. Three dots create an exclusive Range, and ending a Range in -1 means that it will extract the text all the way to the end. So “s” ends up being a list of all the ways of splitting the word in two.
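
If the range slicing is new to you, here are the two variants in isolation (a contrived example):

"hello"[0...2] ; => "he" (exclusive – indices 0 and 1)
"hello"[2..-1] ; => "llo" (inclusive, from index 2 to the end)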

The rest of the code in this method consists of four different comprehensions. The results of these comprehensions are splatted, or spread out, as arguments to the “set” method. The * is the symbol used to splat things. Basically, it means that instead of four lists, set will get all the elements of all the lists as separate arguments. Finally, set will create a set from these arguments and return it.
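
To see the splat in isolation (again a contrived example, not from the corrector):

set(*[1, 2], *[2, 3]) ; the same as set(1, 2, 2, 3) – a set of 1, 2 and 3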

Whew. That was a mouthful. The next method is easy in comparison. More of the same, really:

knownEdits2 = method(word,
  for:set(e1 <- edits1(word),
    e2 <- edits1(e1),
    NWORDS key?(e2),
    e2))

Here we use another variation of a comprehension, namely a set comprehension. A regular comprehension returns a list; a set comprehension returns a set instead. Note also the third argument: a bare condition such as NWORDS key?(e2) acts as a filter, so this comprehension will only return words that are available as keys in NWORDS.
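
A small, contrived example of a set comprehension with such a filtering condition:

for:set(x <- [1, 2, 2, 3], x > 1, x) ; => a set of 2 and 3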

known = method(words,
  for:set(w <- words,
    NWORDS key?(w), w))

This method uses a set comprehension to find all words in “words” that are keys in NWORDS. At this point you might wonder what a comprehension actually is. It’s quite simple: basically, a comprehension is nice syntax around a combination of calls to “filter”, “map” and “flatMap”. In the case of a set comprehension, the calls go to “filter”, “map:set” and “flatMap:set” instead. The whole implementation of comprehensions is available in Ioke, in the file called src/builtin/F10_comprehensions.ik. Beware though, it uses some fairly advanced macro magic.
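
Roughly speaking – glossing over what the macro expansion actually looks like – the comprehension in “known” corresponds to something like:

words filter(w, NWORDS key?(w)) map:set(w, w)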

OK, back to spelling. Let’s look at the last method:

correct = method(word,
  candidates = known([word]) ifEmpty(
    known(edits1(word)) ifEmpty(
      knownEdits2(word) ifEmpty(
        [word])))
  candidates max(x, NWORDS[x]))

The correct method takes a word to correct and returns the best possible match. It first tries to see if the word is already known. If the result of that is empty, it tries to see if any edits of the word are known, and if that doesn’t work, if any edits of edits are known. Finally, it just falls back to the original word. If more than one candidate spelling is known, the max method is used to determine which one was featured the most in the corpus.
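
So, assuming the training corpus contains the correctly spelled word, usage looks like this:

correct("speling") ; => "spelling", provided "spelling" occurs in small.txt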

The ifEmpty macro is a bit interesting. What it does is pretty simple, and you could have written it yourself – it just happens to be part of the core library. The implementation looks like this:

List ifEmpty = dmacro(
  "if this list is empty, returns the result of evaluating the argument, otherwise returns the list",

  [then]
  if(empty?,
    call argAt(0),
    self))

This is a dmacro, which basically just means that the handling of arguments is taken care of for you. The argument names of a destructuring macro can be seen in the square brackets. Using a dmacro instead of a raw macro means that we will get a good error message if the wrong number of arguments is provided. The implementation checks whether its list is empty. If it is, it returns the result of evaluating the first argument; otherwise it doesn’t evaluate anything and just returns itself.
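
Going by the documentation text above, using it on its own would look like this:

[] ifEmpty(42)     ; => 42
[1, 2] ifEmpty(42) ; => [1, 2]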

So, you have now seen a 16-line spelling corrector in Ioke (it’s 16 lines without the extra line breaks I added for the blog).



Paradigms of Artificial Intelligence Programming (in Ruby)


Since I can never get enough projects, I’ve decided to start on something I’ve thought about doing for a long time. There are several reasons for doing it, but foremost among them is that I want to get back to playing with AI, and I want a project with many small pieces that I can work on whenever I have some spare time. If it ends up being educational or useful for others at the same time, well, I’m not complaining!

So what is it?

Well, first I’d like to introduce a book. It’s called Paradigms of Artificial Intelligence Programming, or PAIP for short. It’s written by Peter Norvig, who has also written several other books about AI; currently he’s the Director of Research at Google. PAIP is probably both my favorite book about AI and my favorite book about Common Lisp. It’s a really great book. Really. If you have any interest in either of those two subjects you should get hold of it. PAIP doesn’t cover cutting-edge AI, but rather takes the historic view and looks at several examples from different eras, going from the first programs to some later, quite advanced things.

I’ve read it numerous times, gone through the code, tweaked it and so on. It’s lots of fun. But that was a few years ago. So basically, what I want to do is go through the book again. But this time I’m going to write all the programs in Ruby – converting them from Common Lisp and then maybe tweaking them a bit to make them more idiomatic. And I’m planning to post it here. Or rather, I’m going to post the actual source code to http://github.com/olabini/paipr and blog about all the code I write. You don’t necessarily need to have the book, since I’m going to surround the code with descriptions and explanations.

Once again then, why should anyone care? Well, I don’t know. Maybe no one will. But for me personally it’s going to be an interesting experience converting idiomatic Common Lisp into idiomatic Ruby. It’s going to be fun to revisit the old approaches to AI. And it might serve as a good, code-heavy introduction to the subject for anyone interested in it.

I do have the permission from Peter Norvig to do this. The Ruby code I write is covered by the MIT license, while any Lisp code posted as part of this exercise is covered by the license here: http://norvig.com/license.html.

Also note that I sometimes won’t write the most obvious Ruby, so it will be good to have a few links to the original Lisp code.