A Day In The Lyf

…the lyf so short, the craft so longe to lerne

Posts Tagged ‘Lisp’

Orthogonality

Orthogonality means that features can be used in any combination, that the combinations all make sense, and that the meaning of a given feature is consistent, regardless of the other features with which it is combined. The name is meant to draw an explicit analogy to orthogonal vectors in linear algebra: none of the vectors in an orthogonal set depends on (or can be expressed in terms of) the others, and all are needed in order to describe the vector space as a whole.

– Michael Scott (Programming Language Pragmatics)

I’ve used Delphi and Visual Basic at previous jobs. I disliked both of them. VB has this annoying distinction between objects and primitives, so I’d consistently type

Dim server
server = "localhost"

Dim catalog
catalog = CreateObject("MyCompany.MyObject")

…only to be greeted by an ugly Windows popup box telling me Object doesn't support this property or method. The problem, as all you grizzled VB programmers no doubt spotted immediately, is that the last line should start with the keyword Set. VB requires you to prefix assignments to “objects” with Set. But if you try to put a Set in front of an assignment to what VB considers a primitive (like the first assignment above), you get an error that reads Object required: '[string: "localhost"]'.

Delphi likewise frustrated me with mundane annoyances:

DoOneThing();

if value = goal then
    DoSomething();
else
    DoSomethingElse();

The code above doesn’t compile, and it won’t compile until you remove the semicolon from the call to DoSomething(). Semicolons complete a statement, and the statement is the entire if-else clause.

These problems in VB and Delphi are related to the concept of orthogonality mentioned in the opening quote. VB doesn’t let you compose assignment to objects the same way it lets you compose assignment to primitives. Delphi doesn’t let you end an if clause the same way it lets you end an else clause. These inconsistencies encourage even experienced programmers to make silly syntax mistakes and make the language harder to use.

What is orthogonality?

The key principles I extracted from Michael Scott’s quote listed above are consistency and composability. Composability means that features can be combined, and consistency stands in for the Principle of Least Surprise—features act how you expect they would, regardless of how they’re being combined. VB’s assignments lack consistency. Delphi’s semicolon parsing doesn’t act consistently when composed within a surrounding statement.

Scott claims that a highly orthogonal language is easier to understand and easier to use. Nowhere does he mention that it’s easier to implement. I’m sure the Delphi grammar was simplified considerably by refusing to allow the DoSomething statement to end in a semicolon when contained within an outer if-else clause. It’s also likely that the implementation of Visual Basic was simplified by having the programmer tell the compiler whether an assignment referred to an object or a primitive.

I suspect many non-orthogonal aspects of languages are there to make them easier to implement. However, some languages that are trivially easy to implement can also be amazingly orthogonal. Lisp is a prime example; it is a highly orthogonal language, and yet an entire Lisp interpreter written in Lisp fits on just one page of the Lisp 1.5 Programmer’s Manual.

I found it instructive to list out syntactic constructs that make languages less orthogonal. It’s amazing how mainstream most of them are:

Statements

Statements aren’t necessary, and a number of languages avoid them altogether. Having only expressions makes the language easier to work in. Compare the inconsistency of Delphi’s statement parsing to the composability of Ruby’s expressions:

puts(if x == 0
  "Zero"
else
  "Not Zero"
end)

The if clause is an expression; it returns the last value evaluated (either “Zero” or “Not Zero”, depending on x). And that value can be composed within another expression: here, it becomes the argument to puts.

So what happens when you don’t have a reasonable return value? Smalltalk always returns self, which is convenient because it allows method chaining. In the Ruby example above, the entire expression returns the result of the call to puts, which happens to be nil.
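
Ruby doesn't answer self by default the way Smalltalk does, but the same chaining style falls out of any method that explicitly returns self. A minimal sketch (the Mailer class and its methods are invented for illustration):

# A hypothetical builder whose methods return self so calls can be chained,
# mimicking Smalltalk's default behavior of answering the receiver.
class Mailer
  def initialize
    @headers = {}
  end

  def to(address)
    @headers[:to] = address
    self    # returning self is what makes the chaining below possible
  end

  def subject(text)
    @headers[:subject] = text
    self
  end

  def deliver
    puts "delivering #{@headers.inspect}"
    self
  end
end

Mailer.new.to("lyf@example.com").subject("Orthogonality").deliver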

The beauty of expressions is that the composability doesn’t just go one level deep:

x = if a == 1; 1; else; 2 end

y = x = if a == 1; 1; else; 2; end

puts y = x = if a == 1; 1; else; 2; end

puts "returns nil" if (puts y = x = if a == 1; 1; else; 2; end).nil?

As the example above shows, orthogonality doesn’t guarantee good code. However, it allows the language to be used in unanticipated ways, which is A Good Thing. Moreover, since everything is an expression, you can put expressions where you wouldn’t normally expect them. For example, in Ruby the < in a class definition names the superclass, and that superclass can be any expression:

class Test < (if x == 1; OneClass; else; TwoClass; end)
end

Pushing a language like this, only to find that it’s turtles all the way down, is a signpost for orthogonality.

Primitives

We already saw the clumsiness of VB primitives, but most mainstream languages share a similar problem. Java, for example, has a confusing dichotomy of longs and Longs, the first a primitive and the second a full-fledged object. C has stack-allocated primitives, which are freed automatically when they fall out of scope, and heap-allocated variables, which you have to free yourself. C# has value types, which force another abstraction – boxing – into the programmer’s lap.

private static object DBNullIf(int value, int nullValue)
{
    return (value != nullValue) ? (object)value : DBNull.Value;
}

The code above should be a head-scratcher. Isn’t object the superclass of all other types? And if so, why do we have to explicitly cast our int variable to an object to make this code compile? Why don’t we have to do the same for the DBNull? The answer is that int is a value type: the conditional operator needs its two branches to share a type, and int and DBNull don’t convert to each other, so the int has to be explicitly boxed to object; DBNull.Value is already a reference type and converts to object implicitly.

I mentioned above that ease of implementation is a common source of non-orthogonality. With primitives, we can see another, more legitimate reason: performance. There is a cost to keeping primitives out of the language. Below, we’ll see several more non-orthogonal language features that make performance easier to optimize.

Nulls

Nulls have been problematic enough that an entire design pattern has been created to avoid them. Access violations and object reference exceptions are some of the most common programmer errors. Most languages divide object references into two types: those that point to an object, and those that point to nothing. There’s nothing intrinsically non-orthogonal about that division, except that in most languages the references that point to nothing work differently. Instead of returning a value, they throw an exception when dereferenced.
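
For reference, the design pattern in question is the Null Object pattern: instead of handing out a reference to nothing, you hand out a real object whose methods deliberately do nothing. A small Ruby sketch (the logger classes are made up for the example):

# A hypothetical logger and its do-nothing stand-in. Callers can invoke
# log without first checking whether they were given a real logger.
class ConsoleLogger
  def log(message)
    puts message
  end
end

class NullLogger
  def log(message)
    # deliberately does nothing
  end
end

def process(order, logger = NullLogger.new)
  logger.log("processing #{order}")
end

process("order-42")                        # silent
process("order-42", ConsoleLogger.new)     # prints the message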

In Ruby, the division is still more or less there — some references point to objects, and some point to nothing — but both types of references still return a value. In effect, Ruby has built the Null Object pattern into the language, as the references that point to nothing return a nil value. But, like everything else in Ruby, nil is an object (of type NilClass), and can be used in expressions:

1.nil?      # returns false
nil.nil?    # returns true

You never get a NullReferenceException in Ruby. Instead, you get a NoMethodError when you call a method that nil doesn’t define, which is exactly the same error you’d get when calling a missing method on any other object.

Magic Functions

Most object-oriented languages have certain functions that aren’t really methods. Instead, they’re special extensions to the language that make the class-based approach work.

public class Magic
{
    public static void ClassMethod()
    {
    }

    public Magic()
    {
    }

    public void NormalMethod()
    {
    }
}

Magic.ClassMethod();
Magic magic = new Magic();
magic.NormalMethod();

Notice the context shift we make when constructing new objects. Instead of making method calls via the normal syntax (receiver.method()), we use the keyword new and give the class name. But what is the class but a factory for instances, and what is the constructor but a creation method? In Ruby, the constructor is just a normal class-level method:

regex = Regexp.new(phone_pattern)

Typically, the new class-level method allocates an instance and delegates to the newly created instance’s initialize method (which is what most Ruby programmers call the “constructor”). But, since new is just a normal method, if you really wanted to, you could override it and do something different:

class Test
  def self.new
    2 + 2
  end
end

Test.new    # returns 4!!

Operators also tend to be magic methods in many languages. Ruby more or less treats them just like other methods, except the interpreter steps in to break the elegant consistency of everything-as-a-method to provide operator precedence. Smalltalk and Lisp, two other highly orthogonal languages, do not provide operator precedence. Operators are just normal methods (functions in Lisp), and work with the same precedence rules as any other method.
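
In Ruby, an operator really is just a method with an unusual name; the parser only steps in to decide precedence. A small sketch with a made-up Money class:

class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  # + is an ordinary instance method; only the call syntax is special
  def +(other)
    Money.new(cents + other.cents)
  end
end

total = Money.new(150) + Money.new(250)
puts total.cents                              # 400
puts(Money.new(150).+(Money.new(250)).cents)  # same call, explicit method syntax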

So here we have yet another reason, on top of ease of language implementation and performance, to add non-orthogonality to a language. Ruby adds operator precedence, even though it adds an element of inconsistency, presumably because it makes the language more intuitive. Since intuitiveness is one of the purported benefits of orthogonality, there is a legitimate conflict of interest here. I think I would prefer not having operator precedence, and leaving the language consistent, but it seems more of a question of style than anything else.

Static and sealed Methods

Class-based object-oriented languages impose a dichotomy between classes and instances of those classes. Most of them still allow behavior to exist on classes, but that behavior is treated differently than instance behavior. By declaring class methods as static, you’re telling the compiler that it’s free to compute the address of this function at compile-time, instead of allowing the dynamic binding that gives you polymorphism.

Ruby gives you some degree of polymorphism at the class level:

class Animal
  def self.description
    "kingdom Animalia"
  end
end

class Person < Animal
  def self.description
    "semi-evolved simians"
  end
end

In most mainstream languages, not even all instance methods are polymorphic. They are by default in Java, although you can make them statically-bound by declaring them final. C# and C++ take a more extreme approach, forcing you to declare them virtual if you want to use them polymorphically.

Behavior cannot be combined consistently between virtual and sealed (or final) methods. It’s a common complaint when developers try to extend a framework only to find out that the relevant classes are sealed.

Instance Variables

Only the pure functionalists can get by without state; the rest of us need to remember things. But there’s no reason why the mechanism of retrieving stored values has to be treated differently from the mechanism of calling a parameter-less function that computes a value, nor is there a reason that the mechanism for storing a value has to be different from the mechanism of calling a setter function to store the value for you.

It is common OO dogma that state should be private, and if you need to expose it, you should do so through getters and setters. The evolution of the popular Rails framework recently reaffirmed this dogma. In prior versions, sessions were exposed to the controllers via the @session instance variable. When they needed to add some logic to storing and retrieving, they could no longer expose the simple variable, and refactored to a getter/setter attribute access. They were able to do so in a backward-compatible way, by making the @session variable a proxy to an object that managed the logic, but it was still a process that a more orthogonal language wouldn’t have required. The language should not force you to distinguish between field access and method access.

Both Lisp and Eiffel treat instance variables equivalently to function calls (at least when it comes to rvalues). Lisp simply looks up atoms in the environment, and if that atom is a function (lambda), then it can be called to retrieve the value no differently than if the atom is a variable containing the value. Eiffel, an object-oriented language, declares variables and methods using the same keyword (feature), and exposes them – both to the outside world and to the class itself – the same way (Bertrand Meyer called this the Uniform Access principle):

class POINT
feature
    x, y: REAL
            -- Abscissa and ordinate

    rho: REAL is
            -- Distance to origin (0,0)
        do
            Result := sqrt(x^2 + y^2)
        end

Once you get past Eiffel’s tradition of YELLING TYPES at you and hiding the actual code, Uniform Access makes a lot of sense. Instance variables are just like features without bodies. C# has properties, which provide similar benefits, but force you to declare them explicitly. Instead of getRho and setRho methods, you can have a property that lets clients use the same syntax regardless of whether they’re using a property or a field. Because Ruby allows the = symbol as part of a method name, it allows a similar syntax.
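
Here is roughly what that looks like in Ruby. To a caller, reading or writing x is indistinguishable from reading the computed rho, even though one is backed by an instance variable and the other by arithmetic (the Point class below is a loose translation of the Eiffel example, not real library code):

class Point
  attr_accessor :x, :y    # generates x, x=, y, and y= as ordinary methods

  def initialize(x, y)
    @x = x
    @y = y
  end

  # computed, but read with exactly the same syntax as x and y
  def rho
    Math.sqrt(x ** 2 + y ** 2)
  end
end

point = Point.new(3.0, 4.0)
point.x = 6.0        # really the method call x=(6.0)
puts point.rho       # clients can't tell stored state from computation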

However, the separation between variables and properties is superfluous. For example, there’s no need for them to have separate access levels. If other classes need the state exposed, then declare it public. If the language doesn’t offer instance variables, then you’re simply exposing a property method. If you run into the same problem that Rails ran into, and suddenly need to add behavior around exposed state, refactoring should be easy. Just add a private property method that is now the state, and leave the public property method.

So, in my hypothetical language, we might have the following:

class Controller
  public feature session
end

And when you feel you need to add behavior around the exposed session dictionary, it should be easy:

class Controller
  public feature session
    # added behavior goes here
    private_session
  end

  private feature private_session
end

One thing that this hypothetical syntax doesn’t allow is separate access levels for getting and setting, but it shows the basic idea.

Inconsistent Access Levels

Since we’re on the subject of access levels, Ruby’s private access level is not very orthogonal at all. Unlike C++ derived languages, Ruby’s private is object-level, not class-level, which means that even other instances of the same class can’t directly access the private method. That’s a reasonable constraint.

However, instead of making object-level private access orthogonal, the implementors simply disallowed specifying a receiver when calling a private method. This undoubtedly made object-level private access much easier to implement. Unfortunately, it means that you can’t even use self as the receiver within the object itself, which makes moving a method from public or protected to private non-transparent, even when all references to the method are within the class itself:

class TestAccessLevels
  def say_my_name
    puts self.name
  end

  private
  def name
    "Snoopy"
  end
end

# The following line throws an exception
TestAccessLevels.new.say_my_name

Second class types

Having special types, like null, is a special case of having primitives, and it’s a common performance optimization. Being a first-class type has a well-defined meaning. Specifically, its instances can:

  • be passed to functions as parameters
  • be returned from functions
  • be created and stored in variables
  • be created anonymously
  • be created dynamically

It’s becoming increasingly common for modern languages to move towards first-class functions. In Execution in the Kingdom of Nouns, Steve Yegge parodied the typical transmogrifications Java programmers had accustomed themselves to in order to sidestep the language’s lack of first-class functions. Java’s cousin (descendant?), C#, has more or less had them since .NET 2.0 in the form of anonymous delegates.

What neither Java nor C# has is first-class classes. Both have reflection, and even allow you to create new types at runtime (painfully…). But, because both languages are statically typed, you can’t access these runtime-created types the same way you can access normal types. Assuming Foo is a runtime-created type, the following code won’t compile:

Foo foo = new Foo();

The only way to make it work is by heavy use of reflection. Ruby, on the other hand, makes it trivially easy to add types at runtime. In fact, all types are added at runtime, since it’s an interpreted language.
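
A quick sketch of what that means in practice: the class below is created at runtime, given a name by assigning it to a constant, passed around like any other value, and instantiated with the ordinary syntax (all of the names are invented):

# Class.new returns a brand new class object; assigning it to a constant
# gives it a name, exactly as if it had been declared with the class keyword.
Foo = Class.new do
  def greet
    "hello from a runtime-created class"
  end
end

puts Foo.new.greet     # instantiated like any statically declared class

# Classes are ordinary values: pass one into a method and instantiate it there.
def build(klass)
  klass.new
end

puts build(Foo).greet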

Single Return Values

Since you can pass more than one argument to functions, why should you only be allowed to return one value from functions? Languages like ML add orthogonality by returning tuples. Ruby works similarly, unpacking arrays automatically for you if you use multiple lvalues in one expression.

def duplicate(value)
  [value, value]
end

first, second = duplicate("hi there")

This feature allows you to combine lvalues and rvalues more consistently.
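
A couple of other compositions the same mechanism gives you for free, in plain Ruby:

a, b = 1, 2
a, b = b, a                    # parallel assignment: swap without a temporary
first, *rest = [1, 2, 3, 4]    # a splat gathers whatever is left over

puts [a, b].inspect            # [2, 1]
puts first                     # 1
puts rest.inspect              # [2, 3, 4]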

Inconsistent Nesting

Nesting is the standard programming trick for limiting scope, and most languages provide blocks that you can nest as deeply as you like. For example, in C#:

public void TestNest()
{
    string outer = "outer";
    {
        string middle = "middle";
        {
            string inner = "inner";
        }
    }
}

In addition to blocks, Ruby allows you to nest functions, but the nesting doesn’t work in an orthogonal way:

def outer
  inner_var = "inner variable"

  def inner
    "inner method"
  end
end

outer
puts inner
puts inner_var

In this example, the last line will throw a NameError, complaining that inner_var is undefined. This is as we should expect – since it’s defined inside an inner scope from where we’re calling it, we should not be able to access it. However, the same is not true for the inner method defined in the call to outer. Despite the fact that it’s defined within a nested scope, it actually has the same scoping as outer.

Ruby’s scoping gets even weirder:

def outer
  begin
    inner_var = "inner"
  end

  inner_var
end

puts outer

This code works, printing “inner” to the console, because begin…end blocks don’t introduce a new variable scope in Ruby. It shouldn’t.

JavaScript similarly suffers from strange scoping rules. Because all variables have function scope, and not block scope, a variable is available everywhere within a function regardless of where it is defined:

function testScoping(predicate) {
  if (predicate) {
    var test = "in if block";    
  }
  alert(test);    // works, even though it's in an outer block
}

Fixing your mistakes

Speaking of JavaScript, few things bug me more about programming languages than those that try to fix your mistakes for you. JavaScript automatically inserts semicolons at the end of a line if you leave them out. Most of the time that works fine, but every now and then it creates a bug that’s ridiculously hard to diagnose:

return
{
  value: 0
}

The return statement above looks like it returns an object with a single property. What it really does, though, is simply return. The semicolon was inserted on your behalf right after the return keyword, turning the following three lines into dead code. It is for this reason that, in JavaScript: The Good Parts, Douglas Crockford recommends always putting the opening brace on the same line (K&R style).

Summary

Orthogonality makes the language easier to extend in ways that the language implementors didn’t anticipate. For example, Ruby’s first-class types, combined with its duck typing, allow the popular mocking framework Mocha to have a very nice syntax:

logger = stub
logger.expects(:error).with("Execution error")

do_something(:blow_up, logger)

The fact that classes in Ruby are also objects means that the same syntax works for classes:

File.stubs(:exists?).returns(true)

I picked on JavaScript a couple of times, but it does a better job with literals than any other language I know of. JavaScript has literal numbers and strings, like most languages. It has literal regular expressions and lists, like Perl and Ruby. But where it really shines is its literal object syntax. Objects are basically dictionaries, and Ruby and Perl have hash literals too, but JavaScript’s object literals can include function literals:

function ajaxSort(columnName, sortDirection) {
    setNotice('Sorting...');
    new Ajax.Request("/reports/ajaxSort", {
            method: "get",
            parameters: { sort: columnName, sortDir: sortDirection },
            onSuccess: function (transport) { 
                $("dataTable").innerHTML = transport.responseText;
            },
            onComplete: function () { clearNotice(); }
        }
    );
}

In time, that syntax was leveraged to form the JSON format.

In a more dramatic example, when object-orientation became all the rage, Lisp was able to add object-orientation without changing the language. The Common Lisp Object System (CLOS) is written entirely in Lisp. The reason Lisp was able to absorb an entirely new paradigm is largely due to the language’s orthogonality. Not all function-like calls work the same way; some are called “special forms” because they act differently (for example, by providing lazy evaluation of their arguments). However, Lisp allows the programmer to create their own special forms by writing macros, which are themselves written in Lisp.

It helps to have turtles all the way down.

Written by Brandon Byars

July 21, 2008 at 10:36 pm

Posted in Languages


Code Generation and Metaprogramming

I wanted to expand upon an idea that I first talked about in my previous post on Common Lisp. There is a common pattern between syntactic macros, runtime metaprogramming, and static code generation.

Runtime metaprogramming is code-generation. Just like C macros. Just like CL macros.

Ok, that’s a bit of an overstatement. Those three things aren’t really just like each other. But they are definitely related—they all write code that you’d rather not write yourself. Because it’s boring. And repetitious. And ugly.

In general, there are three points at which you can generate code in the development process, although the terminology leaves something to be desired: before compilation, during compilation (or interpretation), and during runtime. In the software development vernacular, only the first option is typically called code-generation (I’ll call it static code generation to avoid confusion). Code generation during compilation goes under the moniker of a ‘syntactic macro,’ and I’m calling runtime code generation ‘runtime metaprogramming.’

Since the “meta” in metaprogramming implies writing code that writes code, all three forms of code generation can be considered metaprogramming, which is why I snuck the “runtime” prefix into the third option above. Just in case you were wondering…

Static Code Generation

Static code generation is the easiest to understand and the weakest of the three options, but it’s often your only option due to language limitations. C macros are an example of static code generation, and they are the only metaprogramming option C offers out of the box.

To take an example, on a previous project I generated code for lazy loading proxies in C#. A proxy, one of the standard GoF design patterns, sits in between a client and an object and intercepts messages that the client sends to the object. For lazy loading, this means that we can instantiate a proxy in place of a database-loaded object, and the client can use it without even knowing that it’s using a proxy. For performance reasons, the actual database object will only be loaded on first access of the proxy. Here’s a truncated example:

public class OrderProxy : IOrder
{
    private IOrder proxiedOrder = null;
    private long id;
    private bool isLoaded = false;

    public OrderProxy(long id)
    {
        this.id = id;
    }

    private void Load()
    {
        if (!isLoaded)
        {
           proxiedOrder = Find();
           isLoaded = true;
        }
    }

    private IOrder Find()
    {
        return FinderRegistry.OrderFinder.Find(id);
    }

    public string OrderNumber
    {
        get
        {
           Load();
           return proxiedOrder.OrderNumber;
        }
        set
        {
           Load();
           proxiedOrder.OrderNumber = value;
        }
    }

    public DateTime DateSubmitted
    {
        get
        {
           Load();
           return proxiedOrder.DateSubmitted;
        }
    }
}

This code is boring to write and boring to maintain. Every time the interface changes, a very repetitious change has to be made in the proxy. To make it worse, we have to do this for every database entity we’ll want to load (at least those we’re worried about lazy-loading). All I’d really like to say is “make this class implement the appropriate interface, and make it a lazy-loading proxy.” Fortunately, since the proxy is supposed to be a drop-in replacement for any other class implementing the same interface, we can use reflection to query the interface and statically generate the proxy.

There’s an important limitation to generating this code statically. Because we’re doing this before compilation, this approach requires a separated interfaces approach, where the binary containing the interfaces is separate from the assembly we’re generating the proxies for. We’ll have to compile the interfaces, use reflection on the compiled assembly to generate the source code for the proxies, and compile the newly generated source code.

But it’s do-able. Simply load the interface using reflection:

public static Type GetType(string name, string nameSpace, string assemblyFileName)
{
    if (!File.Exists(assemblyFileName))
        throw new IOException("No such file");

    Assembly assembly = Assembly.LoadFile(Path.GetFullPath(assemblyFileName));
    string qualifiedName = string.Format("{0}.{1}", nameSpace, name);
    return assembly.GetType(qualifiedName, true, true);
}

From there it’s pretty trivial to loop through the properties and methods and recreate the source code for them on the proxy, with a call to Load before delegating to the proxied object.

Runtime Metaprogramming

Now it turns out that when I wrote the code generation code above, there weren’t very many mature object-relational mappers in the .NET space. Fortunately, that’s changed, and the code above is no longer necessary. NHibernate will lazy-load for you, using a proxy approach similar to the one I used above. Except NHibernate writes the proxy code at runtime.

The mechanics of how this works are encapsulated in a nice little library called Castle.DynamicProxy. NHibernate uses reflection to read interfaces (or classes with virtual members) and calls DynamicProxy to generate code at runtime using the Reflection.Emit namespace. In C#, that’s a difficult thing to do by hand, which is why I wouldn’t recommend it unless you use something like DynamicProxy.

This is a much more powerful technique than static code generation. For starters, you no longer need two assemblies, one for the interfaces, and one for the proxies. But the power of runtime metaprogramming extends well beyond saving you a simple .NET assembly.

Ruby makes metaprogramming much easier than C#. The standard Rails object-relational mapper also uses proxies to manage associations, but the metaprogramming applies even to the model classes themselves (which are equivalent to the classes that implement our .NET interfaces). The truncated IOrder implementation above showed 3 properties: Id, OrderNumber, and DateSubmitted. Assuming we have those columns in our orders table in the database, then the following Ruby class completely implements the same interface:

class Order < ActiveRecord::Base
end

At runtime, the ActiveRecord::Base superclass will load the schema of the orders table and, for each column, add a property of the same name to the Order class. Now we really see the power of metaprogramming: it helps us keep our code DRY. If it’s already specified in the database schema, why should we have to specify it in our application code as well?
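
The mechanism is easy to sketch. The toy base class below is not the real ActiveRecord implementation (the real one reads the column list from the database; this one hard-codes it), but it shows the move: discover the columns, then define a getter and setter pair for each one:

class ToyRecord
  def self.columns
    # ActiveRecord reads these from the table's schema; hard-coded here
    %w[id order_number date_submitted]
  end

  # Called when a class inherits from ToyRecord; define accessors for each column.
  def self.inherited(subclass)
    columns.each do |column|
      subclass.send(:define_method, column) { @attributes[column] }
      subclass.send(:define_method, "#{column}=") { |value| @attributes[column] = value }
    end
  end

  def initialize
    @attributes = {}
  end
end

class Order < ToyRecord
end

order = Order.new
order.order_number = "A-100"
puts order.order_number        # A-100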

Syntactic Macros

It probably wouldn’t make much sense to generate lazy-loading proxies at compile time, but that doesn’t mean syntactic macros don’t have their place. Used appropriately, they can DRY up your code in ways that even runtime metaprogramming cannot.

Peter Seibel gives a good example of building a unit test framework in Common Lisp. The idea is that we’d like to assert certain code is true, but also show the asserted code in our report. For example:

pass ... (= (+ 1 2) 3)
pass ... (= (+ 1 2 3) 6)
pass ... (= (+ -1 -3) -4)

The code to make this work, assuming report-result is implemented correctly, looks like this:

(defun test-+ ()
  (report-result (= (+ 1 2) 3) '(= (+ 1 2) 3))
  (report-result (= (+ 1 2 3) 6) '(= (+ 1 2 3) 6))
  (report-result (= (+ -1 -3) -4) '(= (+ -1 -3) -4)))

Notice the ugly duplication in each call to report-result. We have the code that’s actually executed (the first parameter), and the quoted list to report (the second parameter). Runtime metaprogramming could not solve the problem because the first parameter will be evaluated before being passed to report-result. Static code-generation could remove the duplication, but would be ugly. We could DRY up the code at compile time, if only we had access to the abstract syntax tree. Fortunately, in CL, the source code is little more than a textual representation of the AST.

Here’s the macro that Seibel comes up with:

(defmacro check (&body forms)
  `(progn
    ,@(loop for f in forms collect `(report-result ,f ',f))))

Notice how the source code within the list (represented as the loop variable f) is both executed and quoted. The test now becomes much simpler:

(defun test-+ ()
  (check (= (+ 1 2) 3))
  (check (= (+ 1 2 3) 6))
  (check (= (+ -1 -3) -4)))

Summary

Finding ways to eliminate duplication is always A Good Thing. For a long time, if you were programming in a mainstream language, then static code generation was your only option when code generation was needed. Things changed with the advent of reflection-based languages, particularly when Java and C# joined the list of mainstream languages. Even though their metaprogramming capabilities aren’t as powerful as those of languages like Smalltalk and Ruby, they at least introduced metaprogramming techniques to the masses.

Of course, Lisp has been around since, say, the 1950’s (I’m not sure how long macros have been around, however). Syntactic macros provide a very powerful way of generating code, even letting you change the language. But until more languages implement them, they will never become as popular as they should be.

Written by Brandon Byars

March 29, 2008 at 6:00 pm

True Multiple Dispatch

When I first started learning about Design Patterns some years ago, I found myself having to revisit the Visitor pattern over and over again. Something about it just didn’t click with me.

Several years later, I now have experience using the Visitor pattern in a few applications. But the idea of multiple dispatch still seems a bit foreign to me. The authors of Design Patterns claimed that Visitors allowed you to imitate multiple dispatch in languages that didn’t natively support it. That was a rather mind-blowing statement to me – how can a language natively support something as strange as multiple dispatch?

Lately, I’ve been learning a little bit about Lisp, and Common Lisp (CL) does natively support multiple dispatch. It turns out that understanding CL’s approach to object-orientation is useful for more than just a better understanding of the Visitor pattern.

The Fundamental Unit of Behavior

Most programmers in mainstream OO languages will tell you that the class is the fundamental unit of behavior. There are subtleties – for example, in Smalltalk and Ruby, classes are actually objects, which arguably makes the object the fundamental unit, but both of those languages still use classes to organize behavior. Languages like JavaScript don’t use classes at all, but achieve similar results through adding behavior to an object’s prototype.

Common Lisp is class-based, but in CL, the class simply holds the data, like a C struct. When it comes to behavior, the method reigns supreme.

Putting methods into classes, which would have made CL look more like other object-oriented languages, never worked out very well. The message passing paradigm popularized by Smalltalk made methods and traditional Lisp functions semantically different, meaning much of the power that stems from Lisp’s functional roots was lost. So the authors of the Common Lisp object system (CLOS) unified methods with functions, giving birth to the generic function.

Generic functions are like abstract methods in other languages. They define an interface for behavior, but defer the implementation of that behavior. But – and this is an awfully big but – generic functions are not attached to any class.

(defgeneric area (shape)
   (:documentation "Returns the area of the given shape."))

In typical functional style, the area function takes the object as a parameter, instead of sending the object an area message as you would in traditional OO languages. So far, this isn’t too different from Python, which requires every method to declare self as its first parameter.

The actual implementation of a generic function is defined by methods. But methods don’t belong to classes; they belong to the generic function. Each method for a generic function is a specialization of that function for specific classes. For example:

(defmethod area ((shape square))
   (* (width shape) (width shape)))

(defmethod area ((shape circle))
   (* (radius shape) (radius shape) *pi*))

In a traditional OO language, you would have the Shape class define an abstract Area method, which was implemented in both the Square and Circle subclasses. In contrast, when the area generic function is called in CL, it compares the actual argument to the formal arguments in each of its applicable methods. If it finds a matching method, then its code is executed (and, although it’s outside the scope of this article, if it finds multiple matching methods, they are all called according to certain combination rules).

Multimethods

The area example above is simple because it takes only one argument (equivalent to taking no arguments in message passing languages (except Python (of course!))).

It is obviously possible to have generic functions that operate on multiple arguments. And – and this is an awfully big and – methods can specialize on each of those arguments, not just the first one. Methods that specialize on multiple parameters are called multimethods (the following silly example was adapted from here):

(defgeneric test (x y)
   (:documentation "Returns the type of the two arguments"))

(defmethod test ((x number) (y number))
   '(num num))

(defmethod test ((i integer) (y number))
   '(int num))

(defmethod test ((x number) (j integer))
   '(num int))

(defmethod test ((i integer) (j integer))
   '(int int))

(test 1 1)      => (int int)
(test 1 1/2)    => (int num)
(test 1/2 1)    => (num int)
(test 1/2 1/2)  => (num num)

Specializing on more than one parameter doesn’t help with the combinatorial explosion, but it is occasionally useful. And it’s not something you can do natively in a message passing language, which is why you need support from design patterns. The great thing about native support for multiple dispatch is that you can do it without any of the plumbing code that design patterns require (no confusing Visit and Accept methods). There is a common complaint against design patterns that says they exist simply to cover up language limitations. At least in the case of the poor Visitor, that certainly seems to be the case.
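
To see the plumbing in question, here is what faking the shape example's double dispatch looks like in a single-dispatch language like Ruby (the classes are invented for illustration); the two area defmethods earlier replace all of it:

# Single dispatch chooses accept from the shape's class; each accept then
# hand-picks the visitor method for that class, faking the second dispatch.
class Square
  attr_reader :width

  def initialize(width)
    @width = width
  end

  def accept(visitor)
    visitor.visit_square(self)
  end
end

class Circle
  attr_reader :radius

  def initialize(radius)
    @radius = radius
  end

  def accept(visitor)
    visitor.visit_circle(self)
  end
end

class AreaVisitor
  def visit_square(square)
    square.width * square.width
  end

  def visit_circle(circle)
    circle.radius * circle.radius * Math::PI
  end
end

visitor = AreaVisitor.new
puts [Square.new(2), Circle.new(1)].map { |shape| shape.accept(visitor) }.inspect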

Written by Brandon Byars

February 14, 2008 at 10:57 am

Posted in Design Patterns, Languages, Lisp


Understanding Syntactic Macros

Paul Graham finally persuaded me to pick up a little Lisp. I’d already learned some Scheme, but the books I read were mainly academic in nature (here and here), and didn’t show off the power that Paul keeps talking about. So I switched over to Practical Common Lisp hoping that would give me a better view.

It’s very obvious that a number of programming concepts that have been added in various languages to “revolutionize” the language landscape are nothing more than rediscoveries of Lisp concepts. But the One Big Thing that Lisp has and no other language does is its macro system. I don’t think I’d be doing Lisp a disservice by saying that macros are at the heart of why Lisp programmers claim their language is the most powerful one on the planet.

Lisp macros take some getting used to. I found it useful to compare them to related concepts from other languages.

Preprocessor Substitution

It is perhaps unfortunate that the word “macro” is used to describe superficially similar concepts in both C and Lisp. In actuality, the two concepts are as different as night and day.

C’s macro system is nothing more than preprocessor text substitution. It was never really intended as a tool for abstraction, although some rather miserable attempts were made to use it as one (remember MFC message maps?). Rather, it was added as a performance hack, like a low-level inline function. There are some good uses of C macros, but in general they’re overused.

Let’s pretend, though, that we wish to go ahead and abstract looping into a C macro. The idiomatic C loop looks something like this:

int i;
for (i=0; i < n; i++) {
    /* loop body */
}

There’s an awful lot of duplication in doing that again and again, so let’s try and move it into a macro:

/* The \ character at the end of a line is a continuation character */
#define LOOP(n) \
    int i; \
    for (i=0; i < (n); i++)

Now the looping code might look something like this:

LOOP(n) {
    /* loop body */
}

The obvious problem with this macro is that you can only use it once in any given lexical scope, and only if you haven’t already declared the variable i. C macros don’t define their own scope as a function would, which is one big reason they struggle to act as good abstractions.

But, to continue the fantasy, let’s pretend that LOOP works just fine for us. So good, in fact, that we use it often enough to notice the following recurring pattern:

LOOP(10) {
    printf("%d", i);
}

Beautiful code, without question, but because we see it so often, we’d like to abstract it out further. Something like this:

#define LOOP10 \
    LOOP(10) { \
        printf("%d", i); \
    }

Except that doesn’t buy us as much as it looks like it should. The preprocessor does rescan a macro’s output, so one macro can expand into another, but it will never expand a macro recursively, and every expansion is still a blind textual substitution: the macro has no way to examine or compute over the code it is generating.

Code Generation

So the C macro system is a simple code generation scheme. We could, in theory, beef it up: allow macros to expand recursively, building towers of macros that are only turned into real code in a final pass.

However, without some static analysis, we still won’t be able to generate unique variable names within any given lexical scope. Actually, even with static analysis, we won’t be able to do so. The problem is that any run-time code generation, like using Reflection.Emit in C#, or instance_variable_set in Ruby, thwarts the static analysis.

To make matters worse, a code generator sits outside the surrounding code. To make it work, you have to write the instructions in a pseudo-language, different from the surrounding language. C macros, for example, have rules different from normal C code. It’s common to uppercase them, which helps make it obvious that this clip of code you’re looking at isn’t standard C code.

Metaprogramming

Reflection in powerful languages like Ruby can give you some of the same benefits as macros, but they are very different concepts. Metaprogramming is a run-time concept. Macros are like metaprogramming at compile time (or interpretation-time—Lisp lets you decide if you want to compile the code first or not). What’s the difference?

Run-time metaprogramming uses dynamic data types to remove duplication. The ActiveRecord class in Ruby on Rails is a wonderful example. Unlike traditional object-relational mappers, you don’t have to specify the mapping from columns to properties using ActiveRecord. In fact, you don’t even have to specify the properties at all!

At run-time, an ActiveRecord class queries the database table whose name matches the name of the class, according to a well-known (and configurable) naming scheme. It then automatically adds property getters and setters for each column in the table. So you could have the following class definition:

class Order < ActiveRecord::Base
end

And you could use it like this:

order = Order.find(1)
puts order.id
puts order.ship_to_name

Notice that we never had to define id or ship_to_name. Rails used metaprogramming to define them for us. Some languages, like Ruby and Smalltalk, also let you intercept messages for which no matching method can be found. Rails uses this feature to give the appearance of adding special find methods:

Order.find_by_ship_to_name
Order.find_by_city_and_state
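
A rough sketch of how that interception can work. This is not the real Rails implementation, and the in-memory table stands in for the database, but the shape is the same: catch the unknown method name, parse the column names out of it, and turn them into a query:

class ToyModel
  RECORDS = [
    { "id" => 1, "ship_to_name" => "Ada",   "city" => "London" },
    { "id" => 2, "ship_to_name" => "Grace", "city" => "Arlington" }
  ]

  # Ruby calls method_missing when no method matches; anything shaped like
  # find_by_<column>[_and_<column>...] is turned into a lookup on the fly.
  def self.method_missing(name, *args)
    if name.to_s =~ /\Afind_by_(.+)\z/
      columns = $1.split("_and_")
      RECORDS.find { |row| columns.zip(args).all? { |col, value| row[col] == value } }
    else
      super
    end
  end
end

puts ToyModel.find_by_ship_to_name("Ada").inspect
puts ToyModel.find_by_city_and_ship_to_name("Arlington", "Grace").inspect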

Metaprogramming is essentially code generation at run-time, which provides extreme flexibility at the cost of some of the performance that static code generation can offer. But – and here’s what separates metaprogramming from syntactic macros – metaprogramming magic is bound by the rules of the language. You cannot change the language at run-time.

Syntactic Macros

So how do Lisp macros differ from static code-generation or metaprogramming?

To answer that question, we first need to understand something about Lisp source code. Lisp is one of the few languages that is truly homoiconic, or self-representing. Take a look at the following Lisp lists:

(2 3)
(> 2 3)
(if (> 2 3) True False)

The first list is just data – a collection of atoms. The second list, if interpreted, returns T (true) or NIL (false), depending on the result of calling > with the given arguments. The third list returns the atom True (not the same as T) or False, depending on the answer to the predicate.

But here’s the mind-stretching part: all three of these lists are just data. if and > are nothing more than atoms in a list. Lisp just has special processing rules that say, if the first atom in a list binds to a function name, apply that function to the rest of the list. The fact that Lisp has those processing rules doesn’t change the fact that its code is nothing more than a list of data.

Let’s look at a more involved if statement:

(if (hungry Spot)
    (progn
      (bark Spot)
      (wag-tail Spot)))

If Spot is hungry, he barks and wags his tail. Typically, an if statement allows only one clause if the predicate is true (and, optionally, one if it’s false). progn is a special Lisp form that lets you group multiple clauses together. It’s like the curly braces in C:

if (hungry(spot)) {
    bark(spot);
    wagTail(spot);
}

Without those braces, wagTail would happen unconditionally. Notice that Lisp provides the same functionality without any special syntax. progn is just another Lisp atom in a list, which, like any atom, is interpreted as a function (or something like a function) if it happens to be the first element of a list. Also notice that the if expression shown above is the source code for a full program – and that source code is just a list of atoms!

Maybe I shouldn’t get too carried away with that. In his book, Peter Seibel says too many people get so distracted by Lisp’s list-heritage that they miss some of the more practical aspects of the language, and in fact, lists probably aren’t used much in day-to-day usage. But the fact that there is no representational difference between Lisp data and Lisp source code means that a Lisp program can interpret its source code like any other data. Understanding that single point is the key to understanding macros.

Seibel introduces macros with a silly example, but one that’s useful for showing the code-as-data mentality. Here’s hello, world in Lisp:

(format t "hello, world")

The t atom (like true in C# and Java) tells format to print to standard output. Now imagine that we wrote the following macro:

(defmacro backwards (expr) (reverse expr))

backwards is the name of the macro, which takes an expression (represented as a list), and reverses it. Here’s hello, world again, this time using the macro:

(backwards ("hello, world" t format))

When the Lisp compiler sees that line of code, it looks at the first atom in the list (backwards), and notices that it names a macro. It passes the unevaluated list ("hello, world" t format) to the macro, which rearranges the list to (format t "hello, world"). The resulting list replaces the macro expression, and it is what will be evaluated at run-time. The Lisp environment will see that its first atom (format) is a function, and evaluate it, passing it the rest of the arguments.

So Lisp macros provide compile-time code generation, but in doing so, you have access not just to the code-generation instructions – which are just Lisp data – but also to the entire Lisp language and environment. And the resulting code doesn’t suffer a run-time performance penalty—the call to reverse above happens at compile time; the run-time environment sees only the expanded macro.

This really opens your eyes to new ways of removing duplication. Take the (if ... (progn ...)) example listed above. While progn is nice in that it gives you functionality similar to C’s braces without any special syntax, it can get a bit bulky to have to type it all the time. Common Lisp provides a when macro to make it easier; you’d use it like this:

(when (hungry Spot)
   (bark Spot)
   (wag-tail Spot))

But when doesn’t have to be baked into the language. Given if and progn, we could compile-time generate it with the following macro:

(defmacro when (condition &rest body)
   `(if ,condition (progn ,@body)))

OK, there’s some funky Lisp syntax in there (more than I want to go into), but underneath the covers, it’s still just lists and atoms. The &rest atom gathers up any remaining parameters (after the condition parameter has been satisfied) and puts them into a single list, which you can access through the variable body. The if list is quoted because we don’t want the macro to evaluate the if expression – remember, it will try to evaluate it if the first atom matches a function (or special form, or macro). Instead, we want to return the unevaluated list that will be evaluated later, at run-time. The comma and comma-at-sign are ways of splicing the parameters into the list, similar to string interpolation in Ruby.

So, in our when example with Spot the dog, (hungry Spot) will be the condition parameter, and ((bark Spot) (wag-tail Spot)) will be the body parameter. When the macro expands, it returns a list (source code) that looks just like the (if...(progn...)) we wrote by hand.

Lisp also has a handy way of dealing with the variable naming problem we saw with our C macro, where we hard-coded the variable name i. The function gensym creates a fresh symbol that is guaranteed not to clash with any name the macro’s caller is using. You can assign the result of gensym to another variable, and use that other variable in your macro. It will be replaced with the generated name when the macro expands. (Nifty, no?)

Finally, Lisp macros compose without the preprocessor’s limits. Macros can expand into macros, which can themselves expand into macros, recursively, for as long as you want. In fact, macros are so integral to the language that this seems to be pretty common. Macros can call other macros or other functions. Some macros become so large and complex that an entire package may be dedicated to their supporting functions.

The great thing about macros is that they let you actually change the language (to an extent), something that metaprogramming cannot accomplish. Let’s say you don’t care for Lisp’s clunky looping mechanisms. You’ve seen list comprehensions in Python or Erlang and want to bring some of that elegance to Lisp.

Too late – somebody else already did. The loop macro is incredibly powerful and complex. Some old-school Lispers hate it, because it really does change the language, which, for Lispers, is a way of saying it’s not using as many parentheses as it should. The following example is taken from Practical Common Lisp:

(loop for i in *random*
   counting (evenp i) into evens
   counting (oddp i) into odds
   summing i into total
   maximizing i into max
   minimizing i into min
   finally (return (list min max total evens odds)))

The result of that expression is a list containing the minimum, maximum, and sum of the numbers in the *random* global variable, followed by a count of how many of them are even and a count of how many are odd.

Try doing that in C.

Written by Brandon Byars

February 12, 2008 at 2:54 pm

Posted in Languages, Lisp

