A Day In The Lyf

…the lyf so short, the craft so longe to lerne

Beating Sapir-Whorf

The Sapir-Whorf hypothesis is a famous linguistic theory that postulates that what you think is constrained by what you can say. In other words, the way you understand the world is limited by the language(s) you speak.

The hypothesis, at least in its strong form, is regarded as incorrect by modern linguists. However, the spirit of the theory appears to be largely true if we refer only to computer languages – our solution space is limited to the number of ways we know to express solutions. Paul Graham was indirectly talking about Sapir-Whorf when he described the Blub Paradox. Blub, the language that sits halfway up the hypothetical totem pole that represents the relative power of computer languages, has certain constructs that make languages lower on the pole seem downright dysfunctional. The Blub programmer would look down, realize that the language he’s looking at doesn’t even have some language feature X, and scoff at how much extra work it would be to write programs without X. X, the Blub programmer argues, is so fundamental to designing your application that it would be criminal to use any language without X.

When the Blub programmer looks up the totem pole, though, he just sees weird languages. What he doesn’t realize, of course, is that programmers in those languages are looking down at Blub and wondering how anybody could possibly program without language feature Y.

The Blub paradox makes a nice story, and while it has value, it oversimplifies the language landscape. There isn’t a linear totem pole for expressive power when it comes to languages. Graham suggests looking at the size of the parse tree, which is indeed a useful single metric, but it ignores aspects like long-term maintainability, readability, syntactic sugar, etc. The computer language landscape is diverse precisely because there are different opinions on the best way to solve different problems.

What should be obvious is that not all software problems should be solved with the same tool. Too many programmers go down that path, and use Java (or C#, or C++…) as the hammer and let everything become a nail.

Granted, we don’t always have control over which language we use. We write programs in Java (or C#, or C++…) because that’s our job, even though we may suspect that it’s not always the best tool. Fair enough. But if our ability to describe solutions to a problem is constrained by the programming languages we know, doesn’t it make sense to learn other languages even if we can’t use them on the job?

Knowing a diverse set of programming paradigms can help you become a better programmer. It’s helped me. For example, I wrote a closure-based undo in C#. A more typical OO-based undo would have resorted to the command design pattern, but the closure-based implementation required far fewer changes to the codebase. I don’t think I would have come up with the solution if I had not had some prior exposure to functional languages.

The key is to make sure that when you pick up a new language, it’s different enough from languages you already know that you force yourself to wrap your head around some new concept. You don’t get that, for instance, going from Java to C#. I’ve tried learning at least a little about several languages, and found useful new concepts in the following:

In the object-oriented space:

  • C# / Java (statically typed OO)
  • Eiffel (design-by-contract, anchored types)
  • Ruby (open classes, incredible reflection, metaprogramming)
  • Smalltalk (like Ruby, but with keyword arguments and a very lightweight syntax)

In the hybrid OO space

  • Perl / Python (dynamic typing, default variables, list comprehensions)
  • C++ / Delphi (destructors)
  • JavaScript (prototypes instead of classes, hybrid functional)

In the procedural space:

  • C (weak typing, systems level procedural, teaches good understanding of computer memory)

In the functional space:

  • Scheme / Common Lisp (homoiconic, syntactic macros, multiple dispatch, continuations, no penalty for tail recursion)
  • Erlang (nice distributed model, pattern matching)

In the “everything-else” space:

  • shell scripting (using processes as the unit of abstraction)
  • SQL (yes, even the much-maligned relational, set-based model provides another, occasionally superior, way of solving problems)
  • XSLT (declarative)

In the “I wish I could speak intelligently about” space:

  • Prolog (logic programming)
  • Forth (stack-based programming)

PragDave recommends that you learn a new programming language each year. It’s terrific advice. Pick a language, find some coding exercises, and try to wrap your head around solving those problems idiomatically in the new language. Don’t be surprised if you start out solving them the same way you would have in a language you already knew, but remember the end goal is to learn new problem solving techniques, not just new syntax. Since you probably won’t be using the language on the job, you’ll almost certainly forget the syntax after a while. Start posting questions to mailing lists, blog about some of your sample work, and don’t be afraid to make a fool of yourself. The way I see it, if you’re still afraid to make a fool of yourself, then you clearly haven’t made a fool of yourself enough.


Written by Brandon Byars

May 13, 2008 at 11:09 pm

Posted in Languages

Code Generation and Metaprogramming

I wanted to expand upon an idea that I first talked about in my previous post on Common Lisp. There is a common pattern between syntactic macros, runtime metaprogramming, and static code generation.

Runtime metaprogramming is code-generation. Just like C macros. Just like CL macros.

Ok, that’s a bit of an overstatement. Those three things aren’t really just like each other. But they are definitely related—they all write code that you’d rather not write yourself. Because it’s boring. And repetitious. And ugly.

In general, there are three points at which you can generate code in the development process, although the terminology leaves something to be desired: before compilation, during compilation (or interpretation), and during runtime. In the software development vernacular, only the first option is typically called code-generation (I’ll call it static code generation to avoid confusion). Code generation during compilation goes under the moniker of a ‘syntactic macro,’ and I’m calling runtime code generation ‘runtime metaprogramming.’

Since the “meta” in metaprogramming implies writing code that writes code, all three forms of code generation can be considered metaprogramming, which is why I snuck the “runtime” prefix into the third option above. Just in case you were wondering…

Static Code Generation

Static code generation is the easiest to understand and the weakest of the three options, but it’s often your only option due to language limitations. C macros are an example of static code generation, and it is the only metaprogramming option possible with C out-of-the box.

To take an example, on a previous project I generated code for lazy loading proxies in C#. A proxy, one of the standard GoF design patterns, sits in between a client and an object and intercepts messages that the client sends to the object. For lazy loading, this means that we can instantiate a proxy in place of a database-loaded object, and the client can use it without even knowing that it’s using a proxy. For performance reasons, the actual database object will only be loaded on first access of the proxy. Here’s a truncated example:

public class OrderProxy : IOrder
{
    private IOrder proxiedOrder = null;
    private long id;
    private bool isLoaded = false;

    public OrderProxy(long id)
    {
        this.id = id;
    }

    private void Load()
    {
        if (!isLoaded)
        {
           proxiedOrder = Find();
           isLoaded = true;
        }
    }

    private IOrder Find()
    {
        return FinderRegistry.OrderFinder.Find(id);
    }

    public string OrderNumber
    {
        get
        {
           Load();
           return proxiedOrder.OrderNumber;
        }
        set
        {
           Load();
           proxiedOrder.OrderNumber = value;
        }
    }

    public DateTime DateSubmitted
    {
        get
        {
           Load();
           return proxiedOrder.DateSubmitted;
        }
    }
}

This code is boring to write and boring to maintain. Every time the interface changes, a very repetitious change has to be made in the proxy. To make it worse, we have to do this for every database entity we’ll want to load (at least those we’re worried about lazy-loading). All I’d really like to say is “make this class implement the appropriate interface, and make it a lazy-loading proxy.” Fortunately, since the proxy is supposed to be a drop-in replacement for any other class implementing the same interface, we can use reflection to query the interface and statically generate the proxy.

There’s an important limitation to generating this code statically. Because we’re doing this before compilation, this approach requires a separated interfaces approach, where the binary containing the interfaces is separate from the assembly we’re generating the proxies for. We’ll have to compile the interfaces, use reflection on the compiled assembly to generate the source code for the proxies, and compile the newly generated source code.

But it’s do-able. Simply load the interface using reflection:

public static Type GetType(string name, string nameSpace, string assemblyFileName)
{
    if (!File.Exists(assemblyFileName))
        throw new IOException("No such file");

    Assembly assembly = Assembly.LoadFile(Path.GetFullPath(assemblyFileName));
    string qualifiedName = string.Format("{0}.{1}", nameSpace, name);
    return assembly.GetType(qualifiedName, true, true);
}

From there it’s pretty trivial to loop through the properties and methods and recreate the source code for them on the proxy, with a call to Load before delegating to the proxied object.
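
To give a flavor of what that looks like, here’s a rough sketch of the property-generating loop (not the original code; the string formatting and the hard-coded proxiedOrder field name are only for illustration):

private static string GenerateProperties(Type interfaceType)
{
    // Walk each property on the interface and emit a proxy property that
    // loads the real object before delegating to it.
    StringBuilder code = new StringBuilder();
    foreach (PropertyInfo property in interfaceType.GetProperties())
    {
        code.AppendFormat("    public {0} {1}\r\n    {{\r\n",
            property.PropertyType.FullName, property.Name);
        if (property.CanRead)
            code.AppendFormat("        get {{ Load(); return proxiedOrder.{0}; }}\r\n",
                property.Name);
        if (property.CanWrite)
            code.AppendFormat("        set {{ Load(); proxiedOrder.{0} = value; }}\r\n",
                property.Name);
        code.AppendLine("    }");
    }
    return code.ToString();
}

Methods work the same way, except you also have to recreate each parameter list. The generated source then gets compiled in the final step of the build, as described above.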

Runtime Metaprogramming

Now it turns out that when I wrote the code generation code above, there weren’t very many mature object-relational mappers in the .NET space. Fortunately, that’s changed, and the code above is no longer necessary. NHibernate will lazy-load for you, using a proxy approach similar to the one I used above. Except NHibernate writes the proxy code at runtime.

The mechanics of how this works are encapsulated in a nice little library called Castle.DynamicProxy. NHibernate uses reflection to read interfaces (or classes with virtual members) and calls DynamicProxy to generate code at runtime using the Reflection.Emit namespace. In C#, that’s a difficult thing to do by hand, which is why I wouldn’t recommend it unless you go through DynamicProxy.
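
To make that concrete, here’s a sketch of what a lazy-loading interceptor might look like (the interceptor interface has changed across DynamicProxy versions, so treat the method names and signatures here as approximations rather than the exact API):

public class LazyLoadInterceptor : IInterceptor
{
    private IOrder proxiedOrder;
    private readonly long id;

    public LazyLoadInterceptor(long id)
    {
        this.id = id;
    }

    public void Intercept(IInvocation invocation)
    {
        // Load the real object on first access, then forward the
        // intercepted call to it.
        if (proxiedOrder == null)
            proxiedOrder = FinderRegistry.OrderFinder.Find(id);

        invocation.ReturnValue = invocation.Method.Invoke(proxiedOrder, invocation.Arguments);
    }
}

// Usage, roughly:
// IOrder order = new ProxyGenerator()
//     .CreateInterfaceProxyWithoutTarget<IOrder>(new LazyLoadInterceptor(1));

The difference from the static version is that the IL for the proxy class is emitted on the fly, so there is no generated source file to keep in sync with the interface.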

This is a much more powerful technique than static code generation. For starters, you no longer need two assemblies, one for the interfaces and one for the proxies. But the power of runtime metaprogramming extends well beyond saving you an extra .NET assembly.

Ruby makes metaprogramming much easier than C#. The standard Rails object-relational mapper also uses proxies to manage associations, but the metaprogramming applies even to the model classes themselves (which are equivalent to the classes that implement our .NET interfaces). The truncated IOrder implementation above showed 3 properties: Id, OrderNumber, and DateSubmitted. Assuming we have those columns in our orders table in the database, then the following Ruby class completely implements the same interface:

class Order < ActiveRecord::Base
end

At runtime, the ActiveRecord::Base superclass will load the schema of the orders table and, for each column, add a property of the same name to the Order class. Now we really see the power of metaprogramming: it helps us keep our code DRY. If it’s already specified in the database schema, why should we have to specify it in our application code as well?

Syntactic Macros

It probably wouldn’t make much sense to generate lazy-loading proxies at compile time, but that doesn’t mean syntactic macros don’t have their place. Used appropriately, they can DRY up your code in ways that even runtime metaprogramming cannot.

Peter Seibel gives a good example of building a unit test framework in Common Lisp. The idea is that we’d like to assert certain code is true, but also show the asserted code in our report. For example:

pass ... (= (+ 1 2) 3)
pass ... (= (+ 1 2 3) 6)
pass ... (= (+ -1 -3) -4)

The code to make this work, assuming report-result is implemented correctly, looks like this:

(defun test-+ ()
  (report-result (= (+ 1 2) 3) '(= (+ 1 2) 3))
  (report-result (= (+ 1 2 3) 6) '(= (+ 1 2 3) 6))
  (report-result (= (+ -1 -3) -4) '(= (+ -1 -3) -4)))

Notice the ugly duplication in each call to report-result. We have the code that’s actually executed (the first parameter), and the quoted list to report (the second parameter). Runtime metaprogramming could not solve the problem because the first parameter will be evaluated before being passed to report-result. Static code-generation could remove the duplication, but would be ugly. We could DRY up the code at compile time, if only we had access to the abstract syntax tree. Fortunately, in CL, the source code is little more than a textual representation of the AST.

Here’s the macro that Seibel comes up with:

(defmacro check (&body forms)
  `(progn
    ,@(loop for f in forms collect `(report-result ,f ',f))))

Notice how the source code within the list (represented as the loop variable f) is both executed and quoted. The test now becomes much simpler:

(defun test-+ ()
  (check (= (+ 1 2) 3))
  (check (= (+ 1 2 3) 6))
  (check (= (+ -1 -3) -4)))

Summary

Finding ways to eliminate duplication is always A Good Thing. For a long time, if you were programming in a mainstream language, then static code generation was your only option when code generation was needed. Things changed with the advent of reflection based languages, particularly when Java and C# joined the list of mainstream languages. Even though their metaprogramming capability isn’t as powerful as languages like Smalltalk and Ruby, they at least introduced metaprogramming techniques to the masses.

Of course, Lisp has been around since, say, the 1950’s (I’m not sure how long macros have been around, however). Syntactic macros provide a very powerful way of generating code, even letting you change the language. But until more languages implement them, they will never become as popular as they should be.

Written by Brandon Byars

March 29, 2008 at 6:00 pm

True Multiple Dispatch

When I first started learning about Design Patterns some years ago, I found myself having to revisit the Visitor pattern over and over again. Something about it just didn’t click with me.

Several years later, I now have experience using the Visitor pattern in a few applications. But the idea of multiple dispatch still seems a bit foreign to me. The authors of Design Patterns claimed that Visitors allowed you to imitate multiple dispatch in languages that didn’t natively support it. That was a rather mind-blowing statement to me – how can a language natively support something as strange as multiple dispatch?

Lately, I’ve been learning a little bit about Lisp, and Common Lisp (CL) does natively support multiple dispatch. It turns out that understanding CL’s approach to object-orientation is useful for more than just a better understanding of the Visitor pattern.

The Fundamental Unit of Behavior

Most programmers in mainstream OO languages will tell you that the class is the fundamental unit of behavior. There are subtleties – for example, in Smalltalk and Ruby, classes are actually objects, which arguably makes the object the fundamental unit, but both of those languages still use classes to organize behavior. Languages like JavaScript don’t use classes at all, but achieve similar results through adding behavior to an object’s prototype.

Common Lisp is class-based, but in CL, the class simply holds the data, like a C struct. When it comes to behavior, the method reigns supreme.

Putting methods into classes, which would have made CL look more like other object-oriented languages, never worked out very well. The message passing paradigm popularized by Smalltalk made methods and traditional Lisp functions semantically different, meaning much of the power that stems from Lisp’s functional roots was lost. So the authors of the Common Lisp object system (CLOS) unified methods with functions, giving birth to the generic function.

Generic functions are like abstract methods in other languages. They define an interface for behavior, but defer the implementation of that behavior. But – and this is an awfully big but – generic functions are not attached to any class.

(defgeneric area (shape)
   (:documentation "Returns the area of the given shape."))

In typical functional style, the area function takes the object as a parameter, instead of sending the object an area message as you would in traditional OO languages. So far, this isn’t too different from Python, which requires you to declare self as the first parameter of every method.

The actual implementation of a generic function is defined by methods. But methods don’t belong to classes; they belong to the generic function. Each method for a generic function is a specialization of that function for specific classes. For example:

(defmethod area ((shape square))
   (* (width shape) (width shape)))

(defmethod area ((shape circle))
   (* (radius shape) (radius shape) *pi*))

In a traditional OO language, you would have the Shape class define an abstract Area method, which was implemented in both the Square and Circle subclasses. In contrast, when the area generic function is called in CL, it compares the actual argument to the formal arguments in each of its applicable methods. If it finds a matching method, then its code is executed (and, although it’s outside the scope of this article, if it finds multiple matching methods, they are all called according to certain combination rules).

Multimethods

The area example above is simple because it takes only one argument (equivalent to taking no arguments in message passing languages (except Python (of course!))).

It is obviously possible to have generic functions that operate on multiple arguments. And – and this is an awfully big and – methods can specialize on each of those arguments, not just the first one. Methods that specialize on multiple parameters are called multimethods (the following silly example was adapted from here):

(defgeneric test (x y)
   (:documentation "Returns the types of the two arguments."))

(defmethod test ((x number) (y number))
   '(num num))

(defmethod test ((i integer) (y number))
   '(int num))

(defmethod test ((x number) (j integer))
   '(num int))

(defmethod test ((i integer) (j integer))
   '(int int))

(test 1 1)      => (int int)
(test 1 1/2)    => (int num)
(test 1/2 1)    => (num int)
(test 1/2 1/2)  => (num num)

Specializing on more than one parameter doesn’t help with the combinatorial explosion, but it is occasionally useful. And it’s not something you can do natively in a message passing language, which is why you need help from design patterns. The great thing about native support for multiple dispatch is that you can do it without any of the plumbing code that design patterns require (no confusing Visit and Accept methods). There is a common complaint against design patterns that says they exist simply to cover up language limitations. At least in the case of the poor Visitor, that certainly seems to be the case.
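
For comparison, here’s a minimal sketch of that Visit/Accept plumbing in C# (hypothetical Shape classes, just to show the two hops that simulate double dispatch):

public interface IShapeVisitor
{
    void Visit(Square square);
    void Visit(Circle circle);
}

public abstract class Shape
{
    public abstract void Accept(IShapeVisitor visitor);
}

public class Square : Shape
{
    public double Width;

    // First dispatch: the virtual call picks Square.Accept at runtime.
    public override void Accept(IShapeVisitor visitor)
    {
        // Second dispatch: overload resolution picks Visit(Square).
        visitor.Visit(this);
    }
}

public class Circle : Shape
{
    public double Radius;

    public override void Accept(IShapeVisitor visitor)
    {
        visitor.Visit(this);
    }
}

public class AreaVisitor : IShapeVisitor
{
    public double Area;

    public void Visit(Square square) { Area = square.Width * square.Width; }
    public void Visit(Circle circle) { Area = Math.PI * circle.Radius * circle.Radius; }
}

Two dispatches, two extra methods per class, and a visitor interface that has to change every time a new shape is added. CLOS collapses all of that into a single generic function call.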

Written by Brandon Byars

February 14, 2008 at 10:57 am

Posted in Design Patterns, Languages, Lisp


Understanding Syntactic Macros

Paul Graham finally persuaded me to pick up a little Lisp. I’d already learned some Scheme, but the books I read were mainly academic in nature (here and here), and didn’t show off the power that Paul keeps talking about. So I switched over to Practical Common Lisp hoping that would give me a better view.

It’s very obvious that a number of programming concepts that have been added in various languages to “revolutionize” the language landscape are nothing more than rediscoveries of Lisp concepts. But the One Big Thing that Lisp has and no other language does is its macro system. I don’t think I’d be doing Lisp a disservice by saying that macros are at the heart of why Lisp programmers claim their language is the most powerful one on the planet.

Lisp macros take some getting used to. I found it useful to compare them to related concepts from other languages.

Preprocessor Substitution

It is perhaps unfortunate that the word “macro” is used to describe superficially similar concepts in both C and Lisp. In actuality, the two concepts are as different as night and day.

C’s macro system is nothing more than a preprocessor textual substitution. It really was never intended as a tool for abstraction, although some rather miserable attempts were made to treat them as such (remember MFC message maps?). Rather it was added as a performance hack, like a low-level inline function. There are some good uses of C macros, but in general they’re overused.

Let’s pretend, though, that we wish to go ahead and abstract looping into a C macro. The idiomatic C loop looks something like this:

int i;
for (i=0; i < n; i++) {
    /* loop body */
}

There’s an awful lot of duplication in doing that again and again, so let’s try and move it into a macro:

/* The \ character at the end of a line is a continuation character */
#define LOOP(n) \
    int i; \
    for (i=0; i < (n); i++)

Now the looping code might look something like this:

LOOP(n) {
    /* loop body */
}

The obvious problem with this macro is that you can only use it once in any given lexical scope, and only if you haven’t already declared the variable i. C macros don’t define their own scope as a function would, which is one large reason they struggle to act as good abstractions.

But, to continue the fantasy, let’s pretend that LOOP works just fine for us. So good, in fact, that we use it often enough to notice the following recurring pattern:

LOOP(10) {
    printf("%d", i);
}

Beautiful code, without question, but because we see it so often, we’d like to abstract it out further. Something like this:

#define LOOP10 \
    LOOP(10) { \
        printf("%d", i); \
    }

Except that doesn’t buy us much. The preprocessor will happily rescan the expansion and expand LOOP inside LOOP10, but it won’t expand macros recursively, and a macro can never define new macros of its own, so this is about as far as the tower of abstractions can go.

Code Generation

So the C macro system is a simple code generation scheme. We could, in theory, expand the code generation capability to allow recursive expansion, or to let one C-like macro define another C-like macro that gets expanded into real code in a subsequent pass through the file.

However, without some static analysis, we still won’t be able to generate unique variable names within any given lexical scope. Actually, even with static analysis, we won’t be able to do so. The problem is that any run-time code generation, like using Reflection.Emit in C#, or instance_variable_set in Ruby, thwarts the static analysis.

To make matters worse, a code generator sits outside the surrounding code. To make it work, you have to write the instructions in a pseudo-language, different from the surrounding language. C macros, for example, have rules different from normal C code. It’s common to uppercase them, which helps make it obvious that this clip of code you’re looking at isn’t standard C code.

Metaprogramming

Reflection in powerful languages like Ruby can give you some of the same benefits of macros, but they are very different concepts. Metaprogramming is a run-time concept. Macros are like metaprogramming at compile time (or interpretation-time—Lisp lets you decide if you want to compile the code first or not). What’s the difference?

Run-time metaprogramming uses dynamic data types to remove duplication. The ActiveRecord class in Ruby on Rails is a wonderful example. Unlike traditional object-relational mappers, you don’t have to specify the mapping from columns to properties using ActiveRecord. In fact, you don’t even have to specify the properties at all!

At run-time, an ActiveRecord class queries the database table whose name matches the name of the class, according to a well-known (and configurable) naming scheme. It then automatically adds property getters and setters for each column in the table. So you could have the following class definition:

class Order < ActiveRecord::Base
end

And you could use it like this:

order = Order.find(1)
puts order.id
puts order.ship_to_name

Notice that we never had to define id or ship_to_name. Rails used metaprogramming to define them for us. Some languages, like Ruby and Smalltalk, also let you intercept messages sent to objects for which no method could be found to bind to. Rails uses this feature to also give the appearance of adding special find methods:

Order.find_by_ship_to_name
Order.find_by_city_and_state

Metaprogramming is essentially code-generation at run-time, which provides extreme flexibility at the cost of some performance that static code-generation might be able to provide. But – and here’s what separates metaprogramming from syntactic macros – metaprogramming magic is bound by the rules of the language. You cannot change the language at run-time.

Syntactic Macros

So how do Lisp macros differ from static code-generation or metaprogramming?

To answer that question, we first need to understand something about Lisp source code. Lisp is one of the few languages that is truly homoiconic, or self-representing. Take a look at the following Lisp lists:

(2 3)
(> 2 3)
(if (> 2 3) True False)

The first list is just data – a collection of atoms. The second list, if interpreted, returns T (true) or NIL (false), depending on the result of calling > with the given arguments. The third list returns the atom True (not the same as T) or False, depending on the answer to the predicate.

But here’s the mind-stretching part: all three of these lists are just data. if and > are nothing more than atoms in a list. Lisp just has special processing rules that say, if the first atom in a list binds to a function name, apply that function with the rest of the list. The fact that Lisp has those processing rules doesn’t change the fact that its code is nothing more than a list of data.

Let’s look at a more involved if statement:

(if (hungry Spot)
    (progn
      (bark Spot)
      (wag-tail Spot)))

If Spot is hungry, he barks and wags his tail. Typically, an if statement allows only one clause if the predicate is true (and, optionally, one if it’s false). progn is a special Lisp form that lets you group multiple clauses together. It’s like the curly braces in C:

if (hungry(spot)) {
    bark(spot);
    wagTail(spot);
}

Without those braces, wagTail would happen unconditionally. Notice that Lisp provides the same functionality without any special syntax. progn is just another Lisp atom in a list, which, like any atom, is interpreted as a function (or something like a function) if it happens to be the first element of a list. Also notice that the if expression shown above is the source code for a full program – and that source code is just a list of atoms!

Maybe I shouldn’t get too carried away with that. In his book, Peter Seibel says too many people get so distracted by Lisp’s list-heritage that they miss some of the more practical aspects of the language, and in fact, lists probably aren’t used much in day-to-day usage. But the fact that there is no representational difference between Lisp data and Lisp source code means that a Lisp program can interpret its source code like any other data. Understanding that single point is the key to understanding macros.

Seibel introduces macros with a silly example, but one that’s useful for showing the code-as-data mentality. Here’s hello, world in Lisp:

(format t "hello, world")

The t atom (like true in C# and Java) tells format to print to standard output. Now imagine that we wrote the following macro:

(defmacro backwards (expr) (reverse expr))

backwards is the name of the macro, which takes an expression (represented as a list), and reverses it. Here’s hello, world again, this time using the macro:

(backwards ("hello, world" t format))

When the Lisp compiler sees that line of code, it looks at the first atom in the list (backwards), and notices that it names a macro. It passes the unevaluated list ("hello, world" t format) to the macro, which rearranges the list to (format t "hello, world"). The resulting list replaces the macro expression, and it is what will be evaluated at run-time. The Lisp environment will see that its first atom (format) is a function, and evaluate it, passing it the rest of the arguments.

So Lisp macros provide compile-time code generation, but in doing so, you have access not just to the code-generation instructions – which are just Lisp data – but also to the entire Lisp language and environment. And the resulting code doesn’t suffer a run-time performance penalty—the call to reverse above happens at compile time; the run-time environment sees only the expanded macro.

This really opens your eyes to new ways of removing duplication. Take the (if ... (progn ...)) example listed above. While progn is nice in that it gives you functionality similar to C’s braces without any special syntax, it can get a bit bulky to have to type it all the time. Common Lisp provides a when macro to make it easier; you’d use it like this:

(when (hungry Spot)
   (bark Spot)
   (wag-tail Spot))

But when doesn’t have to be baked into the language. Given if and progn, we could compile-time generate it with the following macro:

(defmacro when (condition &rest body)
   `(if ,condition (progn ,@body)))

OK, there’s some funky Lisp syntax in there (more than I want to go into), but underneath the covers, it’s still just lists and atoms. The &rest atom gathers up any remaining parameters (after the condition parameter has been satisfied) and puts them into a single list, which you can access through the variable body. The if list is backquoted because we don’t want the macro to evaluate the if expression – remember, it will try to evaluate it if the first atom matches a function (or special form, or macro). Instead, we want to return the unevaluated list that will be evaluated later, at run-time. The comma and comma-at-sign are ways of splicing the parameters into the list, similar to string interpolation in Ruby.

So, in our when example with Spot the dog, (hungry Spot) will be the condition parameter, and ((bark Spot) (wag-tail Spot)) will be the body parameter. When the macro expands, it returns a list (source code) that looks just like the (if...(progn...)) that we wrote by hand.

Lisp also has a handy way of dealing with the variable naming problem we saw with our C macro, where we hard-coded the variable name i. You can use the function gensym to create a variable name that is guaranteed to be unique within its lexical scope. You can assign the result of gensym to another variable, and use that other variable in your macro. It will be replaced with the actual variable name when the macro expands. (Nifty, no?)

Finally, Lisp solves the multiple-pass code-generation problem. Macros can expand into macros, which can themselves expand into macros, for as long as you want. In fact, macros are so integral to the language that this seems to be pretty common. Macros can call other macros or other functions. Some macros become so large and complex that an entire package might be dedicated to their supporting functions.

The great thing about macros is that they let you actually change the language (to an extent), something that metaprogramming cannot accomplish. Let’s say you don’t care for Lisp’s clunky looping mechanisms. You’ve seen list comprehensions in Python or Erlang and want to bring some of that elegance to Lisp.

Too late – somebody else already did. The loop macro is incredibly powerful and complex. Some old-school Lispers hate it, because it really does change the language, which, for Lispers, is a way of saying it’s not using as many parentheses as it should. The following example is taken from Practical Common Lisp:

(loop for i in *random*
   counting (evenp i) into evens
   counting (oddp i) into odds
   summing i into total
   maximizing i into max
   minimizing i into min
   finally (return (list min max total evens odds)))

The result of that expression is a list containing the minimum, maximum, and sum of the numbers in the *random* global variable, along with counts of how many of them were even and how many were odd.

Try doing that in C.

Written by Brandon Byars

February 12, 2008 at 2:54 pm

Posted in Languages, Lisp


Managing Config Files

There’s a discussion on the altdotnet Yahoo group about managing configuration files. How do you manage updating multiple configuration files to change the appropriate values when deploying to a different environment?

The solution I hit on was to create a custom MSBuild task. When called from our build script, it looks something like this:

<ItemGroup>
    <ConfigFiles Include="$(DeployDir)/**/*.exe.config"/>
    <ConfigFiles Include="$(DeployDir)/**/*.dll.config"/>
    <ConfigFiles Include="$(DeployDir)/**/web.config"/>
</ItemGroup>

<ItemGroup>
    <HibernateFiles Include="$(DeployDir)/**/hibernate.cfg.xml"/>
</ItemGroup>

<ItemGroup>
    <Log4NetFiles Include="$(DeployDir)/**/log4net.config"/>
</ItemGroup>

<Target Name="UpdateConfig">
    <UpdateConfig
        ConfigFiles="@(ConfigFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\config.xml"
        Environment="$(Environment)" />
    <UpdateConfig
        ConfigFiles="@(HibernateFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\hibernate_config.xml"
        Environment="$(Environment)"
        NamespaceUri="urn:nhibernate-configuration-2.2"
        NamespacePrefix="hbm" />
    <UpdateConfig
        ConfigFiles="@(Log4NetFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\log4net_config.xml"
        Environment="$(Environment)" />
</Target>

Notice that each call to UpdateConfig takes the list of config files that will be changed and a config mapping file. That mapping file is what is read to update the config files given the environment. Here’s an example of what the mapping file looks like:


<configOptions>
    <add xpath="configuration/appSettings/add[@key='dbserver']">
        <staging>
            <add key="dbserver" value="stagingServer"/>
        </staging>
        <production>
            <add key="dbserver" value="productionServer"/>
        </production>
    </add>
</configOptions>

Each config file is scanned looking for each XPath expression in the mapping file. On each match, the entire node (and all its child nodes) of the original config file is replaced with the node under the appropriate environment tag in the mapping file. It’s a bit verbose, but simple enough, and it supports as many environments as you want to have.

The MSBuild task itself is fairly simple, delegating most of its work to a separate object called XmlMerger:

private void MergeChanges()
{
    foreach (ITaskItem item in ConfigFiles)
    {
        string configFile = item.ItemSpec;
        XmlDocument configFileDoc = LoadXmlDocument(configFile);
        XmlDocument configMappingDoc = LoadXmlDocument(configMappingFile);

        XmlMerger merger = new XmlMerger(configFileDoc, configMappingDoc);
        if (!string.IsNullOrEmpty(NamespaceUri) && !string.IsNullOrEmpty(NamespacePrefix))
            merger.AddNamespace(NamespacePrefix, NamespaceUri);

        merger.Merge(environment.ToLower());
        configFileDoc.Save(configFile);
    }
}
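
For reference, the class around MergeChanges is just a standard MSBuild Task subclass, something along these lines (a sketch, with property names inferred from the build script above; the LoadXmlDocument helper and error handling are omitted):

public class UpdateConfig : Task
{
    private ITaskItem[] configFiles;
    private string configMappingFile;
    private string environment;
    private string namespaceUri;
    private string namespacePrefix;

    [Required]
    public ITaskItem[] ConfigFiles
    {
        get { return configFiles; }
        set { configFiles = value; }
    }

    [Required]
    public string ConfigMappingFile
    {
        get { return configMappingFile; }
        set { configMappingFile = value; }
    }

    [Required]
    public string Environment
    {
        get { return environment; }
        set { environment = value; }
    }

    public string NamespaceUri
    {
        get { return namespaceUri; }
        set { namespaceUri = value; }
    }

    public string NamespacePrefix
    {
        get { return namespacePrefix; }
        set { namespacePrefix = value; }
    }

    public override bool Execute()
    {
        MergeChanges();
        return true;
    }
}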

XmlMerger just finds the nodes that need updating and replaces them from the mapping file. Notice that it also accepts namespace information (see the NHibernate example in the build script snippet above), which is occasionally needed:

public class XmlMerger
{
    private readonly XmlDocument configFile;
    private readonly XmlDocument configMapping;
    private readonly XmlNamespaceManager namespaces;

    public XmlMerger(XmlDocument configFile, XmlDocument configMapping)
    {
        this.configFile = configFile;
        this.configMapping = configMapping;
        namespaces = new XmlNamespaceManager(configFile.NameTable);
    }

    public void AddNamespace(string prefix, string uri)
    {
        namespaces.AddNamespace(prefix, uri);
    }

    public void Merge(string environment)
    {
        foreach (XmlNode mappingNode in configMapping.SelectNodes("/configOptions/add"))
        {
            string xpath = mappingNode.Attributes["xpath"].Value;
            XmlNode replacementNode = FindNode(mappingNode, environment).FirstChild;
            XmlNode nodeToReplace = configFile.SelectSingleNode(xpath, namespaces);
            if (nodeToReplace != null)
                ReplaceNode(nodeToReplace, replacementNode);
        }
    }

    private void ReplaceNode(XmlNode nodeToReplace, XmlNode replacementNode)
    {
        nodeToReplace.InnerXml = replacementNode.InnerXml;

        // Remove attributes not in nodeToReplace.  There's probably a cleaner solution,
        // but I didn't see it.
        for (int i = nodeToReplace.Attributes.Count - 1; i >= 0; i--)
        {
            if (replacementNode.Attributes[nodeToReplace.Attributes[i].Name] == null)
                nodeToReplace.Attributes.RemoveAt(i);
        }

        foreach (XmlAttribute attribute in replacementNode.Attributes)
        {
            if (nodeToReplace.Attributes[attribute.Name] == null)
                nodeToReplace.Attributes.Append(configFile.CreateAttribute(attribute.Name));

            nodeToReplace.Attributes[attribute.Name].Value = attribute.Value;
        }
    }

    private XmlNode FindNode(XmlNode node, string xpath)
    {
        XmlNode result = node.SelectSingleNode(xpath);
        if (result == null)
            throw new ApplicationException("Missing node for " + xpath);
        return result;
    }
}

That's it. Now the whole process is hands-free, so long as you remember to update the mapping file when needed. The config files we put into subversion are set to work in the development environment (everything is localhost), so anybody can checkout our code and start working without having to tweak a bunch of settings first. The deployment process calls our build script, which ensures that the appropriate config values get changed.

Written by Brandon Byars

January 10, 2008 at 9:39 pm

Posted in .NET, Configuration Management


Using Closures to Implement Undo

While it seems to be fairly common knowledge in the functional programming world, I don’t think most object-oriented developers realize that closures and objects can be used to implement each other. Ken Dickey showed how it can be done rather easily in Scheme, complete with multiple inheritance and dynamic dispatch.

That’s not to say, of course, that all OO programmers should drop their object hats and run over to the world of functional programming. There is room for multiple paradigms.

Take the well-known Command pattern, often advertised as having two advantages over a more traditional API:

  1. Commands can be easily decorated, giving you some measure of aspect-oriented programming. CruiseControl.NET uses a Command-pattern dispatch for the web interface, and decorates each command with error-handling, etc, providing a nice separation of concerns.
  2. Commands can give you easy undo functionality. Rails migrations are a good example.

Recently, I had to retrofit Undo onto an existing legacy (and ugly) codebase, and I was able to do it quite elegantly with closures instead of commands.

What are closures?

Briefly (since better descriptions lie elsewhere), a closure is a procedure that “remembers” its bindings to free variables, where free variables are those variables that lie outside the procedure itself. The name comes from LISP, where the procedure (or “lambda”, as LISPers call it) was said to “close over” its lexical environment. In C# terms, a closure is simply an anonymous delegate with a reference to a free variable, as in:

string mark = "i wuz here";
DoSomething(delegate { Console.WriteLine(mark); });

Notice that the anonymous delegate references the variable mark. When the delegate is actually called, it will be within a lexical scope that does not include mark. To make that work, the compiler wraps the closure in a class that remembers both the code to execute and any variable bindings (remember – objects and closures can be interchanged).
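
Conceptually, the compiler rewrites the snippet above into something like the following (a sketch only; the real generated class and its names are compiler-specific and never written by hand):

// Roughly what the compiler generates behind the scenes:
class DisplayClass
{
    public string mark;               // the captured free variable

    public void AnonymousMethod()     // the body of the anonymous delegate
    {
        Console.WriteLine(mark);
    }
}

// ...and the call site becomes roughly:
// DisplayClass closure = new DisplayClass();
// closure.mark = "i wuz here";
// DoSomething(closure.AnonymousMethod);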

As always, Wikipedia has a nice write-up. A C#-specific description can be found here.

What does a closure-based Undo look like?

The legacy code I needed to update maintained the entire object state serialized in XML. This was terrible for a number of reasons, but it did have the advantage of making undo easy in principle: just swap the new XML back out for the XML as it was before the previous API call. I wanted something like this:

public delegate void Action();

public void AddItem(OrderItemStruct itemInfo)
{
    string originalXml = orderXml;
    Action todo = delegate
    {
        OrderApi.AddOrderItem(currentSession, ref itemInfo,
            ref orderXml, out errorCode, out errorMessage);
    };
    Action undo = delegate { orderXml = originalXml; };
    processor.Do(todo, undo);
}

In actual practice, the undo part of that could be wrapped up in some boilerplate code:

public void AddItem(OrderItemStruct itemInfo)
{
    CallApiMethod(delegate
    {
        OrderApi.AddOrderItem(currentSession, ref itemInfo,
            ref orderXml, out errorCode, out errorMessage);
    });
}

private void CallApiMethod(Action method)
{
    string originalXml = orderXml;
    processor.Do(method, delegate { orderXml = originalXml; });
    // error handling, etc…
}

Notice that the undo procedure is referencing originalXml. That variable will be saved with the closure, making for a rather lightweight syntax, even with the static typing.

Getting Started

Implementing a single undo is really quite easy. Here’s a simple test fixture for it:

[Test]
public void SingleUndo()
{
    CommandProcessor processor = new CommandProcessor(5);
    int testValue = 0;
    processor.Do(delegate { testValue++; },
        delegate { testValue--; });

    processor.Undo();

    Assert.AreEqual(0, testValue);
}

…and the code to make it work:

public delegate void Action();

public class CommandProcessor
{
    private CircularBuffer undoBuffer;

    public CommandProcessor(int capacity)
    {
        undoBuffer = new CircularBuffer(capacity);
    }

    public void Do(Action doAction, Action undoAction)
    {
        doAction();
        undoBuffer.Add(undoAction);
    }

    public void Undo()
    {
        if (!undoBuffer.IsEmpty)
        {
            Action action = undoBuffer.Pop();
            action();
        }
    }
}

I won’t go into how CircularBuffer works, but it’s such a simple data structure that you can figure it out.
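
For completeness, here’s one way it might look (my own sketch, not the original class, and assuming the usual System.Collections.Generic using; all it needs is Add, Pop, IsEmpty, and a Clear we’ll use later, dropping the oldest entry once it hits capacity):

public class CircularBuffer
{
    private readonly List<Action> items = new List<Action>();
    private readonly int capacity;

    public CircularBuffer(int capacity)
    {
        this.capacity = capacity;
    }

    public bool IsEmpty
    {
        get { return items.Count == 0; }
    }

    public void Add(Action action)
    {
        // Forget the oldest entry once we're full; undo history is bounded.
        if (items.Count == capacity)
            items.RemoveAt(0);
        items.Add(action);
    }

    public Action Pop()
    {
        // Last in, first out: the most recent action comes off first.
        Action last = items[items.Count - 1];
        items.RemoveAt(items.Count - 1);
        return last;
    }

    public void Clear()
    {
        items.Clear();
    }
}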

Naturally, with undo, we’ll want redo:

[Test]
public void SingleRedo()
{
    CommandProcessor processor = new CommandProcessor(5);
    int testValue = 0;
    processor.Do(delegate { testValue++; }, delegate { testValue--; });
    processor.Undo();

    processor.Redo();

    Assert.AreEqual(1, testValue);
}

Conceptually, this should be fairly easy:

public void Undo()
{
    PopAndDo(undoBuffer);
}

public void Redo()
{
    PopAndDo(redoBuffer);
}

private void PopAndDo(CircularBuffer buffer)
{
    if (!buffer.IsEmpty)
    {
        Action action = buffer.Pop();
        action();
    }
}

However, we’re not actually adding anything to the redo buffer yet. What we need to do is rather interesting—we don’t want to add to the redo buffer until Undo is called. Closures to the rescue:

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(delegate
    {
        undoAction();
        redoBuffer.Add(doAction);
    });
}

But let’s say I undo, redo, and then want to undo and redo again. That won’t work as written, and making it work is starting to get pretty ugly:

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(delegate
    {
        undoAction();
        redoBuffer.Add(delegate
        {
            doAction();
            undoBuffer.Add(delegate
            {
                undoAction();
                redoBuffer.Add(doAction);
            });
        });
    });
}

It’s becoming apparent that what we really want is infinite recursion, lazily-evaluated. How ‘bout a closure?

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(DecoratedAction(undoAction, undoBuffer, doAction, redoBuffer));
}

private Action DecoratedAction(Action undoAction, CircularBuffer undoBuffer,
        Action redoAction, CircularBuffer redoBuffer)
{
    return delegate
    {
        undoAction();
        redoBuffer.Add(DecoratedAction(
            redoAction, redoBuffer, undoAction, undoBuffer));
    };
}

Now we see how easy it is to decorate closures—remember that the ability to decorate commands is an oft-quoted advantage of them. However, closures provide a more lightweight approach to programming than commands.

The elegance of this approach is hard to deny. All it takes is getting over the conceptual hump that functions are just data. Think about it—we just added a function that took two functions as arguments and returned another function.

What also was apparent to me is how much TDD helped me get to this point. It may not be obvious from the few snippets I’ve shown here, but building up to the DecoratedAction abstraction was a very satisfying experience.

For reference, here’s the full CommandProcessor class. The bits I haven’t shown, CanUndo and CanRedo, along with an event that fires when either one changes, are there so that we know when to enable or disable a menu option in a UI.

public class CommandProcessor
{
    public event EventHandler UndoAbilityChanged;

    private CircularBuffer undoBuffer;
    private CircularBuffer redoBuffer;

    public CommandProcessor(int capacity)
    {
        undoBuffer = new CircularBuffer(capacity);
        redoBuffer = new CircularBuffer(capacity);
    }

    public void Do(Action doAction, Action undoAction)
    {
        FireEventIfChanged(delegate
        {
            doAction();

            // Redo only makes sense if we’re redoing a clean undo stack.
            // Once they do something else, redo would corrupt the state.
            redoBuffer.Clear();

            undoBuffer.Add(DecoratedAction(
                undoAction, undoBuffer, doAction, redoBuffer));
        });
    }

    private Action DecoratedAction(Action undoAction, CircularBuffer undoBuffer,
        Action redoAction, CircularBuffer redoBuffer)
    {
        return delegate
        {
            undoAction();
            redoBuffer.Add(DecoratedAction(
                redoAction, redoBuffer, undoAction, undoBuffer));
        };
    }

    public void Undo()
    {
        FireEventIfChanged(delegate { PopAndDo(undoBuffer); });
    }

    public void Redo()
    {
        FireEventIfChanged(delegate { PopAndDo(redoBuffer); });
    }

    public void Clear()
    {
        undoBuffer.Clear();
        redoBuffer.Clear();
    }

    public bool CanUndo
    {
        get { return !undoBuffer.IsEmpty; }
    }

    public bool CanRedo
    {
        get { return !redoBuffer.IsEmpty; }
    }

    private void PopAndDo(CircularBuffer buffer)
    {
        if (!buffer.IsEmpty)
        {
            Action action = buffer.Pop();
            action();
        }
    }

    private void FireEventIfChanged(Action action)
    {
        bool originalCanUndo = CanUndo;
        bool originalCanRedo = CanRedo;

        action();

        if (originalCanUndo != CanUndo || originalCanRedo != CanRedo)
            OnUndoAbilityChanged(EventArgs.Empty);
    }

    protected void OnUndoAbilityChanged(EventArgs e)
    {
        EventUtils.FireEvent(this, e, UndoAbilityChanged);
    }
}

Written by Brandon Byars

November 5, 2007 at 11:26 pm

Posted in .NET, Design Patterns, TDD


Configuring ActiveRecord to work with SQL Server 2005

As much as possible, I like a zero-install configuration. In other words, I want to simply checkout a codebase, run an automated build process, and start working. Ideally, I’d like to be able to do that on a clean machine.

It doesn’t always work, of course. For instance, even though most of our production code is written in .NET, we use Ruby extensively for automation. Installing Ruby is one of those dependencies that we live with. But installing Ruby isn’t enough; we also need Rails (or at least ActiveRecord) for some data management scripts we have, Watir, and some fixes for ActiveRecord to work with SQL Server.

All of that can be done fairly easily by somebody who knows what they’re doing, but new developers often don’t know what they’re doing, and I strive to be dispensable. We wrote a script that configures Ruby and the necessary libraries to work for our development environment.

First, we needed to install the necessary gems. This is quite easy to do on the command line, but it took me a little digging before I figured out how to do it in code:

needed_gems = ["rails"]

require "rubygems"
Gem.manage_gems

needed_gems.each do |gem|
  puts "Installing gem #{gem}..."
  Gem::GemRunner.new.run ["install", gem, "--include-dependencies"]
end

SQL Server requires an ADO adapter that doesn’t ship with Ruby. You can read about it here. All that’s needed is to download the ADO.rb file (which we have versioned next to our setup script) and copy it to the right place, creating the directory if needed:

setup_dir = File.dirname(__FILE__)
ruby_dir = "C:/ruby/lib/ruby"
ruby_version = 1.8

# The ADO adapter needed for using dbi (the standard db access)
# with SQL Server, which does not come with ruby by default.
# See http://wiki.rubyonrails.org/rails/pages/HowtoConnectToMicrosoftSQLServer.

puts "creating ADO adapter..."
ado_dir = "#{ruby_dir}/site_ruby/#{ruby_version}/DBD/ADO"
system "if not exist #{ado_dir} mkdir #{ado_dir}"
system "copy /Y #{setup_dir}/ADO.rb #{ado_dir}"

Finally, we use SQL Server 2005, and we want to use Windows Authentication for all of our Ruby scripts. Neither SQL Server 2005 nor Windows Authentication is supported by rails out of the box. The problem, described on the SQL Server wiki for rails, is the connection string rails builds. At first, we were taking the suggestions of some of the comments on the wiki and changing the sqlserver_adapter.rb file that ships with rails. This obviously isn’t ideal, so now we monkey-patch it in our code that accesses the database:

module DBI
  # We have to monkey patch this because the SQL Server adapter that comes
  # with rails (active_record\connection_adapters\sqlserver_adapter.rb)
  # doesn't work with SQL Server 2005 or with Integrated Security yet.
  class << self
    alias_method :original_connect, :connect

    def connect(driver_url, username, password)
      # Change to SQL 2005
      driver_url.sub! "SQLOLEDB", "SQLNCLI"

      # Change to Windows Authentication
      driver_url.sub! /User Id=[^;]*;Password=[^;]*;/, "Integrated Security=SSPI;"

      original_connect(driver_url, username, password)
    end
  end
end

And that’s it. You still can’t checkout the codebase and start working on a clean machine, but it’s not bad. Install Ruby, run setup.rb. All database code has been patched to deal with our environment.

Written by Brandon Byars

October 23, 2007 at 7:13 pm