A Day In The Lyf

…the lyf so short, the craft so longe to lerne

Archive for the ‘.NET’ Category

RestMvc – RESTful Goodies for ASP.NET MVC

with 4 comments

Last summer, I found myself building a RESTful ASP.NET MVC service that had an HTML admin UI. Oftentimes, the resource that was being edited in HTML was the same resource that needed to be sent out in XML via the service, which mapped nicely to the REST ‘multiple representations per resource’ philosophy.

There are obviously some very nice RESTful libraries for ASP.NET MVC, but none quite met my needs. Simply Restful Routing, which comes with MVC Contrib, takes a Rails-inspired approach of handing you a pre-built set of routes that more or less match a RESTful contract for a resource. While obviously convenient, that’s never been my preferred way to manage routing. It adds a bunch of routes that you probably have no intention of implementing. It keeps the routes centralized, which never seemed to read as well to me as the way Sinatra keeps the routing configuration next to the block that handles requests to that route.

Additionally, one of the problems I encountered with other routing libraries like Simply Restful is that they define the IRouteHandler internally, which removes your ability to add any custom hooks into the routing process. I needed just such a hook to add content negotiation. I also wanted some RESTful goodies, like responding with a 405 instead of a 404 status code if we did route to a resource (identified by a URI template), but not to a requested HTTP verb on that resource. I wanted the library to automatically deal with HEAD and OPTIONS requests. In the end, I created my own open-source library called RestMvc which provides such goodies with Sinatra-like routing and content negotiation.

Routing

public class OrdersController : Controller
{
    [Get("/orders")]
    public ActionResult Index() { ... }

    [Post("/orders"]
    public ActionResult Create() { ... }

    [Get("/orders/{id}.format", "/orders/{id}")]
    public ActionResult Show(string id) { ... }

    [Put("/orders/{id}")]
    public ActionResult Edit(string id) { ... }

    [Delete("/orders/{id}")]
    public ActionResult Destroy(string id) { ... }
}

Adding the routes for the attributes above is done in Global.asax.cs, in a couple of different ways:


RouteTable.Routes.Map();
// or RouteTable.Routes.MapAssembly(Assembly.GetExecutingAssembly());

That is, in effect, the entire routing API of RestMvc. The Map and MapAssembly extension methods will do the following:

  • Create the routes defined by the HTTP methods and URI templates in the attributes. Even though System.Web.Routing does not allow you to prefix URI templates with either / or ~/, I find allowing those prefixes can enhance readability, and thus they are allowed.
  • Route HEAD and OPTIONS requests for the two URI templates (“orders” and “orders/{id}”) to a method within RestMvc capable of handling those methods intelligently.
  • Route PUT and DELETE for /orders, and POST for /orders/{id}, to a method within RestMvc that knows to return a 405 HTTP status code (Method Not Allowed) with an appropriate Allow header. This method, and the ones that handle HEAD and OPTIONS, work without any subclassing of the Controller shown above. However, if you need to customize their behavior — for example, to add a body to OPTIONS — you can subclass RestfulController and override the appropriate method.
  • Add routes for tunnelling PUT and DELETE through POST for HTML browser support. RestMvc takes the Rails approach of looking for a hidden form field called _method set to either PUT or DELETE (a conceptual sketch of the override check follows this list). If you don’t want the default behavior, or you do want the tunnelling but with a different form field, you can call ResourceMapper directly instead of accepting the defaults that the Map and MapAssembly extension methods provide.
  • Notice the optional format parameter on the Get attribute above the Show method. Routes with an extension are routed such that the extension gets passed as the format parameter, if the resource supports multiple representations (e.g. /orders/1.xml routes to Show with a format of xml). The ordering of the URI templates in the Get attribute is important. Had I reversed the order, /orders/1.xml would have matched with an id of “1.xml” and an empty format.
  • The last point is a convenient way to handle multiple formats for a resource. Since it’s in the URL, it can be bookmarked and emailed, or tested through a browser, with the same representation regardless of the HTTP headers. Even if content negotiation is used, it allows you to bypass the standard negotiation process. Note that having different URLs for different representations of the same resource is generally frowned upon by REST purists. RestMvc does not automatically provide these routes for you, but lets you add them if you want.
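
As referenced in the tunnelling bullet above, the override check is conceptually simple. The sketch below only illustrates the idea; it is not RestMvc’s actual code, and the helper class and method names are mine:

using System;
using System.Web;

public static class MethodTunnelling
{
    // Returns the HTTP method the request should be dispatched as, honouring
    // the Rails-style _method form field on POSTs.
    public static string EffectiveMethod(HttpRequestBase request)
    {
        if (request.HttpMethod == "POST")
        {
            var overridden = request.Form["_method"];
            if (string.Equals(overridden, "PUT", StringComparison.OrdinalIgnoreCase)
                || string.Equals(overridden, "DELETE", StringComparison.OrdinalIgnoreCase))
                return overridden.ToUpperInvariant();
        }
        return request.HttpMethod;
    }
}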

Content Negotiation

Content negotiation is provided as a decorator to the standard RouteHandler. Doing it this way allows you to compose additional custom behavior that needs access to the IRouteHandler.

// In Global.asax.cs
var map = new MediaTypeFormatMap();
map.Add(MediaType.Html, "html");
map.Add(MediaType.Xhtml, "html");
map.Add(MediaType.Xml, "xml");

var connegRouter = new ContentNegotiationRouteProxy(new MvcRouteHandler(), map);

RouteTable.Routes.MapAssembly(Assembly.GetExecutingAssembly(), connegRouter);

In the absence of a route URI template specifying the format explicitly, the connegRouter will examine the Accept request header and pick the first media type supported in the map. Wildcard matches are supported (e.g. text/* matches text/html). The format parameter will be set for the route, based on the value added in the MediaTypeFormatMap.

The content negotiation is quite simple at the moment. The q parameter in the Accept header is completely ignored. By default, it tries to abide by the Accept header prioritization inferred from the order of the MIME types in the header. However, you can change it to allow the server ordering, as defined by the order MIME types are added to the MediaTypeFormatMap, to take priority. This was added to work around what I consider to be a bug in Google Chrome – despite being unable to natively render XML, it prioritizes XML over HTML in its Accept header. The library does not currently support sending back a 406 (Not Acceptable) HTTP status code when no acceptable MIME type is sent in the Accept header.
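
To make the decorator idea concrete, here is a simplified sketch of what a content-negotiating proxy over IRouteHandler could look like. It is illustrative only, not RestMvc’s implementation, and the MediaTypeFormatMap lookup methods (SupportsMediaType and FormatFor) are names I made up for the example:

using System.Linq;
using System.Web;
using System.Web.Routing;

public class NaiveConnegRouteProxy : IRouteHandler
{
    private readonly IRouteHandler inner;
    private readonly MediaTypeFormatMap map;

    public NaiveConnegRouteProxy(IRouteHandler inner, MediaTypeFormatMap map)
    {
        this.inner = inner;
        this.map = map;
    }

    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        var routeValues = requestContext.RouteData.Values;

        // Only negotiate when the URI template didn't already capture a format.
        object format;
        bool hasFormat = routeValues.TryGetValue("format", out format)
            && !string.IsNullOrEmpty(format as string);

        if (!hasFormat)
        {
            var acceptTypes = requestContext.HttpContext.Request.AcceptTypes ?? new string[0];
            var match = acceptTypes.FirstOrDefault(mediaType => map.SupportsMediaType(mediaType)); // assumed method
            if (match != null)
                routeValues["format"] = map.FormatFor(match); // assumed method
        }

        return inner.GetHttpHandler(requestContext); // delegate to the wrapped handler
    }
}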

Next Steps

I haven’t worked on RestMvc in a few months, largely because I shifted focus at work and haven’t done any .NET programming in a while. However, I had planned on adding some automatic etagging and making the content negotiation more robust.

Contributors welcome! The code can be found on github.

Written by Brandon Byars

January 6, 2011 at 5:02 pm

Posted in .NET


Funcletize This!

I was recently involved in troubleshooting a bug in our staging environment. We had some code that worked in every environment we had put it in, except staging. Once there, performing the equivalent of an update on a field (using LINQ in C#) would greet you with a ChangeConflictException.

I’m embarrassed by how long it took to figure out what was wrong. It was obviously an optimistic locking problem, and I even mentioned, when I first saw the exception, that it was because the UPDATE statement wasn’t updating anything. Optimistic locking works by adding extra fields to the WHERE clause to make sure that the data hasn’t changed since you loaded it. If one of those fields had changed, the WHERE clause wouldn’t match anything, and the O/RM would assume that somebody had changed the data behind your back and throw an exception.

It turns out that failing to match any rows with the given filter isn’t the only way that LINQ will think no rows were updated; it’s also dependent on the NOCOUNT option in SQL Server. If the database is configured to have NOCOUNT ON, then the number of rows affected by each query won’t be sent back to the client. LINQ interprets this lack of information as 0 rows being updated, and thus throws the ChangeConflictException.
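
To see the signal LINQ to SQL relies on in isolation, the following sketch (the table name and connection string are invented) shows how the rows-affected count disappears under NOCOUNT: ExecuteNonQuery returns -1 instead of the real count, which is exactly what makes LINQ conclude the update touched nothing.

using System;
using System.Data.SqlClient;

class NoCountDemo
{
    static void Main()
    {
        // Hypothetical connection string and Orders table, purely for illustration.
        using (var conn = new SqlConnection("Server=.;Database=Sandbox;Integrated Security=SSPI"))
        {
            conn.Open();
            foreach (var setting in new[] { "OFF", "ON" })
            {
                var sql = "SET NOCOUNT " + setting +
                          "; UPDATE Orders SET OrderNumber = OrderNumber WHERE OrderId = 1";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    // With NOCOUNT OFF this prints the real row count;
                    // with NOCOUNT ON it prints -1, even though the row was touched.
                    Console.WriteLine("NOCOUNT {0}: rows affected = {1}", setting, cmd.ExecuteNonQuery());
                }
            }
        }
    }
}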

In itself, the bug wasn’t very interesting. What is interesting is what we saw when we opened Reflector to look at the LINQ code around the exception:

IExecuteResult IProvider.Execute(Expression query)
{
    // …
    query = Funcletizer.Funcletize(query);
}

Love it. Uniquifiers, Funcletizers, and Daemonizers of the world unite.

Written by Brandon Byars

October 26, 2008 at 12:13 pm

Posted in .NET, Database


Code Generation and Metaprogramming

I wanted to expand upon an idea that I first talked about in my previous post on Common Lisp. There is a common pattern between syntactic macros, runtime metaprogramming, and static code generation.

Runtime metaprogramming is code-generation. Just like C macros. Just like CL macros.

Ok, that’s a bit of an overstatement. Those three things aren’t really just like each other. But they are definitely related—they all write code that you’d rather not write yourself. Because it’s boring. And repetitious. And ugly.

In general, there are three points at which you can generate code in the development process, although the terminology leaves something to be desired: before compilation, during compilation (or interpretation), and during runtime. In the software development vernacular, only the first option is typically called code-generation (I’ll call it static code generation to avoid confusion). Code generation during compilation goes under the moniker of a ‘syntactic macro,’ and I’m calling runtime code generation ‘runtime metaprogramming.’

Since the “meta” in metaprogramming implies writing code that writes code, all three forms of code generation can be considered metaprogramming, which is why I snuck the “runtime” prefix into the third option above. Just in case you were wondering…

Static Code Generation

Static code generation is the easiest to understand and the weakest of the three options, but it’s often your only option due to language limitations. C macros are an example of static code generation, and it is the only metaprogramming option possible with C out of the box.

To take an example, on a previous project I generated code for lazy loading proxies in C#. A proxy, one of the standard GoF design patterns, sits in between a client and an object and intercepts messages that the client sends to the object. For lazy loading, this means that we can instantiate a proxy in place of a database-loaded object, and the client can use it without even knowing that it’s using a proxy. For performance reasons, the actual database object will only be loaded on first access of the proxy. Here’s a truncated example:

public class OrderProxy : IOrder
{
    private IOrder proxiedOrder = null;
    private long id;
    private bool isLoaded = false;

    public OrderProxy(long id)
    {
        this.id = id;
    }

    private void Load()
    {
        if (!isLoaded)
        {
           proxiedOrder = Find();
           isLoaded = true;
        }
    }

    private IOrder Find()
    {
        return FinderRegistry.OrderFinder.Find(id);
    }

    public string OrderNumber
    {
        get
        {
           Load();
           return proxiedOrder.OrderNumber;
        }
        set
        {
           Load();
           proxiedOrder.OrderNumber = value;
        }
    }

    public DateTime DateSubmitted
    {
        get
        {
           Load();
           return proxiedOrder.DateSubmitted;
        }
    }
}

This code is boring to write and boring to maintain. Every time the interface changes, a very repetitious change has to be made in the proxy. To make it worse, we have to do this for every database entity we’ll want to load (at least those we’re worried about lazy-loading). All I’d really like to say is “make this class implement the appropriate interface, and make it a lazy-loading proxy.” Fortunately, since the proxy is supposed to be a drop-in replacement for any other class implementing the same interface, we can use reflection to query the interface and statically generate the proxy.

There’s an important limitation to generating this code statically. Because we’re doing this before compilation, this approach requires a separated interfaces approach, where the binary containing the interfaces is separate from the assembly we’re generating the proxies for. We’ll have to compile the interfaces, use reflection on the compiled assembly to generate the source code for the proxies, and compile the newly generated source code.

But it’s do-able. Simply load the interface using reflection:

public static Type GetType(string name, string nameSpace, string assemblyFileName)
{
    if (!File.Exists(assemblyFileName))
        throw new IOException("No such file");

    Assembly assembly = Assembly.LoadFile(Path.GetFullPath(assemblyFileName));
    string qualifiedName = string.Format("{0}.{1}", nameSpace, name);
    return assembly.GetType(qualifiedName, true, true);
}

From there it’s pretty trivial to loop through the properties and methods and recreate the source code for them on the proxy, with a call to Load before delegating to the proxied object.
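
Here is a rough sketch of that loop for the properties (simplified from memory; the real generator also handled methods and the class declaration surrounding these members):

using System;
using System.Reflection;
using System.Text;

public static class ProxySourceGenerator
{
    // Emits the property bodies of a lazy-loading proxy for the given interface.
    public static string GenerateProperties(Type interfaceType)
    {
        var builder = new StringBuilder();
        foreach (PropertyInfo property in interfaceType.GetProperties())
        {
            builder.AppendFormat("    public {0} {1}\n    {{\n",
                property.PropertyType.FullName, property.Name);
            if (property.CanRead)
                builder.AppendFormat("        get {{ Load(); return proxiedOrder.{0}; }}\n", property.Name);
            if (property.CanWrite)
                builder.AppendFormat("        set {{ Load(); proxiedOrder.{0} = value; }}\n", property.Name);
            builder.Append("    }\n\n");
        }
        return builder.ToString();
    }
}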

Runtime Metaprogramming

Now it turns out that when I wrote the code generation code above, there weren’t very many mature object-relational mappers in the .NET space. Fortunately, that’s changed, and the code above is no longer necessary. NHibernate will lazy-load for you, using a proxy approach similar to the one I used above, except that NHibernate writes the proxy code at runtime.

The mechanics of how this works are encapsulated in a nice little library called Castle.DynamicProxy. NHibernate uses reflection to read interfaces (or virtual classes) and calls DynamicProxy to generate code at runtime using the Reflection.Emit namespace. In C#, that’s a difficult thing to do, which is why I wouldn’t recommend doing it unless you use DynamicProxy.
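
As a rough sketch of the idea (illustrative only; NHibernate’s real proxies are far more involved, and the DynamicProxy API has varied across versions), a lazy-loading interceptor might look like this:

using Castle.DynamicProxy;

public class LazyLoadInterceptor : IInterceptor
{
    private readonly long id;
    private IOrder proxiedOrder;

    public LazyLoadInterceptor(long id)
    {
        this.id = id;
    }

    public void Intercept(IInvocation invocation)
    {
        // Load the real object on first access, mirroring the hand-written proxy above.
        if (proxiedOrder == null)
            proxiedOrder = FinderRegistry.OrderFinder.Find(id);

        // Forward the intercepted call to the real, now-loaded object.
        invocation.ReturnValue = invocation.Method.Invoke(proxiedOrder, invocation.Arguments);
    }
}

// Usage: generate the proxy class at runtime instead of writing OrderProxy by hand.
// IOrder order = (IOrder)new ProxyGenerator()
//     .CreateInterfaceProxyWithoutTarget(typeof(IOrder), new LazyLoadInterceptor(42));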

This is a much more powerful technique than static code generation. For starters, you no longer need two assemblies, one for the interfaces, and one for the proxies. But the power of runtime metaprogramming extends well beyond saving you a simple .NET assembly.

Ruby makes metaprogramming much easier than C#. The standard Rails object-relational mapper also uses proxies to manage associations, but the metaprogramming applies even to the model classes themselves (which are equivalent to the classes that implement our .NET interfaces). The truncated IOrder implementation above showed 3 properties: Id, OrderNumber, and DateSubmitted. Assuming we have those columns in our orders table in the database, then the following Ruby class completely implements the same interface:

class Order < ActiveRecord::Base
end

At runtime, the ActiveRecord::Base superclass will load the schema of the orders table, and for each column, add a property to the Order class of the same name. Now we really see the power of metaprogramming: it helps us keep our code DRY. If it’s already specified in the database schema, why should we have to specify it in our application code as well?

Syntactic Macros

It probably wouldn’t make much sense to generate lazy-loading proxies at compile time, but that doesn’t mean syntactic macros don’t have their place. Used appropriately, they can DRY up your code in ways that even runtime metaprogramming cannot.

Peter Seibel gives a good example of building a unit test framework in Common Lisp. The idea is that we’d like to assert certain code is true, but also show the asserted code in our report. For example:

pass ... (= (+ 1 2) 3)
pass ... (= (+ 1 2 3) 6)
pass ... (= (+ -1 -3) -4)

The code to make this work, assuming report-result is implemented correctly, looks like this:

(defun test-+ ()
  (report-result (= (+ 1 2) 3) '(= (+ 1 2) 3))
  (report-result (= (+ 1 2 3) 6) '(= (+ 1 2 3) 6))
  (report-result (= (+ -1 -3) -4) '(= (+ -1 -3) -4)))

Notice the ugly duplication in each call to report-result. We have the code that’s actually executed (the first parameter), and the quoted list to report (the second parameter). Runtime metaprogramming could not solve the problem because the first parameter would be evaluated before being passed to report-result. Static code-generation could remove the duplication, but would be ugly. We could DRY up the code at compile time, if only we had access to the abstract syntax tree. Fortunately, in CL, the source code is little more than a textual representation of the AST.

Here’s the macro that Seibel comes up with:

(defmacro check (&body forms)
  `(progn
    ,@(loop for f in forms collect `(report-result ,f ',f))))

Notice how the source code within the list (represented as the loop variable f) is both executed and quoted. The test now becomes much simpler:

(defun test-+ ()
  (check (= (+ 1 2) 3))
  (check (= (+ 1 2 3) 6))
  (check (= (+ -1 -3) -4)))

Summary

Finding ways to eliminate duplication is always A Good Thing. For a long time, if you were programming in a mainstream language, then static code generation was your only option when code generation was needed. Things changed with the advent of reflection-based languages, particularly when Java and C# joined the list of mainstream languages. Even though their metaprogramming capability isn’t as powerful as languages like Smalltalk and Ruby, they at least introduced metaprogramming techniques to the masses.

Of course, Lisp has been around since, say, the 1950’s (I’m not sure how long macros have been around, however). Syntactic macros provide a very powerful way of generating code, even letting you change the language. But until more languages implement them, they will never become as popular as they should be.

Written by Brandon Byars

March 29, 2008 at 6:00 pm

Managing Config Files

There’s a discussion on the altdotnet Yahoo group about managing configuration files. How do you manage updating multiple configuration files to change the appropriate values when deploying to a different environment?

The solution I hit on was to create a custom MSBuild task. When called from our build script, it looks something like this:

<ItemGroup>
    <ConfigFiles Include="$(DeployDir)/**/*.exe.config"/>
    <ConfigFiles Include="$(DeployDir)/**/*.dll.config"/>
    <ConfigFiles Include="$(DeployDir)/**/web.config"/>
</ItemGroup>

<ItemGroup>
    <HibernateFiles Include="$(DeployDir)/**/hibernate.cfg.xml"/>
</ItemGroup>

<ItemGroup>
    <Log4NetFiles Include="$(DeployDir)/**/log4net.config"/>
</ItemGroup>

<Target Name="UpdateConfig">
    <UpdateConfig
        ConfigFiles="@(ConfigFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\config.xml"
        Environment="$(Environment)" />
    <UpdateConfig
        ConfigFiles="@(HibernateFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\hibernate_config.xml"
        Environment="$(Environment)"
        NamespaceUri="urn:nhibernate-configuration-2.2"
        NamespacePrefix="hbm" />
    <UpdateConfig
        ConfigFiles="@(Log4NetFiles)"
        ConfigMappingFile="$(MSBuildProjectDirectory)\config\log4net_config.xml"
        Environment="$(Environment)" />
</Target>

Notice that each call to UpdateConfig takes the list of config files that will be changed and a config mapping file. That mapping file is what is read to update the config files given the environment. Here’s an example of what the mapping file looks like:


<configOptions>
    <add xpath="configuration/appSettings/add[@key='dbserver']">
        <staging>
            <add key="dbserver" value="stagingServer"/>
        </staging>
        <production>
            <add key="dbserver" value="productionServer"/>
        </production>
    </add>
</configOptions>

Each config file is scanned looking for each XPath expression in the mapping file. On each match, the entire node (and all its child nodes) of the original config file is replaced with the node under the appropriate environment tag in the mapping file. It’s a bit verbose, but simple enough, and it supports as many environments as you want to have.

The MSBuild task itself is fairly simple, delegating most of its work to a separate object called XmlMerger:

private void MergeChanges()
{
    foreach (ITaskItem item in ConfigFiles)
    {
        string configFile = item.ItemSpec;
        XmlDocument configFileDoc = LoadXmlDocument(configFile);
        XmlDocument configMappingDoc = LoadXmlDocument(configMappingFile);

        XmlMerger merger = new XmlMerger(configFileDoc, configMappingDoc);
        if (!string.IsNullOrEmpty(NamespaceUri) && !string.IsNullOrEmpty(NamespacePrefix))
            merger.AddNamespace(NamespacePrefix, NamespaceUri);

        merger.Merge(environment.ToLower());
        configFileDoc.Save(configFile);
    }
}
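
For context, here is a hedged sketch of the task shell around MergeChanges (the property names match the build script above; the field/property plumbing and error reporting in the real task are omitted):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class UpdateConfig : Task
{
    [Required]
    public ITaskItem[] ConfigFiles { get; set; }

    [Required]
    public string ConfigMappingFile { get; set; }

    [Required]
    public string Environment { get; set; }

    // Optional namespace support, used by the NHibernate config files.
    public string NamespaceUri { get; set; }
    public string NamespacePrefix { get; set; }

    public override bool Execute()
    {
        MergeChanges();
        return true;
    }

    private void MergeChanges()
    {
        // ...as shown above
    }
}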

XmlMerger just finds the nodes that need updating and replaces them from the mapping file. Notice that it also accepts namespace information (see the NHibernate example in the build script snippet above), which is occasionally needed:

public class XmlMerger
{
    private readonly XmlDocument configFile;
    private readonly XmlDocument configMapping;
    private readonly XmlNamespaceManager namespaces;

    public XmlMerger(XmlDocument configFile, XmlDocument configMapping)
    {
        this.configFile = configFile;
        this.configMapping = configMapping;
        namespaces = new XmlNamespaceManager(configFile.NameTable);
    }

    public void AddNamespace(string prefix, string uri)
    {
        namespaces.AddNamespace(prefix, uri);
    }

    public void Merge(string environment)
    {
        foreach (XmlNode mappingNode in configMapping.SelectNodes("/configOptions/add"))
        {
            string xpath = mappingNode.Attributes["xpath"].Value;
            XmlNode replacementNode = FindNode(mappingNode, environment).FirstChild;
            XmlNode nodeToReplace = configFile.SelectSingleNode(xpath, namespaces);
            if (nodeToReplace != null)
                ReplaceNode(nodeToReplace, replacementNode);
        }
    }

    private void ReplaceNode(XmlNode nodeToReplace, XmlNode replacementNode)
    {
        nodeToReplace.InnerXml = replacementNode.InnerXml;

        // Remove attributes on nodeToReplace that aren't in replacementNode.
        // There's probably a cleaner solution, but I didn't see it.
        for (int i = nodeToReplace.Attributes.Count - 1; i >= 0; i--)
        {
            if (replacementNode.Attributes[nodeToReplace.Attributes[i].Name] == null)
                nodeToReplace.Attributes.RemoveAt(i);
        }

        foreach (XmlAttribute attribute in replacementNode.Attributes)
        {
            if (nodeToReplace.Attributes[attribute.Name] == null)
                nodeToReplace.Attributes.Append(configFile.CreateAttribute(attribute.Name));

            nodeToReplace.Attributes[attribute.Name].Value = attribute.Value;
        }
    }

    private XmlNode FindNode(XmlNode node, string xpath)
    {
        XmlNode result = node.SelectSingleNode(xpath);
        if (result == null)
            throw new ApplicationException("Missing node for " + xpath);
        return result;
    }
}

That's it. Now the whole process is hands-free, so long as you remember to update the mapping file when needed. The config files we put into subversion are set to work in the development environment (everything is localhost), so anybody can checkout our code and start working without having to tweak a bunch of settings first. The deployment process calls our build script, which ensures that the appropriate config values get changed.

Written by Brandon Byars

January 10, 2008 at 9:39 pm

Posted in .NET, Configuration Management


Using Closures to Implement Undo

While it seems to be fairly common knowledge in the functional programming world, I don’t think most object-oriented developers realize that closures and objects can be used to implement each other. Ken Dickey showed how it can be done rather easily in Scheme, complete with multiple inheritance and dynamic dispatch.

That’s not to say, of course, that all OO programmers should drop their object hats and run over to the world of functional programming. There is room for multiple paradigms.

Take the well-known Command pattern, often advertised as having two advantages over a more traditional API:

  1. Commands can be easily decorated, giving you some measure of aspect-oriented programming. CruiseControl.NET uses a Command-pattern dispatch for the web interface, and decorates each command with error-handling, etc, providing a nice separation of concerns.
  2. Commands can give you easy undo functionality. Rails migrations are a good example.

Recently, I had to retrofit Undo onto an existing legacy (and ugly) codebase, and I was able to do it quite elegantly with closures instead of commands.

What are closures?

Briefly (since better descriptions lie elsewhere), a closure is a procedure that “remembers” its bindings to free variables, where free variables are those variables that lie outside the procedure itself. The name comes from LISP, where the procedure (or “lambda”, as LISPers call them) was said to “close over” its lexical environment. In C# terms, a closure is simply an anonymous delegate with a reference to a free variable, as in:

string mark = "i wuz here";
DoSomething(delegate { Console.WriteLine(mark); });

Notice that the anonymous delegate references the variable mark. When the delegate is actually called, it will be within a lexical scope that does not include mark. To make that work, the compiler wraps the closure in a class that remembers both the code to execute and any variable bindings (remember – objects and closures can be interchanged).
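
Conceptually, the compiler rewrites the snippet above into something like the following (the class and member names are invented; the real generated type has a compiler-mangled name):

using System;

class MarkClosure
{
    public string mark;            // the captured free variable

    public void Invoke()           // the body of the anonymous delegate
    {
        Console.WriteLine(mark);
    }
}

// ...and the call site becomes, in effect:
// var closure = new MarkClosure { mark = "i wuz here" };
// DoSomething(closure.Invoke);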

As always, Wikipedia has a nice write-up. A C#-specific description can be found here.

What does a closure-based Undo look like?

The legacy code I needed to update maintained the entire object state serialized in XML. This was terrible for a number of reasons, but it did have the advantage of making undo easy in principle: just swap out the new XML for the XML as it stood before the previous API call. I wanted something like this:

public delegate void Action();

public void AddItem(OrderItemStruct itemInfo)
{
    string originalXml = orderXml;
    Action todo = delegate
    {
        OrderApi.AddOrderItem(currentSession, ref itemInfo,
            ref orderXml, out errorCode, out errorMessage);
    };
    Action undo = delegate { orderXml = originalXml; };
    processor.Do(todo, undo);
}

In actual practice, the undo part of that could be wrapped up in some boilerplate code:

public void AddItem(OrderItemStruct itemInfo)
{
    CallApiMethod(delegate
    {
        OrderApi.AddOrderItem(currentSession, ref itemInfo,
            ref orderXml, out errorCode, out errorMessage);
    });
}

private void CallApiMethod(Action method)
{
    string originalXml = orderXml;
    processor.Do(method, delegate { orderXml = originalXml; });
    // error handling, etc…
}

Notice that the undo procedure is referencing originalXml. That variable will be saved with the closure, making for a rather lightweight syntax, even with the static typing.

Getting Started

Implementing a single undo is really quite easy. Here’s a simple test fixture for it:

[Test]
public void SingleUndo()
{
    CommandProcessor processor = new CommandProcessor(5);
    int testValue = 0;
    processor.Do(delegate { testValue++; },
        delegate { testValue--; });

    processor.Undo();

    Assert.AreEqual(0, testValue);
}

…and the code to make it work:

public delegate void Action();

public class CommandProcessor
{
    private CircularBuffer undoBuffer;

    public CommandProcessor(int capacity)
    {
        undoBuffer = new CircularBuffer(capacity);
    }

    public void Do(Action doAction, Action undoAction)
    {
        doAction();
        undoBuffer.Add(undoAction);
    }

    public void Undo()
    {
        if (!undoBuffer.IsEmpty)
        {
            Action action = undoBuffer.Pop();
            action();
        }
    }
}

I won’t go into how CircularBuffer works, but it’s such a simple data structure that you can figure it out.
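
For the curious, here is a rough sketch of what it might look like (an assumed implementation, since the original isn’t shown; it relies on the Action delegate defined above and behaves like a bounded stack where the oldest undo action silently falls off once capacity is reached):

using System;

public class CircularBuffer
{
    private readonly Action[] items;
    private int count;   // how many actions are currently stored
    private int top;     // index where the next Add will write

    public CircularBuffer(int capacity)
    {
        items = new Action[capacity];
    }

    public bool IsEmpty
    {
        get { return count == 0; }
    }

    public void Add(Action action)
    {
        items[top] = action;
        top = (top + 1) % items.Length;
        if (count < items.Length)
            count++;
    }

    public Action Pop()
    {
        if (IsEmpty)
            throw new InvalidOperationException("Buffer is empty");
        top = (top - 1 + items.Length) % items.Length;
        count--;
        return items[top];
    }

    public void Clear()
    {
        count = 0;
        top = 0;
    }
}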

Naturally, with undo, we’ll want redo:

[Test]
public void SingleRedo()
{
    CommandProcessor processor = new CommandProcessor(5);
    int testValue = 0;
    processor.Do(delegate { testValue++; }, delegate { testValue--; });
    processor.Undo();

    processor.Redo();

    Assert.AreEqual(1, testValue);
}

Conceptually, this should be fairly easy:

public void Undo()
{
    PopAndDo(undoBuffer);
}

public void Redo()
{
    PopAndDo(redoBuffer);
}

private void PopAndDo(CircularBuffer buffer)
{
    if (!buffer.IsEmpty)
    {
        Action action = buffer.Pop();
        action();
    }
}

However, we’re not actually adding anything to the redo buffer yet. What we need to do is rather interesting—we don’t want to add to the redo buffer until Undo is called. Closures to the rescue:

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(delegate
    {
        undoAction();
        redoBuffer.Add(doAction);
    });
}

But let’s say I undo, redo, and then want to undo and redo again. That won’t work as written, and making it work is starting to get pretty ugly:

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(delegate
    {
        undoAction();
        redoBuffer.Add(delegate
        {
            doAction();
            undoBuffer.Add(delegate
            {
                undoAction();
                redoBuffer.Add(doAction);
            });
        });
    });
}

It’s becoming apparent that what we really want is infinite recursion, lazily-evaluated. How ‘bout a closure?

public void Do(Action doAction, Action undoAction)
{
    doAction();
    undoBuffer.Add(DecoratedAction(undoAction, undoBuffer, doAction, redoBuffer));
}

private Action DecoratedAction(Action undoAction, CircularBuffer undoBuffer,
        Action redoAction, CircularBuffer redoBuffer)
{
    return delegate
    {
        undoAction();
        redoBuffer.Add(DecoratedAction(
            redoAction, redoBuffer, undoAction, undoBuffer));
    };
}

Now we see how easy it is to decorate closures—remember that the ability to decorate commands is an oft-quoted advantage of them. However, closures provide a more lightweight approach to programming than commands.

The elegance of this approach is hard to deny. All it takes is getting over the conceptual hump that functions are just data. Think about it—we just added a function that took two functions as arguments and returned another function.

What also was apparent to me is how much TDD helped me get to this point. It may not be obvious from the few snippets I’ve shown here, but building up to the DecoratedAction abstraction was a very satisfying experience.

For reference, here’s the full CommandProcessor class. The bit I haven’t shown, CanUndo and CanRedo, along with an event that fires when either one changes, is there so that we know when to enable or disable a menu option in a UI.

public class CommandProcessor
{
    public event EventHandler UndoAbilityChanged;

    private CircularBuffer undoBuffer;
    private CircularBuffer redoBuffer;

    public CommandProcessor(int capacity)
    {
        undoBuffer = new CircularBuffer(capacity);
        redoBuffer = new CircularBuffer(capacity);
    }

    public void Do(Action doAction, Action undoAction)
    {
        FireEventIfChanged(delegate
        {
            doAction();

            // Redo only makes sense if we’re redoing a clean undo stack.
            // Once they do something else, redo would corrupt the state.
            redoBuffer.Clear();

            undoBuffer.Add(DecoratedAction(
                undoAction, undoBuffer, doAction, redoBuffer));
        });
    }

    private Action DecoratedAction(Action undoAction, CircularBuffer undoBuffer,
        Action redoAction, CircularBuffer redoBuffer)
    {
        return delegate
        {
            undoAction();
            redoBuffer.Add(DecoratedAction(
                redoAction, redoBuffer, undoAction, undoBuffer));
        };
    }

    public void Undo()
    {
        FireEventIfChanged(delegate { PopAndDo(undoBuffer); });
    }

    public void Redo()
    {
        FireEventIfChanged(delegate { PopAndDo(redoBuffer); });
    }

    public void Clear()
    {
        undoBuffer.Clear();
        redoBuffer.Clear();
    }

    public bool CanUndo
    {
        get { return !undoBuffer.IsEmpty; }
    }

    public bool CanRedo
    {
        get { return !redoBuffer.IsEmpty; }
    }

    private void PopAndDo(CircularBuffer buffer)
    {
        if (!buffer.IsEmpty)
        {
            Action action = buffer.Pop();
            action();
        }
    }

    private void FireEventIfChanged(Action action)
    {
        bool originalCanUndo = CanUndo;
        bool originalCanRedo = CanRedo;

        action();

        if (originalCanUndo != CanUndo || originalCanRedo != CanRedo)
            OnUndoAbilityChanged(EventArgs.Empty);
    }

    protected void OnUndoAbilityChanged(EventArgs e)
    {
        EventUtils.FireEvent(this, e, UndoAbilityChanged);
    }
}

Written by Brandon Byars

November 5, 2007 at 11:26 pm

Posted in .NET, Design Patterns, TDD


C# Enum Generation

Ayende recently asked on the ALT.NET mailing list about the various methods developers use to provide lookup values, with the question framed as one between lookup tables and enums. My own preference is to use both, but keep it DRY with code generation.

To demonstrate the idea, I wrote a Ruby script that generates a C# enum file from some metadata. I much prefer Ruby to pure .NET solutions like CodeSmith—I find it easier and more powerful (I do think CodeSmith is excellent if there is no Ruby expertise on the team, however). The full source for this example can be grabbed here.

The idea is simple. I want a straightforward and extensible way to provide metadata for lookup values, following the Ruby Way of convention over configuration. XML is very popular in the .NET world, but the Ruby world views it as overly verbose, and prefers lighter markup languages like YAML. For my purposes, I decided not to mess with markup at all (although I’m still considering switching to YAML—the hash of hashes approach describes what I want well). Here’s some example metadata:

enums = {
  'OrderType' => {},
  'MethodOfPayment' => {:table => 'PaymentMethod'},
  'StateProvince' => {:table => 'StateProvinces',
                      :name_column => 'Abbreviation',
                      :id_column => 'StateProvinceId',
                      :transformer => lambda {|value| value.upcase},
                      :filter => lambda {|value| !value.empty?}}
}

That hash, which is valid Ruby code, describes three enums, which will be named OrderType, MethodOfPayment, and StateProvince. The intention is that, where you’ve followed your database standards, you should usually be able to get by without adding any extra metadata, as shown in the OrderType example. The code generator will get the ids and enum names from the OrderType table (expecting the columns to be named OrderTypeId and Description) and create the enum from those values. As StateProvince shows, the table name and the two column names can be overridden.
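
For illustration, the generated Enums.cs ends up looking something like this (the member names and ids here are invented, and the tool-generated header comment is omitted):

namespace Namespace
{
    public enum OrderType
    {
        Internet = 1,
        Phone = 2,
        InStore = 3
    }

    // ...MethodOfPayment and StateProvince follow
}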

More interestingly, you can both transform and filter the enum names by passing lambdas (which are like anonymous delegates in C#). The ‘StateProvince’ example above will filter out any states that, after cleaning up any illegal characters, equal an empty string, and then it will upper case the name.

We use a pre-build event in our project to build the enum file. However, if you simply overwrite the file every time you build, you may slow down the build process considerably. MSBuild (used by Visual Studio) evidently sees that the timestamp has been updated, so it rebuilds the project, forcing a rebuild of all downstream dependent projects. A better solution is to only overwrite the file if there are changes:

require File.dirname(__FILE__) + '/enum_generator'

gen = EnumGenerator.new('localhost', 'database-name')
source = gen.generate_all('Namespace', enums)

filename = File.join(File.dirname(__FILE__), 'Enums.cs')
if Dir[filename].empty? || source != IO.read(filename)
  File.open(filename, 'w') {|file| file << source}
end

I define the basic templates straight in the EnumGenerator class, but allow them to be swapped out. In theory, the default name column and the default lambda for generating the id column name given the table name (or enum name) could be handled the same way. Below is the EnumGenerator code:

class EnumGenerator
  FILE_TEMPLATE = <<EOT
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool from <%= catalog %> on <%= server %>.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace <%= namespace %>
{
    <%= enums %>
}
EOT

  ENUM_TEMPLATE = <<EOT
public enum <%= enum_name %>
{
<% values.keys.sort.each_with_index do |id, i| -%>
    <%= values[id] %> = <%= id %><%= ',' unless i == values.length - 1 %>
<% end -%>
}

EOT

  # Change the templates by calling these setters
  attr_accessor :enum_template, :file_template

  attr_reader :server, :catalog

  def initialize(server, catalog)
    @server, @catalog = server, catalog
    @enum_template, @file_template = ENUM_TEMPLATE, FILE_TEMPLATE
  end
end

The code generation uses erb, the standard Ruby templating language:

def transform(template, template_binding)
  erb = ERB.new(template, nil, '-')
  erb.result template_binding
end

template_binding describes the variables available to use in the template in much the same way that Castle Monorail’s PropertyBag describes the variables available to the views. The difference is that, because Ruby is dynamic, you don’t have to explicitly add values to the binding. The rest of the code is shown below:

def generate(enum_name, attributes)
  table = attributes[:table] || enum_name
  filter = attributes[:filter] || lambda {|value| true}
  values = enum_values(table, attributes)
  values.delete_if {|key, value| !filter.call(value)}
  transform enum_template, binding
end

def generate_all(namespace, metadata)
  enums = ''
  metadata.keys.sort.each {|enum_name| enums << generate(enum_name, metadata[enum_name])}
  enums = enums.gsub(/\n/m, "\n\t").strip
  transform file_template, binding
end

private
def enum_values(table, attributes)
  sql = get_sql table, attributes
  @dbh ||= DBI.connect("DBI:ADO:Provider=SQLNCLI;server=#{server};database=#{catalog};Integrated Security=SSPI")
  sth = @dbh.execute sql
  values = {}
  sth.each {|row| values[row['Id']] = clean(row['Name'], attributes[:transformer])}
  sth.finish

  values
end

def get_sql(table, attributes)
  id_column = attributes[:id_column] || "#{table}Id"
  name_column = attributes[:name_column] || "Description"
  "SELECT #{id_column} AS Id, #{name_column} AS Name FROM #{table} ORDER BY Id"
end

def clean(enum_value, transformer=nil)
  enum_value = '_' + enum_value if enum_value =~ /^\d/
  enum_value = enum_value.gsub /[^\w]/, ''
  transformer ||= lambda {|value| value}
  transformer.call enum_value
end

Caveat Emptor: I wrote this code from scratch today; it is not the same code we currently use in production. I think it’s better, but if you find a problem with it please let me know.

Written by Brandon Byars

October 21, 2007 at 9:54 pm

Posted in .NET, Code Generation, Ruby


log4net Connection String Blues

We use log4net as our production logger, which has proven to be tremendously flexible. However, one problem I ran into was configuring the AdoNetAppender that logs to the database. It expects the connection string to be defined in the configuration file, which I didn’t want to do since it was already defined in our NHibernate config file.

This proved to be a relatively easy fix (found here):

private void SetConnectionStrings()
{
    Hierarchy hierarchy = LogManager.GetRepository() as Hierarchy;
    if (hierarchy == null)
        return;

    using (UnitOfWork unitOfWork = new UnitOfWork())
    {
        foreach (IAppender appender in hierarchy.GetAppenders())
        {
            AdoNetAppender dbAppender = appender as AdoNetAppender;
            if (dbAppender != null)
            {
                dbAppender.ConnectionString = unitOfWork.ConnectionString;
                dbAppender.ActivateOptions();
            }
        }
    }
}

However, the problem is that log4net whined to standard error about not having the connection string defined. The result was that any console application had its output garbled (including our tests, since some of them used the production logger).

The solution turned out to be going ahead and putting a connection string in the config file, but making it obviously invalid (e.g., “<ignore>”). Then, when the logger is configured, temporarily redirect standard error:

public void ConfigureLogger()
{
    FileInfo file = new FileInfo(ConfigUtils.GetFilePath("log4net.config"));
    TextWriter stdErr = Console.Error;
    Console.SetError(new StreamWriter(new MemoryStream()));
    XmlConfigurator.ConfigureAndWatch(file);
    ServiceRegistry.Logger = new Log4NetLogger();
    Console.SetError(stdErr);
}

Voila.

Written by Brandon Byars

September 9, 2007 at 10:01 pm

Posted in .NET

