A Day In The Lyf

…the lyf so short, the craft so longe to lerne

mountebank v1.2.0 released

leave a comment »

In the past week I released both v1.1.72 and v1.2.0. New features include:

  • post-processing responses using a decorate behavior
  • easier proxy replay using the removeProxies query parameter when saving imposter configuration
  • improved validation during imposter creation
  • support for storing imposters in multiple files when starting mb with the --configfile command line option
  • various bug fixes and doc updates (see the release pages for details)
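For instance, a decorate behavior attaches a JavaScript function that post-processes each response before mountebank sends it. Here is a sketch of an imposter using it; the port and function body are made up for illustration, so see the behaviors documentation for the authoritative format:

```json
{
  "protocol": "http",
  "port": 4545,
  "stubs": [{
    "responses": [{
      "is": { "statusCode": 200, "body": "hello" },
      "_behaviors": {
        "decorate": "function (request, response) { response.headers['X-Decorated'] = 'true'; }"
      }
    }]
  }]
}
```

And to save a replayable configuration with the proxy stubs stripped out, download the imposters with both query parameters set: GET /imposters?replayable=true&removeProxies=true.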

Since the last release, there have been some good articles on testing in microservice environments and the tools involved (with some nice mountebank mentions). Here are a few good ones:

  • A nice infodeck from my colleague Toby Clemson
  • A really good blog post on the types of stubbing and service virtualization
  • A FlowCon presentation from ThoughtWorks’ very own Sam Newman, whose book should hit the press soon.

Written by Brandon Byars

January 3, 2015 at 6:39 pm

Posted in Uncategorized

mountebank v1.1.0 release

leave a comment »


mountebank is happy to provide you with the following changes in v1.1.0. See the new app
at www.mbtest.org.

More Install Options

  • v1.0 only supported installing through npm. mountebank now supports self-contained
    packages on all supported platforms, which do not require node.js installed on the target
    platform. See the install page for details.

New Features

  • A new ?replayable=true query parameter to retrieve all imposters or a single imposter
    without the requests array, matches array, and hypermedia. This supports runtime
    downloads of a configuration to replay later.
  • Mass update and delete capability. mountebank now supports PUT and DELETE verbs
    on /imposters. This is designed to support loading a configuration file of imposters at
    startup.
  • A new _behaviors field available for stubs. _behaviors currently supports adding
    latency to each response.
  • A new --nomock command line option to prevent mountebank from recording requests.
    This prevents long-running processes from leaking memory when mocking is not needed.
  • A new --configfile command line option to load a configuration file at startup for
    creating imposters.
  • The config and logs resources are now exposed over JSON.
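As a sketch of the latency support, the wait behavior takes the delay in milliseconds (the values here are illustrative, not from the release notes):

```json
{
  "protocol": "http",
  "port": 4546,
  "stubs": [{
    "responses": [{
      "is": { "statusCode": 200, "body": "slow reply" },
      "_behaviors": { "wait": 500 }
    }]
  }]
}
```

A later GET /imposters?replayable=true then returns the imposters without the requests array, matches array, or hypermedia, ready to save and reload with --configfile.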

Documentation Improvements

  • Example usages in multiple languages and multiple protocols.
  • An ATOM feed, which will be updated with release notes on every release. Subscribe in
    feedly or your favorite RSS reader.
  • A new glossary to help explain mountebank concepts.
  • Embedded site search, accessible through the search box in the header.

Bug Fixes

Many thanks to Nikos Baxevanis and
Andrew Hadley for help with this release.

Written by Brandon Byars

May 8, 2014 at 10:13 pm

Posted in Uncategorized

mountebank self contained packages

leave a comment »

One of my principal goals with mountebank is to make it cross-platform and as easy to install as possible. Nikos Baxevanis showed the cross-platform nature of mountebank by using F# to verify SMTP calls from the system under test. I like this a lot; I think the .NET community has been underserved by such tools in the past, and mountebank definitely aims to correct that.

I’ve started the process of creating self-contained packages for mountebank to make the installation as transparent as possible. Due to the nature of being a traveling consultant, I often bounce between technology stacks. Nonetheless, I find the process of having to install a JDK to use a testing tool when the rest of my stack is Microsoft-based to be a hassle. I love CSS transpilers, but I also find it taxing to set up ruby just to manage CSS when I’m developing Java. I don’t want mountebank to be a similar hassle, requiring node.js just to use. My goal is to make it as usable out of the box as possible.

The latest version continues to ship as an npm package. However, it also ships with several other options that allow you to simply download and run from a local directory, even without node.js installed, or to use a Linux package manager to install (again without node.js installed). There’s still quite a bit of work yet to do – getting the packages in the primary package repositories, and adding additional packages like homebrew.

What about Windows?

mountebank runs on Windows, but requires npm install -g --production. Without the --production flag, it tries to install some testing dependencies that don’t run on Windows. I definitely aim to follow suit with Windows (or get some help from friendly developers) to get an msi and self-contained zip file for Windows. Please do reach out if you’re interested in helping.

Written by Brandon Byars

April 24, 2014 at 6:32 pm

Posted in Uncategorized

APIs & Documentation – Who’s the Tail and Who’s the Dog?

leave a comment »

API Docs: The Traditionalist View

I’ve written a lot of APIs in my time (think RESTful-type APIs, not library APIs). Oftentimes, clients don’t have access to the code or don’t care to pore through it, so documentation is important. I’ve generally held the following to be somewhat of a maturity model for API documentation:

  1. No docs – call me and I’ll show you how to use it!
  2. Go to the wiki
  3. The docs are on a special page of the web application hosting the service. This is nice because the docs
    are versioned with the app; in fact, they are part of the app.
  4. The docs are auto-generated from the comments, annotations, attributes, or source code

Recent experiences have made me question this maturity model. Indeed, I’m beginning to wonder if deriving the documentation from the code is exactly backwards from the natural order of things. In short, I’ve begun to wonder: Is it better to generate the docs from the code, or test the code through the docs? I’m starting to think that the way we write API documentation based on the code may be a bit of the tail wagging the dog.

Experience 1 – Cracks in the Traditionalist View

On my current project, we’re auto-generating docs using Swagger plugins for Java and using Swagger UI to present them. Swagger – a wire format for exposing docs – is nice. The Java plugins are nice, reading many built-in annotations of the framework we’re using for exposing services. The UI is nice. None of it fits our needs perfectly, but that’s the nature of borrowing code, and, at first, the win of having documentation generated from our code seemed completely worth it.

The belated observation is that, in our noble effort to keep the documentation complete, we’ve made some subtle and insidious design compromises to satisfy the metadata Swagger requires. This isn’t a knock on the Swagger community – the plugins really are very nice. Our Java Swagger plugins use reflection to, for example, describe the structure of the return type of the resource (controller) method representing the response body. Any such technology would have to do the same.

There are a few examples of the design smells we’re noticing, but the big one is that we have an anemic domain model, since our service is essentially just a pass-through to a downstream system that doesn’t have the common courtesy to expose itself over HTTP. Using lists and maps would actually be quite a bit simpler for our scenario. We originally created the domain objects only for serialization and deserialization, which is a relatively simple problem to solve with lists and maps. However, we piggy-backed Swagger on top of those domain objects, and now we’re stuck with them.

If we suddenly switched to lists and maps, any use of reflection would fail to adequately describe the shape of the response object, since simple reflection depends on data known at compile time. To dynamically generate documentation through a tool, we would actually have to call our service at runtime and inspect the lists and maps being returned for the keys and values. This isn’t feasible, so we’re stuck (for now) with an anemic domain model.

Experience 2 – An Alternative Presents Itself

When I was writing mountebank (the tool we’re using for stubbing TCP and HTTP dependencies), I did what I imagine most open source developers do when they think they’re close to the first release. The sequence goes like this:

  1. Write a bunch of crappy documentation by hand (following the third option on the maturity model – inline with the app)
  2. Take one last look at the code to “polish” it
  3. In a fit of passion, write a bit more code, including complete rewrites of core functionality and substantial enhancements
  4. (Roughly one month later) notice how out-of-date the docs are. Write more code to ease the pain.

Eventually I realized I could ignore the docs no more, but in a desperate effort to continue writing code, I rationalized that I should test the docs to prevent them going stale again. So I added markup in the HTML to give me hooks to test, wrote the necessary code to interpret that markup and assert on the entire JSON, post-processed out the ephemeral bits like timestamps, and verified my docs. Simple enough. So then I wrote the next bit of documentation and re-ran the test. That was when something magical happened.

It failed.

Turns out, I had had this bug in the code for quite some time, and never noticed it. That was the case despite a fair bit of automated functional testing which examined substantial bits of the (parsed) JSON structure. I fixed it and carried on writing the docs. In the process, I discovered several more bugs, as well as a few “UX” fixes (e.g. the ordering of fields to make developer comprehension easier when testing over curl). I had not up to this point faithfully tested against the full JSON structure, nor had I previously tested the JSON structure as text instead of some internal representation. I caught more bugs in the app writing documentation than with all the previous tests I had written combined.

I realize some would call this BDD, but it’s a constrained version of BDD, where the docs are really my production API docs, and my experience makes me wonder if this isn’t a better way of writing docs. The docs were, without question, much harder to write, as I painstakingly wrote out example narratives by hand. However, they now serve as the best tests I have in my entire codebase.

I’ve been coding for a long time, and I can’t remember the last time I experienced such a welcome surprise. What I intended to be a routine regression test turned out to completely reshape the way I think about the documentation, at least in mountebank. Based on just one experience, I’m not confident enough to suggest that the traditionalist view of API documentation is wrong. I am, however, hedging my bets.

How to test your docs, or how your docs test your API

The documentation on JavaScript injection describes the most complex scenario in the mountebank docs. The test hook markup is in each <code> tag. The docs in my example show a series of requests and responses surrounded by narrative. In some cases, I leave out the response; in others I use CSS to hide a request or response but keep it in the HTML to make the test possible.

Here’s an example request; the hooks are the data-test attributes:

<code data-test-id="predicate injection example" data-test-step="3" data-test-type="http">
POST /test HTTP/1.1
Host: localhost:4545
Content-Type: application/json


And an example response:

<code data-test-id="predicate injection example" data-test-verify-step="3" data-test-ignore-lines='["^Date"]'>
HTTP/1.1 400 Bad Request
Connection: close
Date: Thu, 09 Jan 2014 02:30:31 GMT
Transfer-Encoding: chunked

I won't say that writing code to parse these data- attributes and execute them correctly was easy, but you can see how it works in the code. I will say that it was definitely worth it.
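To make the mechanics concrete, here is a small Python sketch (not the actual mountebank test harness, which is JavaScript) that extracts the data-test-* hooks from a documentation page:

```python
from html.parser import HTMLParser

class DocTestHookParser(HTMLParser):
    """Collect the data-test-* hooks from <code> tags in a doc page.

    The attribute names (data-test-id, data-test-step,
    data-test-verify-step) come from the example markup above; the real
    harness is more involved.
    """
    def __init__(self):
        super().__init__()
        self.steps = []

    def handle_starttag(self, tag, attrs):
        if tag != "code":
            return
        hooks = {name: value for name, value in attrs if name.startswith("data-test-")}
        if hooks:
            self.steps.append(hooks)

page = '''
<code data-test-id="predicate injection example" data-test-step="3" data-test-type="http">
POST /test HTTP/1.1
</code>
<code data-test-id="predicate injection example" data-test-verify-step="3">
HTTP/1.1 400 Bad Request
</code>
'''

parser = DocTestHookParser()
parser.feed(page)

# Split the hooks into requests to replay and responses to verify
requests = [s for s in parser.steps if "data-test-step" in s]
verifications = [s for s in parser.steps if "data-test-verify-step" in s]
```

Pairing each data-test-step with the matching data-test-verify-step is what lets the harness replay a request and assert on the recorded response, after post-processing out ephemeral lines like Date headers.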

Written by Brandon Byars

April 16, 2014 at 8:14 am

Posted in Uncategorized

Stubbing a Mule TCP connector with mountebank

with one comment

My current project involves integration with a Mule service bus and a number of endpoints responding over a TCP connector.  We have limited control over the downstream service, but depend on it deeply, as our REST service is stateless.  

We like to QA our stories using black-box testing of our API, which means sending an HTTP request to our API and expecting it to make the relevant TCP service call.  For some time now, our biggest testing problem has been manufacturing test data for all scenarios.  Were it an HTTP dependency, stubbing would have been easy, but no TCP stubbing solution existed until recently.

Our most versatile approach was also quite kludgy.  We had created Endpoint classes that encapsulated the calls to all downstream dependencies, and had a parallel set of StubEndpoint classes.  When we deployed our API, we had the ability to specify a different Spring profile that only wired up the StubEndpoint classes. This meant that our stubbing logic was directly housed in our production application.


This is obviously an inelegant solution for our main application, since it now contained a significant amount of code that should never be used in production. Lest you think that having a switch to turn on what should be dead code in production isn’t dangerous, perhaps you should read about Knight Capital, which lost $465 million due to just such a bug.

Perhaps less obviously, this was also a terrible solution for our tests. Let’s say that we were testing an API retrieving travel plans, and we needed to test several different scenarios – plans in the past, cancelled plans, overbooked plans, etc. We might pass a plan number into our API, and then have to code a massive if-else block in our StubEndpoint that switched on the plan number to manufacture the various scenarios. We might, for example, send plan 12345 for a cancelled flight and plan 23456 for an overbooked plan. The data setup was completely opaque to the tests, and the cognitive distance between the test data setup and the test scenario opened the door for some difficult maintenance and ugly interactions between tests.

We made another attempt at stubbing using Byteman, which allows you to inject Java code into a running agent. We would have to spin up the real service alongside a Byteman agent for this to work:


This had a few big problems. First, it required us to actually stand up the real service, which added some tax to our development time when we wanted to isolation test our API. Second, it was very hard to use. Byteman takes a string written in a special syntax to inject Java code at a certain method in the remote application. Writing the string was fiddly at best. Finally, this approach deeply coupled our code to the remote service code. We had to know intimate details of the scope within which our injected code would run to correctly write the remote Java code. In the end, it proved too difficult to use.

mountebank made remote stubbing a breeze. Each test can now set up the test data specific to the scenario it represents. A class level tearDown or setUp can remove existing stub data before each test runs.


Each test now has full control to specify the result of the RPC call, and, unlike the Byteman approach, they can do so directly in Java code. Once we moved the mountebank interaction into a thin plumbing layer, the test code remained as clean as testing in-process with mockito.

Here’s an example test, which validates what happens through our API when we request a cancelled travel plan:

public void shouldSomethingWithCancelledTravelPlan() {
    // This is a much bigger object graph than represented here
    TravelPlan cancelledTravelPlan = new TravelPlan("1234", "2013-02-15").withStatus(CANCELLED);

    // Now we tell mountebank to return the specified object when our app connects to the service port
    setupBinaryStub(cancelledTravelPlan, TRAVEL_PLAN_ENDPOINT_PORT);

    HttpResponse response = callOurAPIAt("http://localhost:8080/api/travelPlans/1234?date=2013-02-15");

    // ... assertions on the response elided ...

    // Stub removal could be moved to a setUp or tearDown.
    // I won't show the code, but it just sends an HTTP DELETE to http://localhost:2525/imposters/{port}
}

The actual stub setup is in the setupBinaryStub method. Let’s take a look at it:

private void setupBinaryStub(Object stub, int port) throws IOException {
    HttpPost createStub = new HttpPost("http://localhost:2525/imposters");
    createStub.setHeader("Content-Type", "application/json");
    String json = "{\n" +
                "  \"protocol\": \"tcp\",\n" +
                "  \"port\": " + port + ",\n" +
                "  \"mode\": \"binary\",\n" +
                "  \"stubs\": [\n" +
                "    { \"responses\": [{ \"is\": { \"data\": \"" + encodedString(stub) + "\" } }] }\n" +
                "  ]\n" +
                "}";

    createStub.setEntity(new StringEntity(json));
    final HttpResponse response = new DefaultHttpClient().execute(createStub);

    if (response.getStatusLine().getStatusCode() != 201) {
        throw new RuntimeException("expected 201 status but was " + response.getStatusLine().getStatusCode());
    }
}
The magic sauce is in the encodedString method. Here we have to figure out how our remote service serializes the object graph, and encode it as a base64 string. Figuring out the serialization mechanism can be the trickiest part of all of this. In our case, a short investigation into the code reveals it’s simply Java serialization:

private String encodedString(Object objectToWrite) throws IOException {
    byte[] data = SerializationUtils.serialize(objectToWrite);
    return new Base64().encodeAsString(data);
}

Now we can manufacture any test scenario we want, and better yet, we can keep the test data setup scoped within each test without any fear of that data interfering with other tests.

Check it out:

Github repository

Heroku site

Written by Brandon Byars

February 10, 2014 at 5:00 am

Multi-protocol remote stubbing with mountebank

leave a comment »

mountebank is a pet project I’ve been working on for a while now to provide multi-protocol and multi-language stubbing.  It does HTTP and HTTPS stubbing, but HTTP and HTTPS stubbing is a solved problem.

What mountebank does that’s really cool is stubbing and mocking for other protocols.  TCP stubbing is way cool because I’m not aware of an equivalent tool currently available.  We spiked this out at my current client to stub an RMI-like protocol.  The test code created the Java object that it wanted the remote server to return, serialized it, and relied on mountebank to do the right thing at the right time.

It also supports SMTP, at least for mock verification.  That way your app under test can connect to a real SMTP server, but that server won’t send any actual email.  It will, however, allow your test to verify the data sent.

This first release of mountebank ships with:

  • over the wire mock verification for http, https, tcp, and smtp
  • over the wire stubbing for http, https, and tcp
  • advanced stubbing predicates and multi-response stubbing for stateful services
  • record and replay proxying for http, https, and tcp
  • javascript injection when the tool doesn’t do what you need

Feedback welcome! 

heroku site: http://www.mbtest.org/

github: https://github.com/bbyars/mountebank

Written by Brandon Byars

February 3, 2014 at 8:25 pm

Posted in Uncategorized

Introducing httpmock

leave a comment »

node.js is simply too cool to ignore. I find the trick to learning any new cool framework is to find a new cool project to write, and use the framework to write it in. httpmock is my cool new project.

I tend to work in large’ish programs of work with lots of teams. One recurring pain point in such an environment is dealing with other teams. Sure, your code is fine, but what happens if you have to integrate with their code? These days, that tends to be through web services, and it’s never pretty. Your team writes integration tests, verifies they work, and commits the changes. Three days later, for no apparent reason, your build goes red. It turns out that other team broke their code again.

httpmock is a network server stubbing tool. I say “network,” primarily because I’ve got my dreaming hat on. At the moment, it’s an HTTP stubbing tool, but I’d like to add other protocols, especially SMTP. httpmock exposes a RESTful API that lets you spin up HTTP servers that you can control, and that helpfully record how they were used so you can make useful test assertions. In other words, httpmock expects you to execute your code full-stack, including making a full TCP handshake across the network. httpmock just lets you control both hands involved in the handshake, and it does so in a way transparent to your code; you just have to point your configuration to the stub servers httpmock creates. I find this quite useful for functional testing.

Don’t get me wrong. Integration tests are important, and you should still have a set of tests that exercise the actual system, but expect some fragility in those tests. If they break, it doesn’t necessarily mean you broke them. Therefore, they should be in a separate build, one that may not have the same “stop the line” implications when it goes red. httpmock, by contrast, should be used for builds that are entirely in control of your team.

When I say that httpmock exposes a RESTful API, I mean it’s a truly RESTful API. There’s only one published URL, and hypermedia drives the interaction from there. This makes it easy to evolve the server while maintaining a stable client-side API. At the moment, I have only a Java binding, but I have intentions for more (and I’d love some help on the issue!). The idea of the language bindings is to present to the developer a friendly API that approximates normal unit-testing mocking frameworks without having to write your tests in a language different from your production stack.


public void showVerificationAPI() {
    StubServer stub = ControlServer.at("http://localhost:3000").setupPort(3001);
    SystemUnderTest yourCode = new SystemUnderTest();
    // ... exercise yourCode so that it calls its HTTP dependency ...
    assertThat(stub, wasCalled("POST", "/endpoint")
                        .withHeader("Content-Type", "text/plain")
                        .withHeader("Accept", "text/plain"));
    // the stub server is spun down by the last call in the method (elided)
}

The code above shows the Java assertions. The first line actually makes two network calls: one to talk to the well-known httpmock URL (in this case, http://localhost:3000) to get the hypermedia. The second one tells httpmock to set up a new server for stubbing purposes, which is then spun down on the last call in the method. For performance reasons, you may want to do some of these administrative tasks once for the test suite, particularly retrieving the initial hypermedia. The wasCalled method also talks to httpmock to perform the verification. The SystemUnderTest should point its web service calls at http://localhost:3001.

public void showStubbingAPI() {
    StubServer stub = ControlServer.at("http://localhost:3000").setupPort(3001);
    stub.on("GET", "/").returns(new Stub().withStatus(200).withBody("BODY"));
    SystemUnderTest yourCode = new SystemUnderTest();
    // ... exercise yourCode against http://localhost:3001 ...
}

Like typical stubbing libraries, httpmock allows you to set up stubs that will be executed if the server receives the expected call.


You need node.js to run the server. You can download httpmock from the incredibly ugly downloads page. Untar the server, and run bin/httpmock start.

This will run the control server on port 3000. You can optionally pass a --port command line argument to change the port and a --pidfile argument to use a non-default pidfile (useful if you’re running multiple instances). If you run the server in a background job, bin/httpmock stop will kill it.

If you’re using npm, you can install httpmock with sudo npm install -g httpmock. Installing it globally allows you to simply say httpmock start on the command line, which feels quite sexy.

The client bindings will simply be a library. For Java, add httpmock.jar to your classpath.


The following are the features I’d like to add to httpmock:

  • Support for multiple protocols, at least for verification if stubbing doesn’t make sense (SMTP, FTP, perhaps even things like printer protocols). At the moment, it appears easy enough to support a plugin mechanism where the same REST API will work for multiple protocols, but I won’t know for sure until getting around to a second protocol.
  • Support for HTML. Right now, there is a JSON-based content type used for all communication, which is great for programmatic manipulation. I think it would also be quite nice to allow QAs to manually set up stubs and perform verifications through a web browser.
  • A more robust stubbing mechanism. It’d be nice, for example, to only conditionally execute stubs if a specific request header was set.
  • More language bindings.

Want to contribute? I’d love the help. Send me a pull request at github.

Written by Brandon Byars

May 30, 2011 at 9:48 pm

Posted in Testing


The Fundamental Theorem of Renting

with 6 comments

I should start by saying that I’m not a videophile. My TV is color, but it’s at least as deep as it is wide, and it’s certainly not high-def.

But I do know this: DVDs suck, and the DVD rental industry is broken.

Trying to improve DVDs by making them higher resolution is like trying to fix ice sculptures by making them prettier. It works great, until summer rolls around.

As far as I can tell, the media industry (which includes both video and audio media) has been moving towards higher resolution at the expense of longevity. LP, in case you didn’t know, stands for “long-playing,” and I can still play some of my grandma’s. You could literally do jumping jacks on eight-tracks and they’d be fine. Tapes might get out of sorts from time to time, but you could fix them by sticking a pencil in the cogs and getting the tape wound correctly again.

Then the music industry came out with CDs, and the digital revolution began. Fast forward a bit, and here we stand.

Luke, I am your, your, your, your, your, your, your

Damn, just when I was getting into it, too. I haven’t been keeping strict tabs on it, largely because I don’t keep strict tabs on anything, but I’d guess that approximately two videos out of every one I rent have a significant scratch, strategically located right at the climax of the movie. Not at the beginning, mind you, because then the previous renter would have failed at maximizing the amount of time you waste. If they put the scratch at the beginning of the movie, you might even still be sober enough to storm back to the movie store and demand another copy. Put the scratch at the end, and most movie-goers will just assume that they saw the good parts anyhow, and forget about it. But climactic scratches leave scars. By the time the scratch actually does manifest itself, you’re expected to suffer through the still-frame slideshow, under the delusional hopes that in just a couple of minutes, the show will resume as normal.

When that fails, naturally, you’re supposed to resign yourself to the fact that you’ll have to skip to the next chapter, and backtrack to just after the scratched area of the disk. That probably takes a while, on account of your excessive optimism of just how insignificant the scratch is. Eventually, you just skip the chapter you were on altogether, hoping you didn’t miss any Oscar moments.

(By “you,” I mean my wife, at least when I watch DVDs, because I still haven’t figured out how to navigate the arcane series of steps required to coerce the remote control into skipping past the affected area. And, by this point, I’m usually too drunk to learn.)

As the next chapter begins, you realize that, while you (or my wife) have successfully crossed the Grand Canyon, several minor fault lines still emanate from the crevasse. Eventually, my much soberer wife decides that it is no longer worth her time babysitting the remote, gives up, and hands control over to me. Damn. I should really learn how to use that remote one day.

The funny thing about all of this is that the movie industry wants us to upgrade our DVDs to Blu-rays. The demos I see at electronics stores really are spectacular, too. But what the marketing fails to mention is the economics of renting, as summarized by the Fundamental Theorem of Renting:

Renters demand to rent products that are in better shape than when they return them.

Like all deep truths of the universe, this Fundamental Theorem has a number of interesting corollaries, such as the fact that, to be true, every renter must find ways to return the product in worse shape than when they rented it. Car renters intuitively know this, and home renters are required to pay a deposit to combat the Fundamental Theorem, but movie renters pretend that renting is a harmless affair.

It doesn’t matter anyway. Blu-ray is too late. By the time it came out, physical media was no longer that important. Now I just have to wait until my wife figures out how to stream Netflix…

Written by Brandon Byars

January 8, 2011 at 11:40 am

Posted in Uncategorized


RestMvc – RESTful Goodies for ASP.NET MVC

with 4 comments

Last summer, I found myself building a RESTful ASP.NET MVC service that had an HTML admin UI. Oftentimes, the resource that was being edited in HTML was the same resource that needed to be sent out in XML via the service, which mapped nicely to the REST ‘multiple representations per resource’ philosophy.

There are obviously some very nice RESTful libraries for ASP.NET MVC, but none quite met my needs. Simply Restful Routing, which comes with MVC Contrib, takes a Rails-inspired approach of handing you a pre-built set of routes that more or less match a RESTful contract for a resource. While obviously convenient, that’s never been my preferred way to manage routing. It adds a bunch of routes that you probably have no intention of implementing. It keeps the routes centralized, which never seemed to read as well to me as the way Sinatra keeps the routing configuration next to the block that handles requests to that route.

Additionally, one of the problems I encountered with other routing libraries like Simply Restful is that they define the IRouteHandler internally, which removes your ability to add any custom hooks into the routing process. I needed just such a hook to add content negotiation. I also wanted some RESTful goodies, like responding with a 405 instead of a 404 status code if we did route to a resource (identified by a URI template), but not to a requested HTTP verb on that resource. I wanted the library to automatically deal with HEAD and OPTIONS requests. In the end, I created my own open-source library called RestMvc which provides such goodies with Sinatra-like routing and content negotiation.


public class OrdersController : Controller
{
    [Get("/orders")]
    public ActionResult Index() { ... }

    [Post("/orders")]
    public ActionResult Create() { ... }

    [Get("/orders/{id}.format", "/orders/{id}")]
    public ActionResult Show(string id) { ... }

    [Put("/orders/{id}")]
    public ActionResult Edit(string id) { ... }

    [Delete("/orders/{id}")]
    public ActionResult Destroy(string id) { ... }
}

Adding the routes for the attributes above is done in Global.asax.cs, in a couple of different ways:

RouteTable.Routes.Map<OrdersController>();
// or RouteTable.Routes.MapAssembly(Assembly.GetExecutingAssembly());

That is, in effect, the entire routing API of RestMvc. The Map and MapAssembly extension methods will do the following:

  • Creates the routes defined by the HTTP methods and URI templates in the attributes. Even though System.Web.Routing does not allow you to prefix URI templates with either / or ~/, I find allowing those prefixes can enhance readability, so RestMvc permits them.
  • Routes HEAD and OPTIONS for the two URI templates (“orders” and “orders/{id}”) to a method within RestMvc capable of handling those methods intelligently.
  • Routes PUT and DELETE for /orders, and POST for /orders/{id}, to a method within RestMvc that returns a 405 HTTP status code (Method Not Allowed) with an appropriate Allow header. This method, and the ones that handle HEAD and OPTIONS, work without any subclassing of the Controller shown above. However, if you need to customize their behavior (for example, to add a body to OPTIONS), you can subclass RestfulController and override the appropriate method.
  • Adds routes for tunnelling PUT and DELETE through POST, for HTML browser support. RestMvc takes the Rails approach of looking for a hidden form field called _method set to either PUT or DELETE. If you don’t want the default behavior, or you want the tunnelling with a different form field, you can call ResourceMapper directly instead of accepting the defaults that the Map and MapAssembly extension methods provide.
  • Notice the optional format parameter on the Get attribute above the Show method. If a resource supports multiple representations, routes with an extension pass the extension as the format parameter (e.g. /orders/1.xml routes to Show with a format of xml). The ordering of the URI templates in the Get attribute is important: had I reversed the order, /orders/1.xml would have matched with an id of “1.xml” and an empty format.
  • That last point is a convenient way to handle multiple formats for a resource. Since the format is in the URL, it can be bookmarked and emailed, or tested through a browser, with the same representation regardless of the HTTP headers. Even if content negotiation is used, it allows you to bypass the standard negotiation process. Note that REST purists generally frown on different URLs for different representations of the same resource; RestMvc does not provide these routes automatically, but lets you add them if you want.
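The _method tunnelling described above can be sketched roughly as follows. This is a simplified illustration, not RestMvc's actual code; the MethodTunnelling class and its inputs are hypothetical stand-ins for the form field and request verb:

```csharp
// Sketch only (hypothetical helper, not part of RestMvc): HTML forms can
// only submit GET and POST, so the intended verb rides along in a hidden
// _method form field and is promoted here.
public static class MethodTunnelling
{
    // Returns the effective HTTP method for a request, given the actual
    // verb and the value of the hidden _method form field (null if absent).
    public static string EffectiveMethod(string httpMethod, string formMethodField)
    {
        if (httpMethod == "POST" &&
            (formMethodField == "PUT" || formMethodField == "DELETE"))
        {
            return formMethodField; // tunnel PUT/DELETE through POST
        }
        return httpMethod; // all other requests pass through untouched
    }
}
```

A POST carrying `_method=DELETE` would then be dispatched to the Destroy action, while a plain GET is unaffected.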

Content Negotiation

Content negotiation is provided as a decorator around the standard route handler. Doing it this way lets you compose in additional custom behavior that needs access to the IRouteHandler.

// In Global.asax.cs
var map = new MediaTypeFormatMap();
map.Add(MediaType.Html, "html");
map.Add(MediaType.Xhtml, "html");
map.Add(MediaType.Xml, "xml");
var connegRouter = new ContentNegotiationRouteProxy(new MvcRouteHandler(), map);
RouteTable.Routes.MapAssembly(Assembly.GetExecutingAssembly(), connegRouter);

In the absence of a route URI template specifying the format explicitly, the connegRouter will examine the Accept request header and pick the first media type supported in the map. Wildcard matches are supported (e.g. text/* matches text/html). The format parameter will be set for the route, based on the value added in the MediaTypeFormatMap.

The content negotiation is quite simple at the moment. The q parameter in the Accept header is completely ignored. By default, the proxy tries to abide by the prioritization inferred from the order of the MIME types in the Accept header. However, you can change it to let the server ordering, as defined by the order MIME types are added to the MediaTypeFormatMap, take priority. This was added to work around what I consider to be a bug in Google Chrome: despite being unable to natively render XML, it prioritizes XML over HTML in its Accept header. The library does not currently support sending back a 406 (Not Acceptable) HTTP status code when the Accept header contains no supported MIME type.
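The negotiation rules just described (client ordering honoured, q parameters ignored, wildcard matching, no 406) can be sketched like this. The Conneg class and its dictionary argument are hypothetical illustrations of the MediaTypeFormatMap idea, not RestMvc's implementation:

```csharp
// Sketch only (hypothetical code, not RestMvc's): pick the first media
// type in the Accept header that the server supports.
using System;
using System.Collections.Generic;
using System.Linq;

public static class Conneg
{
    // supportedFormats maps media types (e.g. "text/html") to format
    // strings (e.g. "html"), mirroring the MediaTypeFormatMap above.
    public static string NegotiateFormat(string acceptHeader,
        IDictionary<string, string> supportedFormats)
    {
        // Outer loop follows client ordering; q parameters are dropped.
        foreach (var mediaType in acceptHeader.Split(',')
            .Select(part => part.Split(';')[0].Trim()))
        {
            foreach (var supported in supportedFormats.Keys)
            {
                if (Matches(mediaType, supported))
                    return supportedFormats[supported];
            }
        }
        return null; // no acceptable type; the library does not yet send 406
    }

    private static bool Matches(string requested, string supported)
    {
        if (requested == "*/*" || requested == supported)
            return true;
        // Wildcard subtype: text/* matches text/html
        var reqParts = requested.Split('/');
        var supParts = supported.Split('/');
        return reqParts.Length == 2 && supParts.Length == 2
            && reqParts[1] == "*" && reqParts[0] == supParts[0];
    }
}
```

With a map of text/html → html and application/xml → xml, an Accept header of "application/xml,text/html" yields "xml" (client ordering wins), illustrating the Chrome behavior described above.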

Next Steps

I haven’t worked on RestMvc in a few months, largely because I shifted focus at work and haven’t done any .NET programming in a while. However, I had planned on adding automatic etagging and making the content negotiation more robust.
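As a rough illustration of what automatic etagging might look like (hypothetical code; nothing like this ships in RestMvc today):

```csharp
// Sketch only (hypothetical, not part of RestMvc): derive an ETag from
// the response body and answer a matching If-None-Match with 304.
using System;
using System.Security.Cryptography;
using System.Text;

public static class ETagging
{
    // Hash the rendered body into a quoted ETag value.
    public static string ComputeETag(string responseBody)
    {
        using (var md5 = MD5.Create())
        {
            var hash = md5.ComputeHash(Encoding.UTF8.GetBytes(responseBody));
            return "\"" + BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant() + "\"";
        }
    }

    // 304 Not Modified when the client's cached ETag still matches, else 200.
    public static int StatusFor(string ifNoneMatch, string currentETag)
    {
        return ifNoneMatch == currentETag ? 304 : 200;
    }
}
```

Hooked into the routing pipeline, this would let unchanged resources skip the response body entirely.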

Contributors welcome! The code can be found on GitHub.

Written by Brandon Byars

January 6, 2011 at 5:02 pm

Posted in .NET


Picking Up The Pen Again

with one comment


800.

That’s the number of days my Muse has slumbered. That’s the number of days since I set my pen down. 800 days ago, I wrote my last blog post.

Quite a bit has happened since then. The economy fell out from under us. The US elected its first black President in history. Michael Jackson died.

718 days ago, my wife delivered our second son, 3,622 days after she delivered our first son. 520 days ago, my family moved to another country. And, yesterday, my Muse awoke.

My Muse surveyed the landscape after shaking off his hibernation languor, and decided that my old blog just looked, like, so last decade. So, I moved it, and I updated it, and I changed the DNS to point to the new location¹.

800 days after setting my pen down, I have a new blog. And tomorrow, my Muse and I start discussing what to do about that.

¹ While most of the links moved over fine, the RSS feed has changed.

Written by Brandon Byars

January 4, 2011 at 12:43 pm

Posted in Writing