A Day In The Lyf

…the lyf so short, the craft so longe to lerne

mountebank v1.1.0 release


v1.1.0

mountebank is happy to provide you with the following changes in v1.1.0. See the new app
at www.mbtest.org.

More Install Options

  • v1.0 only supported installing through npm. mountebank now ships self-contained
    packages for all supported platforms, which do not require node.js to be installed on the
    target machine. See the install page for details.

New Features

  • A new ?replayable=true query parameter to retrieve all imposters or a single imposter
    without the requests array, matches array, and hypermedia. This supports runtime downloads
    of a configuration to replay later.
  • Mass update and delete capability. mountebank now supports PUT and DELETE verbs on
    /imposters. This is designed to support loading a configuration file of imposters at
    runtime.
  • A new _behaviors field available for stubs. _behaviors currently supports adding latency
    to each response (see the sketch after this list).
  • A new --nomock command line option to prevent mountebank from recording requests. This
    keeps long-running processes from leaking memory when mocking is not important.
  • A new --configfile command line option to load a configuration file at startup for
    creating imposters.
  • The config and logs resources are now exposed over JSON.
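
To make a few of these concrete, here's a minimal sketch (the port and response body are made up for illustration, and the exact JSON shape is my reading of the docs). Creating an imposter whose response is delayed half a second via _behaviors looks something like:

POST /imposters HTTP/1.1
Host: localhost:2525
Content-Type: application/json

{
  "protocol": "http",
  "port": 4545,
  "stubs": [{
    "responses": [{
      "is": { "statusCode": 200, "body": "sorry for the wait" },
      "_behaviors": { "wait": 500 }
    }]
  }]
}

A subsequent GET /imposters?replayable=true returns the configuration stripped of the requests array, matches array, and hypermedia; save that JSON to a file and you should be able to feed it to --configfile at startup, or PUT it back to /imposters to recreate every imposter in one call.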

Documentation Improvements

  • Example usages in multiple languages and multiple protocols.
  • An ATOM feed, which will be updated with release notes on every release. Subscribe in
    feedly or your favorite RSS reader.
  • A new glossary to help explain mountebank concepts.
  • Embedded site search, accessible through the search box in the header.

Bug Fixes

Many thanks to Nikos Baxevanis and
Andrew Hadley for help with this release.

Written by Brandon Byars

May 8, 2014 at 10:13 pm

Posted in Uncategorized

mountebank self contained packages


One of my principal goals with mountebank is to make it cross-platform and as easy to install as possible.  Nikos Baxevanis showed the cross-platform nature of mountebank by using F# to verify SMTP calls from the system under test. I like this a lot; I think the .NET community has been under-represented by such tools in the past, and mountebank definitely aims to correct that.

I’ve started the process of creating self-contained packages for mountebank to make the installation as painless as possible. As a traveling consultant, I often bounce between technology stacks, and I find it a hassle to install a JDK just to use a testing tool when the rest of my stack is Microsoft-based. I love CSS transpilers, but I also find it taxing to set up ruby just to manage CSS when I’m developing Java. I don’t want mountebank to be a similar hassle, requiring node.js just to use. My goal is to make it as usable out of the box as possible.

The latest version continues to ship as an npm package. However, it also ships with several other options that allow you to simply download and run from a local directory, even without node.js installed, or to use a Linux package manager to install (again without node.js installed). There’s still quite a bit of work to do – getting the packages into the primary package repositories, and adding additional packages like homebrew.

What about Windows?

mountebank runs on Windows, but requires npm install -g --production. Without the --production flag, it tries to install some testing dependencies that don’t run on Windows. I definitely aim to provide an MSI and a self-contained zip file for Windows as well (or get some help from friendly developers to do so). Please do reach out if you’re interested in helping.

Written by Brandon Byars

April 24, 2014 at 6:32 pm

Posted in Uncategorized

APIs & Documentation – Who’s the Tail and Who’s the Dog?


API Docs: The Traditionalist View

I’ve written a lot of APIs in my time (think RESTful-type APIs, not library APIs). Often, clients don’t have access to the code or don’t care to pore through it, so documentation is important. I’ve generally held the following to be somewhat of a maturity model for API documentation:

  1. No docs – call me and I’ll show you how to use it!
  2. Go to the wiki
  3. The docs are on a special page of the web application hosting the service. This is nice because the docs
    are versioned with the app; in fact, they are part of the app.
  4. The docs are auto-generated from the comments, annotations, attributes, or source code

Recent experiences have made me question this maturity model. Indeed, I’m beginning to wonder if deriving the documentation from the code is exactly backwards from the natural order of things. In short, I’ve begun to wonder: Is it better to generate the docs from the code, or test the code through the docs? I’m starting to think that the way we write API documentation based on the code may be a bit of the tail wagging the dog.

Experience 1 – Cracks in the Traditionalist View

On my current project, we’re auto-generating docs using Swagger plugins for Java and using Swagger UI to present them. Swagger – a wire format for exposing docs – is nice. The Java plugins are nice, reading many built-in annotations of the framework we’re using for exposing services. The UI is nice. None of it fits our needs perfectly, but that’s the nature of borrowing code, and, at first, the win of having documentation generated from our code seemed completely worth it.

The belated observation is that, in our noble effort to keep the documentation complete, we’ve made some subtle and insidious design compromises to satisfy the metadata Swagger requires. This isn’t a knock on the Swagger community – the plugins really are very nice. Our Java Swagger plugins use reflection to, for example, describe the structure of the return type of the resource (controller) method representing the response body. Any such technology would have to do the same.

There are a few examples of the design smells we’re noticing, but the big one is that we have an anemic domain model, since our service is essentially just a pass-through to a downstream system that doesn’t have the common courtesy to expose itself over HTTP. Using lists and maps would actually be quite a bit simpler for our scenario. We originally created the domain objects only for serialization and deserialization, but that’s a relatively simple problem to solve with lists and maps. However, we piggy-backed Swagger on top of those domain objects, and now we’re stuck with them.

If we suddenly switched to lists and maps, any use of reflection would fail to adequately describe the shape of the response object, since simple reflection depends on data known at compile time. To dynamically generate documentation through a tool, we would actually have to call our service at runtime and inspect the lists and maps being returned for the keys and values. This isn’t feasible, so we’re stuck (for now) with an anemic domain model.

Experience 2 – An Alternative Presents Itself

When I was writing mountebank (the tool we’re using for stubbing TCP and HTTP dependencies), I did what I imagine most open source developers do when they think they’re close to the first release. The sequence goes like this:

  1. Write a bunch of crappy documentation by hand (following the third option on the maturity model – inline with the app)
  2. Take one last look at the code to “polish” it
  3. In a fit of passion, write a bit more code, including complete rewrites of core functionality and substantial enhancements
  4. (Roughly one month later) notice how out-of-date the docs are. Write more code to ease the pain.

Eventually I realized I could ignore the docs no more, but in a desperate effort to continue writing code, I rationalized that I should test the docs to prevent them going stale again. So I added markup in the HTML to give me hooks to test, wrote the necessary code to interpret that markup and assert on the entire JSON, post-processed out the ephemeral bits like timestamps, and verified my docs. Simple enough. So then I wrote the next bit of documentation and re-ran the test. That was when something magical happened.

It failed.

Turns out, I had had this bug in the code for quite some time, and never noticed it. That was the case despite a fair bit of automated functional testing that examined substantial bits of the (parsed) JSON structure. I fixed it and carried on writing the docs. In the process, I discovered several more bugs, as well as a few “UX” fixes (e.g. the ordering of fields to make developer comprehension easier when testing over curl). I had not up to this point faithfully tested against the full JSON structure, nor had I previously tested the JSON structure as text instead of some internal representation. I caught more bugs in the app while writing documentation than with all the previous tests I had written combined.

I realize some would call this BDD, but it’s a constrained version of BDD, where the docs are really my production API docs, and my experience makes me wonder if this isn’t a better way of writing docs. The docs were, without question, much harder to write, as I painstakingly wrote out example narratives by hand. However, they now serve as the best tests I have in my entire codebase.

I’ve been coding for a long time, and I can’t remember the last time I experienced such a welcome surprise. What I intended to be a routine regression test turned out to completely reshape the way I think about the documentation, at least in mountebank. Based on just one experience, I’m not confident enough to suggest that the traditionalist view of API documentation is wrong. I am, however, hedging my bets.

How to test your docs, or how your docs test your API

The documentation on JavaScript injection describes the most complex scenario in the mountebank docs. The test hook markup is in each <code> tag. The docs in my example show a series of requests and responses surrounded by narrative. In some cases, I leave out the response; in others I use CSS to hide a request or response but keep it in the HTML to make the test possible.

Here’s an example request; the hooks are the data-* attributes:


<code data-test-id="predicate injection example" data-test-step="3" data-test-type="http">
POST /test HTTP/1.1
Host: localhost:4545
Content-Type: application/json

&lt;?xml&gt;
&lt;customer&gt;
  &lt;name&gt;Test&lt;/name&gt;
&lt;/customer&gt;
</code>

and an example response:


<code data-test-id="predicate injection example" data-test-verify-step="3" data-test-ignore-lines="["^Date"]">
HTTP/1.1 400 Bad Request
Connection: close
Date: Thu, 09 Jan 2014 02:30:31 GMT
Transfer-Encoding: chunked
</code>

I won't say that writing code to parse these data- attributes and execute them correctly was easy, but you can see how it works in the code. I will say that it was definitely worth it.

Written by Brandon Byars

April 16, 2014 at 8:14 am

Posted in Uncategorized

Stubbing a Mule TCP connector with mountebank


My current project involves integration with a Mule service bus and a number of endpoints responding over a TCP connector.  We have limited control over the downstream service, but depend on it deeply, as our REST service is stateless.  

We like to QA our stories using black-box testing of our API, which means sending an HTTP request to our API and expecting it to make the relevant TCP service call.  For some time now, our biggest testing problem has been manufacturing test data for all scenarios.  Were it an HTTP dependency, stubbing would have been easy, but no TCP stubbing solution existed until recently.

Our most versatile approach was also quite kludgy.  We had created Endpoint classes that encapsulated the calls to all downstream dependencies, and a parallel set of StubEndpoint classes.  When we deployed our API, we could specify a different Spring profile that wired up only the StubEndpoint classes. This meant that our stubbing logic was housed directly in our production application.

[Image: stub-spring-profile]

This is obviously an inelegant solution for our main application, since it now contained a significant amount of code that should never be used in production. Lest you think that having a switch to turn on what should be dead code in production isn’t dangerous, perhaps you should read about Knight Capital, which lost $465 million due to just such a bug.

Perhaps less obviously, this was also a terrible solution for our tests. Let’s say that we were testing an API retrieving travel plans, and we needed to test several different scenarios – plans in the past, cancelled plans, overbooked plans, etc. We might pass a plan number into our API, and then have to code a massive if-else block in our StubEndpoint that switched on the plan number to manufacture the various scenarios. We might, for example, send plan 12345 for a cancelled flight and plan 23456 for an overbooked plan. The data setup was completely opaque to the tests, and the cognitive distance between the test data setup and the test scenario opened the door for some difficult maintenance and ugly interactions between tests.

We made another attempt at stubbing using Byteman, which allows you to inject Java code into a running agent. We would have to spin up the real service alongside a Byteman agent for this to work:

[Image: stub-byteman]

This had a few big problems. First, it required us to actually stand up the real service, which added some tax to our development time when we wanted to isolation test our API. Second, it was very hard to use. Byteman takes a string written in a special syntax to inject Java code at a certain method in the remote application. Writing the string was fiddly at best. Finally, this approach deeply coupled our code to the remote service code. We had to know intimate details of the scope within which our injected code would run to correctly write the remote Java code. In the end, it proved too difficult to use.

mountebank made remote stubbing a breeze. Each test can now set up the test data specific to the scenario it represents. A class-level tearDown or setUp can remove existing stub data before each test runs.

[Image: stub-mountebank]

Each test now has full control to specify the result of the RPC call, and, unlike the Byteman approach, they can do so directly in Java code. Once we moved the mountebank interaction into a thin plumbing layer, the test code remained as clean as testing in-process with mockito.

Here’s an example test, which validates what happens through our API when we request a cancelled travel plan:


@Test
public void shouldSomethingWithCancelledTravelPlan() {
    // This is a much bigger object graph than represented here
    TravelPlan cancelledTravelPlan = new TravelPlan("1234", "2013-02-15").withStatus(CANCELLED); 

    // Now we tell mountebank to return the specified object when our app connects to the service port
    setupBinaryStub(cancelledTravelPlan, TRAVEL_PLAN_ENDPOINT_PORT);

    HttpResponse response = callOurAPIAt("http://localhost:8080/api/travelPlans/1234?date=2013-02-15");

    assertSomething(response);

    // This could be moved to a setUp or tearDown
    // I won't show the code, but it just sends an HTTP DELETE to http://localhost:2525/imposters/{port}
    tearDownStub(TRAVEL_PLAN_ENDPOINT_PORT);
}
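
For what it’s worth, the teardown mentioned in that comment is a single request over the wire, along these lines ({port} stands in for TRAVEL_PLAN_ENDPOINT_PORT):

DELETE /imposters/{port} HTTP/1.1
Host: localhost:2525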

The actual stub setup is in the setupBinaryStub method. Let’s take a look at it:


private void setupBinaryStub(Object stub, int port) throws IOException {
    HttpPost createStub = new HttpPost("http://localhost:2525/imposters");
    createStub.setHeader("Content-Type", "application/json");

    // Create a tcp imposter in binary mode that always responds with the serialized stub object
    String json = "{\n" +
                "  \"protocol\": \"tcp\",\n" +
                "  \"port\": " + port + ",\n" +
                "  \"mode\": \"binary\",\n" +
                "  \"stubs\": [\n" +
                "    { \"responses\": [{ \"is\": { \"data\": \"" + encodedString(stub) + "\" } }] }\n" +
                "  ]\n" +
                "}";

    createStub.setEntity(new StringEntity(json));
    final HttpResponse response = new DefaultHttpClient().execute(createStub);

    if (response.getStatusLine().getStatusCode() != 201) {
        throw new RuntimeException("expected 201 status but was " + response.getStatusLine().getStatusCode());
    }
}

The magic sauce is in the encodedString method. Here we have to figure out how our remote service serializes the object graph, and encode it as a base64 string. Figuring out the serialization mechanism can be the trickiest part of all of this. In our case, a short investigation into the code reveals it’s simply Java serialization:


private String encodedString(Object objectToWrite) throws IOException {
    // SerializationUtils is from commons-lang; Base64 is from commons-codec
    byte[] data = SerializationUtils.serialize((Serializable) objectToWrite);
    return new Base64().encodeAsString(data);
}

Now we can manufacture any test scenario we want, and better yet, we can keep the test data setup scoped within each test without any fear of that data interfering with other tests.

Check it out:

Github repository

Heroku site

Written by Brandon Byars

February 10, 2014 at 5:00 am

Multi-protocol remote stubbing with mountebank


mountebank is a pet project I’ve been working on for a while now to provide multi-protocol and multi-language stubbing.  It does HTTP and HTTPS stubbing, but HTTP and HTTPS stubbing is a solved problem.

What mountebank does that’s really cool is stubbing and mocking for other protocols.  TCP stubbing is way cool because I’m not aware of an equivalent tool currently available.  We spiked this out at my current client to stub an RMI-like protocol.  The test code created the Java object that it wanted the remote server to return, serialized it, and relied on mountebank to do the right thing at the right time.
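
As a rough sketch of what that spike looks like on the wire (the port is made up; the data field carries the base64-encoded bytes the test wants the server to return):

POST /imposters HTTP/1.1
Host: localhost:2525
Content-Type: application/json

{
  "protocol": "tcp",
  "port": 4545,
  "mode": "binary",
  "stubs": [
    { "responses": [{ "is": { "data": "<base64-encoded serialized object>" } }] }
  ]
}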

It also supports SMTP, at least for mock verification.  That way your app under test can connect to a real SMTP server, but that server won’t send any actual email.  It will, however, allow your test to verify the data sent.

This first release of mountebank ships with:

  • over the wire mock verification for http, https, tcp, and smtp
  • over the wire stubbing for http, https, and tcp
  • advanced stubbing predicates and multi-response stubbing for stateful services
  • record and replay proxying for http, https, and tcp
  • javascript injection when the tool doesn’t do what you need

Feedback welcome! 

heroku site: http://www.mbtest.org/

github: https://github.com/bbyars/mountebank

Written by Brandon Byars

February 3, 2014 at 8:25 pm

Posted in Uncategorized

Introducing httpmock


node.js is simply too cool to ignore. I find the trick to learning any new cool framework is to find a new cool project to write, and use the framework to write it in. httpmock is my cool new project.

I tend to work in large’ish programs of work with lots of teams. One recurring pain point in such an environment is dealing with other teams. Sure, your code is fine, but what happens if you have to integrate with their code? These days, that tends to be through web services, and it’s never pretty. Your team writes integration tests, verifies they work, and commits the changes. Three days later, for no apparent reason, your build goes red. It turns out that other team broke their code again.

httpmock is a network server stubbing tool. I say “network,” primarily because I’ve got my dreaming hat on. At the moment, it’s an HTTP stubbing tool, but I’d like to add other protocols, especially SMTP. httpmock exposes a RESTful API that lets you spin up HTTP servers that you can control, and that helpfully record how they were used so you can make useful test assertions. In other words, httpmock expects you to execute your code full-stack, including making a full TCP handshake across the network. httpmock just lets you control both hands involved in the handshake, and it does so in a way transparent to your code; you just have to point your configuration to the stub servers httpmock creates. I find this quite useful for functional testing.

Don’t get me wrong. Integration tests are important, and you should still have a set of tests that exercise the actual system, but expect some fragility in those tests. If they break, it doesn’t necessarily mean you broke them. Therefore, they should be in a separate build, one that may not have the same “stop the line” implications when it goes red. httpmock, on the other hand, is for the builds that are entirely in the control of your team.

When I say that httpmock exposes a RESTful API, I mean it’s a truly RESTful API. There’s only one published URL, and hypermedia drives the interaction from there. This makes it easy to evolve the server while maintaining a stable client-side API. At the moment, I have only a Java binding, but I have intentions for more (and I’d love some help on the issue!). The idea of the language bindings is to present to the developer a friendly API that approximates normal unit-testing mocking frameworks without having to write your tests in a language different from your production stack.
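
To illustrate the idea (this JSON is purely hypothetical – I’m not promising httpmock’s actual representation), the single published URL might return something like the following, and clients follow the rel names rather than hard-coding URLs:

GET / HTTP/1.1
Host: localhost:3000

HTTP/1.1 200 OK
Content-Type: application/json

{
  "links": [
    { "rel": "create-server", "href": "http://localhost:3000/servers" }
  ]
}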

Examples


@Test
public void showVerificationAPI()
{
    StubServer stub = ControlServer.at("http://localhost:3000").setupPort(3001);
    SystemUnderTest yourCode = new SystemUnderTest();
    
    yourCode.doSomethingThatCallsAWebService();
    
    assertThat(stub, wasCalled("POST", "/endpoint")
                        .withHeader("Content-Type", "text/plain")
                        .withHeader("Accept", "text/plain")
                        .withBodyContaining("TEST"));
    stub.close();
}

The code above shows the Java assertions. The first line actually makes two network calls: one to talk to the well-known httpmock URL (in this case, http://localhost:3000) to get the hypermedia, and a second to tell httpmock to set up a new server for stubbing purposes, which is then spun down on the last call in the method. For performance reasons, you may want to do some of these administrative tasks once for the test suite, particularly retrieving the initial hypermedia. The wasCalled method also talks to httpmock to perform the verification. The SystemUnderTest should base its web service calls on http://localhost:3001.


@Test
public void showStubbingAPI()
{
    StubServer stub = ControlServer.at("http://localhost:3000").setupPort(3001);
    stub.on("GET", "/").returns(new Stub().withStatus(200).withBody("BODY"));
    SystemUnderTest yourCode = new SystemUnderTest();
    
    yourCode.doSomethingThatShouldCall("http://localhost:3001/");
    
    stub.close();
}

Like typical stubbing libraries, httpmock allows you to set up stubs that will be executed if the server receives the expected call.

Installing

You need node.js to run the server. You can download httpmock from the incredibly ugly downloads page. Untar the server, and run bin/httpmock start.

This will run the control server on port 3000. You can optionally pass a --port command line argument to change the port and a --pidfile argument to use a non-default pidfile (useful if you’re running multiple instances). If you run the server in a background job, bin/httpmock stop will kill it.
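
For example (the port and pidfile values are made up, and I’m assuming stop accepts the same --pidfile flag, which is how you’d target one of several running instances):

bin/httpmock start --port 3500 --pidfile /tmp/httpmock-3500.pid
bin/httpmock stop --pidfile /tmp/httpmock-3500.pid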

If you’re using npm, you can install httpmock with sudo npm install -g httpmock. Installing it globally allows you to simply say httpmock start on the command line, which feels quite sexy.

The client bindings will simply be a library. For Java, add httpmock.jar to your classpath.

Roadmap

The following are the features I’d like to add to httpmock:

  • Support for multiple protocols, at least for verification if stubbing doesn’t make sense (SMTP, FTP, perhaps even things like printer protocols). It appears at the moment easy enough to support a plugin mechanism where the same REST API will work for multiple protocols, but I won’t know for sure until getting around to a second protocol.
  • Support for HTML. Right now, there is a JSON-based content type used for all communication, which is great for programmatic manipulation. I think it would also be quite nice to allow QAs to manually set up stubs and perform verifications through a web browser.
  • A more robust stubbing mechanism. It’d be nice, for example, to only conditionally execute stubs if a specific request header was set.
  • More language bindings.

Want to contribute? I’d love the help. Send me a pull request at github.

Written by Brandon Byars

May 30, 2011 at 9:48 pm

Posted in Testing


The Fundamental Theorem of Renting


I should start by saying that I’m not a videophile. My TV is color, but it’s at least as deep as it is wide, and it’s certainly not high-def.

But I do know this: DVDs suck, and the DVD rental industry is broken.

Trying to improve DVDs by making them higher resolution is like trying to fix ice sculptures by making them prettier. It works great, until summer rolls around.

As far as I can tell, the media industry (which includes both video and audio media) has been moving towards higher resolution at the expense of longevity. LP, in case you didn’t know, stands for “long-playing,” and I can still play some of my grandma’s. You could literally do jumping jacks on eight-tracks and they’d be fine. Tapes might get out of sorts from time to time, but you could fix them by sticking a pencil in the cogs and getting the tape wound correctly again.

Then the music industry came out with CDs, and the digital revolution began. Fast forward a bit, and here we stand.

Luke, I am your, your, your, your, your, your, your

Damn, just when I was getting into it, too. I haven’t been keeping strict tabs on it, largely because I don’t keep strict tabs on anything, but I’d guess that approximately two videos out of every one I rent have a significant scratch, strategically located right at the climax of the movie. Not at the beginning, mind you, because then the previous renter would have failed at maximizing the amount of time you waste. If they put the scratch at the beginning of the movie, you might even still be sober enough to storm back to the movie store and demand another copy. Put the scratch at the end, and most movie-goers will just assume that they saw the good parts anyhow, and forget about it. But climactic scratches leave scars. By the time the scratch actually does manifest itself, you’re expected to suffer through the still-frame slideshow, under the delusional hope that in just a couple of minutes, the show will resume as normal.

When that fails, naturally, you’re supposed to resign yourself to the fact that you’ll have to skip to the next chapter, and backtrack to just after the scratched area of the disk. That probably takes a while, on account of your excessive optimism about just how insignificant the scratch is. Eventually, you just skip the chapter you were on altogether, hoping you didn’t miss any Oscar moments.

(By “you,” I mean my wife, at least when I watch DVDs, because I still haven’t figured out how to navigate the arcane series of steps required to coerce the remote control into skipping past the affected area. And, by this point, I’m usually too drunk to learn.)

As the next chapter begins, you realize that, while you (or my wife) have successfully crossed the Grand Canyon, several minor fault lines still emanate from the crevasse. Eventually, my much soberer wife decides that it is no longer worth her time babysitting the remote, gives up, and hands control over to me. Damn. I should really learn how to use that remote one day.

The funny thing about all of this is that the movie industry wants us to upgrade our DVDs to Blu-rays. The demos I see at electronics stores really are spectacular, too. But what the marketing fails to mention is the economics of renting, as summarized by the Fundamental Theorem of Renting:

Renters demand to rent products that are in better shape than when they return them.

Like all deep truths of the universe, this Fundamental Theorem has a number of interesting corollaries, such as the fact that, for it to hold, every renter must find ways to return the product in worse shape than when they rented it. Car renters intuitively know this, and home renters are required to pay a deposit to combat the Fundamental Theorem, but movie renters pretend that renting is a harmless affair.

It doesn’t matter anyway. Blu-ray is too late. By the time it came out, physical media was no longer that important. Now I just have to wait until my wife figures out how to stream Netflix…

Written by Brandon Byars

January 8, 2011 at 11:40 am

Posted in Uncategorized

