Archive for the ‘Uncategorized’ Category
- post-processing responses
- easier proxy replay using the removeProxies query parameter when saving imposter configuration
- improved validation during imposter creation
- support for storing imposters in multiple files when starting up
- various bug fixes and doc updates (see the release pages for details)
Since the last release, there have been some good articles on testing in microservice environments and the tools involved, with some nice mountebank mentions.
mountebank is happy to provide you with the following changes in v1.1.0.
More Install Options
- v1.0 only supported installing through npm. mountebank now supports self-contained packages on all supported platforms, which do not require node.js to be installed on the target platform. See the install page for details.
- A new query parameter to retrieve all imposters or a single imposter without the matches array and hypermedia. This supports runtime downloads of a configuration to replay later.
- Mass update and delete capability on /imposters. This is designed to support loading a configuration file of imposters at startup.
- A new _behaviors field available for stubs. _behaviors currently supports adding latency to each response.
- A new --nomock command line option to prevent mountebank from recording requests. This prevents long-running processes from leaking memory when mocking is not important.
- A new --configfile command line option to load a configuration file at startup for creating imposters.
- The logs resource is now exposed over JSON.
- Example usages in multiple languages and multiple protocols.
- An ATOM feed, which will be updated with release notes on every release. Subscribe in
feedly or your favorite RSS reader.
- A new glossary to help explain mountebank concepts.
- Embedded site search, accessible through the search box in the header.
- Fixed incorrect handling of JSON null values
- Fixed inconsistent end tags on the stubs
- mountebank now supports numbers
- Fixed rendering of several pages in Internet Explorer.
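Several of the features above compose naturally: the --configfile option loads a file of imposters, and the _behaviors field adds latency to individual responses. Here is a minimal sketch of such a configuration file, following mountebank's documented structure; the port, path, body, and wait values are illustrative:

```json
{
  "imposters": [
    {
      "protocol": "http",
      "port": 4545,
      "stubs": [
        {
          "predicates": [{ "equals": { "path": "/customers" } }],
          "responses": [
            {
              "is": { "statusCode": 200, "body": "hello" },
              "_behaviors": { "wait": 500 }
            }
          ]
        }
      ]
    }
  ]
}
```

You would then start mountebank with something like mb --configfile imposters.json, where imposters.json is whatever you named the file.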
One of my principal goals with mountebank is to make it cross-platform and as easy to install as possible. Nikos Baxevanis showed the cross-platform nature of mountebank by using F# to verify SMTP calls from the system under test. I like this a lot; I think the .NET community has been under-represented by such tools in the past, and mountebank definitely aims to correct that.
I’ve started the process of creating self-contained packages for mountebank to make the installation as transparent as possible. As a traveling consultant, I often bounce between technology stacks, and I find it a hassle to have to install a JDK to use a testing tool when the rest of my stack is Microsoft-based. I love CSS transpilers, but I also find it taxing to set up ruby just to manage CSS when I’m developing Java. I don’t want mountebank to be a similar hassle, requiring node.js just to use it. My goal is to make it as usable out of the box as possible.
The latest version continues to ship as an npm package. However, it also ships with several other options that allow you to simply download and run from a local directory, even without node.js installed, or to use a Linux package manager to install (again without node.js installed). There’s still quite a bit of work to do: getting the packages into the primary package repositories, and adding additional packages like homebrew.
What about Windows?
mountebank runs on Windows, but requires npm install -g --production. Without the --production flag, it tries to install some testing dependencies that don’t run on Windows. I definitely aim to get an msi and a self-contained zip file for Windows as well (or get some help from friendly developers). Please do reach out if you’re interested in helping.
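In command form, the npm route described above looks like this; the package name is the one on npm, and mb is the executable it installs:

```shell
# On Windows (and anywhere you want to skip dev dependencies),
# install with the --production flag:
npm install -g mountebank --production

# Then start the server:
mb
```

This is a usage sketch rather than a full walkthrough; see the install page for the self-contained packages that avoid node.js entirely.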
API Docs: The Traditionalist View
I’ve written a lot of APIs in my time (think RESTful-type APIs, not library APIs). Oftentimes, clients don’t have access to the code or don’t care to pore through it, so documentation is important. I’ve generally held the following to be somewhat of a maturity model for API documentation:
- No docs – call me and I’ll show you how to use it!
- Go to the wiki
- The docs are on a special page of the web application hosting the service. This is nice because the docs are versioned with the app; in fact, they are part of the app.
- The docs are auto-generated from the comments, annotations, attributes, or source code
Recent experiences have made me question this maturity model. Indeed, I’m beginning to wonder if deriving the documentation from the code is exactly backwards from the natural order of things. In short, I’ve begun to wonder: Is it better to generate the docs from the code, or test the code through the docs? I’m starting to think that the way we write API documentation based on the code may be a bit of the tail wagging the dog.
Experience 1 – Cracks in the Traditionalist View
On my current project, we’re auto-generating docs using Swagger plugins for Java and using Swagger UI to present them. Swagger – a wire format for exposing docs – is nice. The Java plugins are nice, reading many built-in annotations of the framework we’re using for exposing services. The UI is nice. None of it fits our needs perfectly, but that’s the nature of borrowing code, and, at first, the win of having documentation generated from our code seemed completely worth it.
The belated observation is that, in our noble effort to keep the documentation complete, we’ve made some subtle and insidious design compromises to satisfy the metadata Swagger requires. This isn’t a knock on the Swagger community – the plugins really are very nice. Our Java Swagger plugins use reflection to, for example, describe the structure of the return type of the resource (controller) method representing the response body. Any such technology would have to do the same.
There are a few examples of the design smells we’re noticing, but the big one is that we have an anemic domain model, since our service is essentially just a pass-through to a downstream system that doesn’t have the common courtesy to expose itself over HTTP. Using lists and maps would actually be quite a bit simpler for our scenario. All we originally created the domain objects for was serialization and deserialization, but that’s a relatively simple problem to solve with lists and maps. However, we piggy-backed Swagger on top of those domain objects, and now we’re stuck with them.
If we suddenly switched to lists and maps, any use of reflection would fail to adequately describe the shape of the response object, since simple reflection depends on data known at compile time. To dynamically generate documentation through a tool, we would actually have to call our service at runtime and inspect the lists and maps being returned for the keys and values. This isn’t feasible, so we’re stuck (for now) with an anemic domain model.
Experience 2 – An Alternative Presents Itself
When I was writing mountebank (the tool we’re using for stubbing TCP and HTTP dependencies), I did what I imagine most open source developers do when they think they’re close to the first release. The sequence goes like this:
- Write a bunch of crappy documentation by hand (following the third option on the maturity model – inline with the app)
- Take one last look at the code to “polish” it
- In a fit of passion, write a bit more code, including complete rewrites of core functionality and substantial enhancements
- (Roughly one month later) notice how out-of-date the docs are. Write more code to ease the pain.
Eventually I realized I could ignore the docs no more, but in a desperate effort to continue writing code, I rationalized that I should test the docs to prevent them going stale again. So I added markup in the HTML to give me hooks to test, wrote the necessary code to interpret that markup and assert on the entire JSON, post-processed out the ephemeral bits like timestamps, and verified my docs. Simple enough. So then I wrote the next bit of documentation and re-ran the test. That was when something magical happened.
Turns out, I had had this bug in the code for quite some time and never noticed it, despite a fair bit of automated functional testing which examined substantial bits of the (parsed) JSON structure. I fixed it and carried on writing the docs. In the process, I discovered several more bugs, as well as a few “UX” fixes (e.g. the ordering of fields to make developer comprehension easier when testing over curl). I had not up to this point faithfully tested against the full JSON structure, nor had I previously tested the JSON structure as text instead of some internal representation. I caught more bugs in the app while writing documentation than with all the previous tests I had written combined.
I realize some would call this BDD, but it’s a constrained version of BDD, where the docs really are my production API docs, and my experience makes me wonder if this isn’t a better way of writing docs. The docs were, without question, much harder to write, as I painstakingly wrote out example narratives by hand. However, they now serve as the best tests I have in my entire codebase.
I’ve been coding for a long time, and I can’t remember the last time I experienced such a welcome surprise. What I intended to be a routine regression test turned out to completely reshape the way I think about the documentation, at least in mountebank. Based on just one experience, I’m not confident enough to suggest that the traditionalist view of API documentation is wrong. I am, however, hedging my bets.
How to test your docs, or how your docs test your API
Here’s an example request; the hooks are the data-test-* attributes:
<code data-test-id="predicate injection example" data-test-step="3" data-test-type="http"> POST /test HTTP/1.1 Host: localhost:4545 Content-Type: application/json <?xml> <customer> <name>Test</name> </customer> </code>
and an example response
<code data-test-id="predicate injection example" data-test-verify-step="3" data-test-ignore-lines='["^Date"]'> HTTP/1.1 400 Bad Request Connection: close Date: Thu, 09 Jan 2014 02:30:31 GMT Transfer-Encoding: chunked </code>
I won't say that writing code to parse these data- attributes and execute them correctly was easy, but you can see how it works in the code. I will say that it was definitely worth it.
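To make the mechanism concrete, here is a rough sketch of how markup like that can be scraped for test hooks. This is an illustration, not mountebank's actual test harness; the attribute names mirror the examples above, but the parsing approach (a simple regex) is my own shortcut:

```javascript
// Extract each <code> block's data-* attributes and body so a test
// runner could pair request steps with their verification steps.
function extractSteps(html) {
  const steps = [];
  const codeRegex = /<code([^>]*)>([\s\S]*?)<\/code>/g;
  let block;
  while ((block = codeRegex.exec(html)) !== null) {
    const attrs = {};
    const attrRegex = /data-([\w-]+)="([^"]*)"/g;
    let attr;
    while ((attr = attrRegex.exec(block[1])) !== null) {
      attrs[attr[1]] = attr[2];
    }
    steps.push({ attrs, body: block[2].trim() });
  }
  return steps;
}

const html = `
<code data-test-id="example" data-test-step="1" data-test-type="http">GET / HTTP/1.1</code>
<code data-test-id="example" data-test-verify-step="1">HTTP/1.1 200 OK</code>`;

const steps = extractSteps(html);
// steps[0].attrs['test-step'] pairs with steps[1].attrs['test-verify-step']
```

A real runner would then execute each data-test-step over the wire and diff the actual response against the data-test-verify-step block, dropping any lines matched by data-test-ignore-lines.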
mountebank is a pet project I’ve been working on for a while now to provide multi-protocol and multi-language stubbing. It does HTTP and HTTPS stubbing, but HTTP and HTTPS stubbing is a solved problem.
What mountebank does that’s really cool is stubbing and mocking for other protocols. TCP stubbing is way cool because I’m not aware of an equivalent tool currently available. We spiked this out at my current client to stub an RMI-like protocol. The test code created the Java object that it wanted the remote server to return, serialized it, and relied on mountebank to do the right thing at the right time.
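That spike can be sketched as a tcp imposter definition. This follows mountebank's documented /imposters API as I understand it; the base64 payload here is just a stand-in for the serialized Java object the test created:

```json
{
  "protocol": "tcp",
  "port": 4546,
  "mode": "binary",
  "stubs": [
    {
      "responses": [
        { "is": { "data": "c2VyaWFsaXplZC1qYXZhLW9iamVjdA==" } }
      ]
    }
  ]
}
```

POST that to mountebank's /imposters resource, and anything connecting to port 4546 gets the canned bytes back; in binary mode, the data field is base64-encoded.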
It also supports SMTP, at least for mock verification. That way your app under test can connect to a real SMTP server, but that server won’t send any actual email. It will, however, allow your test to verify the data sent.
This first release of mountebank ships with:
- over the wire mock verification for http, https, tcp, and smtp
- over the wire stubbing for http, https, and tcp
- advanced stubbing predicates and multi-response stubbing for stateful services
- record and replay proxying for http, https, and tcp
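To give a flavor of the API, here is a minimal http stub creation request in the same style as the docs; the port, path, and body are illustrative:

```http
POST /imposters HTTP/1.1
Host: localhost:2525
Content-Type: application/json

{
  "protocol": "http",
  "port": 4545,
  "stubs": [
    {
      "predicates": [{ "equals": { "path": "/customers" } }],
      "responses": [{ "is": { "statusCode": 200, "body": "hello" } }]
    }
  ]
}
```

Once created, any request to http://localhost:4545/customers returns the canned response until the imposter is deleted.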
heroku site: http://www.mbtest.org/
I should start by saying that I’m not a videophile. My TV is color, but it’s at least as deep as it is wide, and it’s certainly not high-def.
But I do know this: DVDs suck, and the DVD rental industry is broken.
Trying to improve DVDs by making them higher resolution is like trying to fix ice sculptures by making them prettier. It works great, until summer rolls around.
As far as I can tell, the media industry (which includes both video and audio media) has been moving towards higher resolution at the expense of longevity. LP, in case you didn’t know, stands for “long-playing,” and I can still play some of my grandma’s. You could literally do jumping jacks on eight-tracks and they’d be fine. Tapes might get out of sorts from time to time, but you could fix them by sticking a pencil in the cogs and getting the tape wound correctly again.
Then the music industry came out with CDs, and the digital revolution began. Fast forward a bit, and here we stand.
Luke, I am your, your, your, your, your, your, your
Damn, just when I was getting into it, too. I haven’t been keeping strict tabs on it, largely because I don’t keep strict tabs on anything, but I’d guess that approximately two videos out of every one I rent have a significant scratch, strategically located right at the climax of the movie. Not at the beginning, mind you, because then the previous renter would have failed at maximizing the amount of time you waste. If they put the scratch at the beginning of the movie, you might even still be sober enough to storm back to the movie store and demand another copy. Put the scratch at the end, and most movie-goers will just assume that they saw the good parts anyhow, and forget about it. But climactic scratches leave scars. By the time the scratch actually does manifest itself, you’re expected to suffer through the still-frame slideshow, under the delusional hope that in just a couple of minutes, the show will resume as normal.
When that fails, naturally, you’re supposed to resign yourself to the fact that you’ll have to skip to the next chapter, and backtrack to just after the scratched area of the disk. That probably takes a while, on account of your excessive optimism of just how insignificant the scratch is. Eventually, you just skip the chapter you were on altogether, hoping you didn’t miss any Oscar moments.
(By “you,” I mean my wife, at least when I watch DVDs, because I still haven’t figured out how to navigate the arcane series of steps required to coerce the remote control into skipping past the affected area. And, by this point, I’m usually too drunk to learn.)
As the next chapter begins, you realize that, while you (or my wife) have successfully crossed the Grand Canyon, several minor fault lines still emanate from the crevasse. Eventually, my much soberer wife decides that it is no longer worth her time babysitting the remote, gives up, and hands control over to me. Damn. I should really learn how to use that remote one day.
The funny thing about all of this is that the movie industry wants us to upgrade our DVDs to Blu-rays. The demos I see at electronics stores really are spectacular, too. But what the marketing fails to mention is the economics of renting, as summarized by the Fundamental Theorem of Renting:
Renters demand to rent products that are in better shape than when they return them.
Like all deep truths of the universe, this Fundamental Theorem has a number of interesting corollaries, such as the fact that, to be true, every renter must find ways to return the product in worse shape than when they rented it. Car renters intuitively know this, and home renters are required to pay a deposit to combat the Fundamental Theorem, but movie renters pretend that renting is a harmless affair.
It doesn’t matter anyway. Blu-ray is too late. By the time it came out, physical media was no longer that important. Now I just have to wait until my wife figures out how to stream Netflix…