Wednesday, December 18, 2013

Testing new functionality: Log-in and features at Mailinator.com

[This is a guest post by Marketing/Growth-Hacking consultant Martha Andrews]

We have been thinking of ways (and getting many suggestions) to improve and enhance the Mailinator experience. We’ve been working hard (maybe you’ve noticed) to grow. We want to reach new users, and invite current users to use us more!
After the redesign this summer, and the launch of the adventures of Mailinator Guy, we took a look at the long list of suggestions that came from site users. It turns out some of them wanted to make the disposable service… well, less disposable.
So here it is: log in with a Gmail account, and get an email address (that you can customize from available inboxes). Plus, this gives you an easy way to direct all the email from a selected address to that inbox – and one-click forwarding to your Gmail account. Not bad for a start.
And I know what many are thinking… I have to sign in?  No, no you don’t.  The site still works just as it always has without signing up.
Let’s be clear – Mailinator isn’t looking to replace your primary email system.  There are just too many good, free webmail applications out there – and heck, you probably have as many as I do. But we've added new functionality that lets Mailinator act as a satellite email system to your real account: one-click forwarding, storage, and a private email address.  Our goal is to give you the disposable tool you love, plus options for longer-term use.  We want to be your #1 secondary email address! (if that makes sense)

We are only allowing access to gmail users, for now, but that may change if we see traction.

Here is what you get for logging in:
You will get your very own private inbox - and your very own email address: <username>@mailinator.com.  Super, right? A username is assigned to you when you log in - but you can change it, up to once a day, to anything that is not already in use.  You will be able to receive, store, and one-click forward anything that hits this inbox to your Gmail account.

You will also be able to store emails from any inbox at Mailinator.com in your private inbox with the click of a button. You can also pick a particular inbox at Mailinator (one at a time) and have all the email sent to that address forwarded to your private inbox. Mind you, it will also be available to the public, because that is what we do here at Mailinator. All Mailinator.com (and alternate domain) email is available to anyone who checks that inbox, just like always.

All of this is illustrated below: The leftmost column is the public Mailinator as you've always used it. The middle column is now Mailinator with your Private Inbox (long-term storage).

So there it is: innovation – or at least trying new things – in action.  Please give it a try, and let us know what you think on the site.

Saturday, November 9, 2013

Increasing the Brand Value of Mailinator with a Web Comic

[This is a guest post by Marketing/Growth-Hacking consultant Martha Andrews]

We have been investigating ways to increase brand awareness for Mailinator.com.

Mailinator has always been a useful site providing free, disposable email - but analysis of Mailinator's usage patterns indicated that users tended to visit on a monthly or bi-weekly basis rather than weekly or daily. In some ways this makes sense – the site provides a utility, but the general public doesn't necessarily need that utility daily. When it is needed, however, our goal is to make sure you think of Mailinator!

To raise usage levels of the site, we focused on public awareness, expecting that increased DAU (and MAU) would be a natural side effect.  I’ve attended many trainings in the last year, and I cannot tell you how many times I’ve heard, “Create interesting and useful content and people will flock to your site… It's all about providing content.”

But what content could we give to our users?  We already give away the service – we have a useful tool, and want more folks to know about it and use it.

Before creating content, something to consider is: who is in our user base?  Web analytics told us the primary user base is male, ages 18-40 – think of the average Reddit user.  So how do we engage 18-40 year old men, and potentially other demographic groups?  How do we get them to think of Mailinator more often?  What could we provide that would be of use to them?  Getting folks to think of the site more often, and giving them content and reasons to visit, would likely expand their current usage pattern.

We tried targeted ads through traditional web and social channels. Specifically, we experimented with Google AdWords, Facebook, and Twitter.  The ads provided momentary increases in traffic to varying degrees, but nothing long-lasting.  We can get more users to Mailinator any day of the week by spending some money, but that cannot be sustained given that the site does not generate direct user revenue.

During a recent site redesign we incorporated a new mascot of sorts – an amiable looking caped crusader we dubbed “Mailinator Guy” - the Anti-Spam Superhero. 
The more we lived with Mailinator Guy, the more often we started asking: what if we launched a web comic based on Mailinator Guy?  Would a comic be appropriate content?  If you've ever read Mailinator's FAQ you know the tone of the site is already quite tongue-in-cheek - we at least amuse ourselves.  Frankly the thought of bringing Mailinator Guy to life was simply too fun an experiment not to try.

ComicRocket has over 35,000 web comics indexed on their site alone, so competition for user attention is stiff.  However we found Facebook ads for the comic to be highly successful in engaging a new audience.  We think it works well because they are demographically targetable, colorful, eye catching and fun.  A thumbnail or portion of the strip is always displayed in the posts and ads.

We also engage our existing audience through the Mailinator Facebook page (with over 30,000 likes). There is an announcement on the page when a new comic is published. Additionally, the colorful thumbnail version of the strip is shown on the landing page and on mailbox pages at Mailinator.com.  We have found that as we release a comic each week, we are slowly habituating our audience to check on a weekly basis – and if they follow us on Twitter or are a Facebook fan, we have an excuse to contact them.

The feedback loop between the two sites is strong. Mailinator.com provides the free disposable email utility that brings in new users by itself. The site's design reinforces the Mailinator Guy character and encourages our primary demographic, arguably also the primary demographic for web comics, to follow the comic.  The comic, in turn, provides a weekly enticement for a visit. That visit keeps the Mailinator brand fresh in our users' minds, closing the loop.

Of course some users of Mailinator will never care about the comic. And some users of the comic will never care about Mailinator.com.  And that's just fine - both sites have inherent stand-alone value.

At the time of this writing we have published six episodes of The Adventures of Mailinator Guy. Our advertising efforts, both paid and direct referrals from Mailinator.com, have driven traffic from an average of 1,500 unique weekly visitors in the first three weeks to over 3,000 in the following three weeks.  The most traffic occurs on Mondays, when we usually post and announce a new episode.  Along with the increase in users, producing the comic has become more efficient and economical - and it's still pretty darn fun.

Importantly - traffic to Mailinator.com has increased 15% in the 6 weeks since the launch of the comic and continues to grow.

These numbers are very encouraging but honestly the most important outcome is the increase in Mailinator brand recognition. The site's brand value will continue to grow and that's the real goal of this exercise.  

At the end of the day - everyone is happy.  More people know about the Mailinator brand (which makes us happy), and they have either avoided some spam by using our free disposable email service, had a laugh at the expense of Mailinator Guy and his sidekick (you'll have to read the strip to find out about him), or both.

If you're up for following the adventures of Mailinator Guy yourself - here's the link to get you started:

The Adventures of Mailinator Guy! 

Saturday, October 5, 2013

Introducing .... Mailinator Guy

Let me tell you a secret - if you want a positively sure way to get unfettered love and hate mail at the same time - just go redesign a website.

If you've used Mailinator for any length of time, you've seen that we changed the website design a few years back. We brought it from probably somewhere in Web 1.0 to the current day.

The redesign was more than cosmetic, however - it changed how Mailinator worked. A great deal of internal code was added to prevent abuse of third-party sites through Mailinator.

Overall the redesign was a huge upgrade and success - and I promise you the lovers far outweigh the haters :) (site traffic saw a significant and sustained increase too).

Our redesign this summer also served to introduce Mailinator.com's anti-spam superhero - Mailinator Guy!

He's here to save your inbox from spam. He can fly, has a magic envelope he carries with him everywhere, and of course has a neato secret lair.

Now - Mailinator Guy has his own webcomic!  So if you're a fan of Mailinator and like to read webcomics - what are you waiting for?

Check it out here:

Tuesday, April 16, 2013

Code Review for a String Lock Map

I recently needed a mechanism to lock on given Strings in a particular piece of code. As a (maybe weak) example, say I was doing disk updates to files, each with a unique filename (i.e. read, update, write-back). I could have synchronized the methods that modified the files, but I preferred not to lock at that level - that would mean only one file update could occur at a time, regardless of filename.

I'd prefer to simply lock on that file - or more specifically, that filename. It's a dubious proposition to lock on String objects in Java, as different objects can represent the same set of characters. You can intern() all your strings (which makes sure the objects you have are the "one" system-wide representation of that String), but that is rather global. I wanted to do things locally - where I can control the scoping of the lock.
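A minimal sketch of why that's dubious (the class name and filename here are mine, just for illustration): two Strings can be equals() while still being distinct objects, so a synchronized block on each would take two different monitors.

```java
public class StringIdentityDemo {
  // Same characters does not mean same object (and locks go on objects).
  static boolean sameChars(String a, String b)   { return a.equals(b); }
  static boolean sameMonitor(String a, String b) { return a == b; }

  public static void main(String[] args) {
    String a = new String("settings.cfg");
    String b = new String("settings.cfg");

    System.out.println(sameChars(a, b));   // true  - equal characters
    System.out.println(sameMonitor(a, b)); // false - two objects, two monitors

    // intern() maps both to the single JVM-wide instance, so the locks
    // would collide correctly - but the scope is global to the whole JVM.
    System.out.println(sameMonitor(a.intern(), b.intern())); // true
  }
}
```

That JVM-global scope of intern() is exactly the problem a locally scoped lock structure avoids.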

(The filename example is just an example, I've needed this structure in many different scenarios - so don't look to just solve that problem).

I might have missed it, but I didn't find any facilities in java.util.concurrent to help me (and to be fair, I haven't kept up with the happenings there). Long story short, I wrote my own.

Of course the best advice when writing concurrent locking datastructures is - DON'T.

But I did. And I probably plan on using it eventually, so I put it here for review. I'll update it as comments come in (so if you see a comment below that makes little sense it's probably because I heeded it and changed the code).

The use case I want:

public static final StringLock stringLock = new StringLock();

stringLock.lock(filename);
try {
     // load disk file
     // change its contents a bit
     // rewrite it
} finally {
     stringLock.unlock(filename);
}
So… find below the code I've come up with so far. It's a bit heavy under contention, but in the normal case (no contention) it syncs 3 times (writes on ConcurrentHashMap being a sync). I could also use Cliff Click's NonBlockingHashMap to remove two of the syncs, and I'd be happy to be convinced that'd be superior for some reason.

import java.util.concurrent.ConcurrentHashMap;

public class StringLock {

  private final ConcurrentHashMap<String, Object> lockedStrings =
      new ConcurrentHashMap<String, Object>();

  public void lock(String s) {
    Object lockObject = null;
    while (true) {
      lockObject = lockedStrings.get(s);

      if (lockObject != null) {
        synchronized (lockObject) {
          // recheck that the object we synchronized on is still the
          // one mapped for this string (unlock may have removed it)
          Object lockObject2 = lockedStrings.get(s);
          if (lockObject2 == lockObject) {
            // someone else holds the lock - wait to be notified
            try { lockObject2.wait(); } catch (Exception e) { }
          }
        }
      } else {
        lockObject = new Object();
        Object lockObject1 = lockedStrings.putIfAbsent(s, lockObject);
        if (lockObject1 == null) {
          return; // our object went in - the lock is ours
        }
        // lost the race to another thread - loop and try again
      }
    }
  }

  public void unlock(String s) {
    Object lockObject = lockedStrings.get(s);
    synchronized (lockObject) {
      lockedStrings.remove(s);
      lockObject.notifyAll();
    }
  }
}

Tuesday, August 21, 2012

I Further Predict the Death of Your Web Framework

In my last post, I philosophized about how new technology is going to change the bottleneck of web (and other) systems (which, despite everything else, have remained surprisingly stable for a while).

This spelled the demise of many systems that relied on a given system bottleneck, specifically, slow runtime systems.

But there is another technological shift conspiring against many web frameworks that isn't focused on performance, but instead focused on "ease of use" - which in many cases may hit far closer to home.

That shift is the reorganization of MVC.

MVC stands for "model-view-controller" which loosely means you have a datastore/database (the model) which is retrieved and manipulated (by the controller) such that it can finally be shown to a user (the view).

That is pretty much always the flow. Well - kinda. The first thing you might notice is that "MVC" is an out-of-order acronym per the dataflow. In that order it would be "MCV". And happily, given that dataflow is paramount to my story, I'll use that in the rest of this article (which might irk you if you're CDO (which is like "OCD", except in alphabetical order, like it SHOULD BE)).

A predecessor to MCV was the simpler idea of "client/server", where the client was the view and the server was the controller and model (or, some or all of the controller could be in the client too). However, the client in that case was a real client - that is, a program that received the data and showed it.

In the web, the browser is the client, but interestingly in things like Rails, Jails, Nails, Grails, Struts, Play!, PHP, ASP.Net, and many others the "view" is on the server which then renders HTML and sends that to the browser. As far as the programmer is concerned, the whole MCV is on the server. The browser is often just a dumb terminal.

In the last year or two however, the popularity of a new type of framework is changing all that.

That change is coming from libraries such as backbone.js and ember.js (and many, many others).

Those libraries allow you to render views (not just show, actually render) in the browser itself. In addition, they let you leverage a lot of javascript magic in the browser. This is pretty awesome for several reasons.

The computing power of rendering is moved to the client's machine. Rendering probably isn't your biggest computing expense, but take that cost off your server (times every web request you get) and it's measurable.

And as you can imagine, if the "V" of MCV actually migrates to the client, all that's left on the server is "MC" (to be fair, sometimes even part of the "C" goes to the client).

What thousands and thousands of Rails developers discovered upon moving to backbone is that they no longer needed their fancy template views. Their backend became a system that pushed JSON over HTTP.

Very clean and very simple. At my new company Refresh (we're hiring!), our backend pushes the exact same JSON to our webpage as it does to our iOS app. And that same system will someday seamlessly become our API too.
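As a sketch of that shape (the payload and field names here are hypothetical, not Refresh's actual API), the server-side "MC" layer reduces to producing one JSON string that every client - web page, iOS app, or future API consumer - receives identically:

```java
public class JsonBackend {
  // One serializer serves every client; no server-side HTML template
  // layer at all. Rendering happens on the client.
  static String profileJson(long id, String name) {
    return "{\"id\":" + id + ",\"name\":\"" + name + "\"}";
  }

  public static void main(String[] args) {
    // The same bytes would go over HTTP to the browser and the app.
    System.out.println(profileJson(42, "mailinator-guy"));
  }
}
```

In a real system you'd use a JSON library rather than string concatenation, but the point stands: the server's job shrinks to manipulating and pushing JSON.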

For me, using Rails for webapps over Java (where I spent plenty of time years ago) was a simple decision. ActiveRecord was beautiful and elegant (especially compared to things like Java's hibernate). Also, the view layer was simple, well-laid-out, and standardized. If anything, Java had too many choices.

But these days, I tend to use NoSQL on the backend. And ember on the front-end. All I need in the middle is something to manipulate and push JSON. Why was I paying the Rails tax? (again insert any language that is a multiple slower than Java in that sentence).

I'm not particularly picking on Rails - it is just a full MCV solution that I no longer need. There are plenty of those.

And if you're thinking this is a win for Node.js - you're probably right. With much more javascript coding entering your web framework as a whole, using Node on the backend is probably the winner of all this on the usability front. Javascript on the server isn't the fastest, but it's pretty darn good at manipulating JSON (and thank you to whoever it was that shot XML dead).

So my not-so-amazing prediction is that in a few short years, full web frameworks in any language disappear. Node picks up some of that slack, but so do less feature-ful (and maybe more performant) frameworks. Even non-frameworks altogether get more use.

There's surely no mourning required here. Web frameworks change every few years no matter how you slice it. But between this post and my last, I see two converging fronts out to kill some of our most popular ones right now.

Personally, I'm hoping to never server-side render HTML again. I'll let your browser do my rendering while I sit back, chill, and push some JSON.

And yes, Mailinator is in rewrite now to use ember, much to the chagrin of web scraping programs everywhere! (but much to the happy of JSON receivers)

Monday, August 13, 2012

Your Bottleneck is Dead. Long Live Your Bottleneck.

There's an old joke that, if you think about it, you can apply directly to system bottlenecks.

Two hikers are walking through the woods when they come face-to-face with a pack of wolves. One of the hikers immediately drops to the ground and hastily changes from his hiking boots to the running shoes he had in his backpack.

The 2nd hiker says, "What are you doing?! You can't outrun those wolves!"

The 1st replies, "I don't have to outrun those wolves. I just have to outrun you."

Web developers tend to know their system's biggest bottleneck, but how often do you know the 2nd biggest one? Right, in one sense it doesn't matter - because the 2nd bottleneck doesn't get to become a bottleneck unless it's the biggest one.

There is an economic model hidden within every complex system. This includes something as mundane as web system performance. Knuth famously said (or re-said) that "premature optimization is the root of all evil" which could be restated as - if you optimize before you know what needs it, you're optimizing (and probably breaking) the wrong thing.

Hence we don't (or aren't supposed to) optimize what doesn't need it. Seems obvious - but it has interesting ramifications.

When something doesn't need optimizing, we can afford to be (and often tend to be) lazy with it when it comes to performance. Concretely, if your code's database access is going to take 15 milliseconds, worrying that processing that data will take 20 microseconds because of your sloppy n^2 algorithm probably isn't worth much thought.

If that statement raises your ire, feel free to sit in your chair and pout - because there are thousands of websites that were happily coded using notepad with interpreted and dynamic scripting languages that flagrantly use gotos and lists as if they were hashmaps. I've seen it. It's enough to turn your stomach. It's not pretty.

For the average, everyday web hardware ecosystem - we have CPU power to spare. And in the bigger business sense, if I can save time developing a website cutting performance-concerned corners with no ramifications, all the better.

Web development largely started in scripting languages (e.g. Perl CGI, PHP). Again for the same reason - CPU to spare as compared to other bottlenecks.

In fact, I'll go so far as to say that the popularity of scripting language web frameworks required the condition that disks be some order of magnitude slower than CPUs. That's right - I'm looking at you Rails, Grails, Nails, and Jails.. (ok, not Jails, it's a Java web framework but it rhymed).

Java web frameworks added a lot of structure, verbosity, and performance that simply wasn't needed (and eventually, amazing bloat). If your bottleneck was the database/disk - your web processing simply had to not add significantly to that - and regardless of the language, that wasn't hard.

A simple definition of latency is the time it takes to get data back after requesting it. Similarly (to a program, anyway), bandwidth could be viewed as how long it takes to get all the data requested (once you start getting any).

Think of how that relates to code performance. If your latency is 3ms (a reasonable server harddisk seek time) - it doesn't matter if your code is hand-loved machine language or interpreted COBOL - it does nothing for that 3ms. In CPU time, 3ms is an eternity.
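Some rough arithmetic behind that "eternity" (the 1 GHz clock is an assumed round number for illustration; modern CPUs are faster, which only widens the gap):

```java
public class SeekCycles {
  // CPU cycles that fit inside one disk seek.
  // A clock of 1 MHz is exactly one cycle per microsecond, so
  // cycles = seek time in microseconds * clock in MHz.
  static long cyclesPerSeek(long seekMicros, long clockMHz) {
    return seekMicros * clockMHz;
  }

  public static void main(String[] args) {
    // 3 ms spindle seek at 1 GHz: 3,000,000 cycles go idle
    System.out.println(cyclesPerSeek(3000, 1000));
    // 100 microsecond SSD seek at 1 GHz: "only" 100,000 cycles
    System.out.println(cyclesPerSeek(100, 1000));
  }
}
```

Three million idle cycles per seek is why the language interpreting your 3ms wait simply doesn't matter - until the seek gets 30x faster.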

As a general tendency however, the more data you receive, the more processing that likely goes around it. Consider a few megabyte JSON message - at a minimum it will likely be parsed. Possibly shoved into a map or an object.

Said another way - lowering latency and increasing bandwidth will tend to put more pressure on processing data (i.e. requiring more CPU/code performance)

So all this time we've been happily and harshly slowed down by slow things like spindly harddisk drives and networks. Then, in walk Solid State Drives. Prices and capacities are both heading in the normal directions for new technology (down and up, respectively).

Latency goes from a standard spindle drive's 3ms seek to (by varying reports) a 100-microsecond seek.

Argue the specifics if you will, but for some number of existing systems, installing an SSD will remove the database as the primary bottleneck. In fact, this is probably the cheapest way to improve your system's performance today.

What happens to the bottleneck in those systems? It will shift somewhere else (i.e. the SSD put on its running shoes). In many cases, it will shift to the CPU (CPU in this case is a polite way of saying "your code").

Every day across the world, there are meetings at companies complaining about the performance of their website. Today, many of those say "get the DBA in here".

In some of those meetings soon, the shift will be away from blaming the database. Some will push for code optimization (postmature), some for bigger hardware, and some for faster languages.

Keep in mind, this is a subtle, slow moving effect. Having your CPUs pegged all the time might not make you change anything today but may make you reconsider your architecture next time you build something.

Of course the network is a bottleneck too - at least for now. In places like Korea and Kansas City that's not so true. If you haven't heard, if you live in Kansas City you can get Google Fiber to your home. In other words, your internet speed will be 100 times faster than the average internet in the US. (In fact, if your machine has the common SATA2 disk drive interface, sending a file to your neighbor across town in Kansas City will only take about 3 times as long as storing it on your own disk just a few inches away).

Here's another prediction - in 5 years the phrase "downloading a movie" won't exist. (We used to say we were "downloading an image" which was preceded by us saying we were just "downloading").

If bandwidth drastically increases, it will change how we write code. We think nothing of loading a 1M webpage now which 10 years ago was offensive. In the future, we may think the same thing about a 100M webpage.

Given that data expands to fill available bandwidth (modified Parkinson's Law) our programs will tend to process much more data. Processing speed will matter more and more.

And the more often code becomes the bottleneck, the more often solutions to fix that will be considered.

Simply - your favorite bottlenecks might be changing. And for that to happen, your disk doesn't necessarily need to be able to outrun your CPU - it just has to be able to outrun your code. (And it wouldn't hurt if it could also outrun, you know, wolves too.)

My startup Refresh is looking for awesome iOS and front-end engineers. Join us!

Wednesday, May 16, 2012

The love and hate of Node.js

I've been in the happy position lately of interviewing engineers for my new start-up. If you've read any of my previous articles or seen my talks (Talk Notes (warning: PDF download)) you know I love this stuff (Jobs page).

I'm always up for a spirited discussion about algorithms or languages with smart people, but I do consider too much technical religion to be a red flag and it seems to be a rather common affliction.

So when I started interviewing recently, I was immediately reminded of the blind loyalty sometimes given to pieces of technology. As if all other competing languages/frameworks/variable-naming-schemes are "crap", where "crap" is loosely defined as, well, I'm not sure - but something the person saying "crap" sure doesn't like. It's probably safe to say that any popular technology has (or had) a useful purpose at one point, and I think it's also safe to say that the same piece of technology is not always the right solution.

Along with the perpetual Hacker News debates, I right away ran into a "Node Guy". That is, a guy maybe just a tad overzealous about using Node.js for, you know, everything.

I asked him why he chose Node for his last project. The answer was "Because it scales super well".

I will say, the marketing hype around Node is pretty good. I am not saying his answer was wrong. It wasn't. But it's pretty similar to answering the question "Why do you want to buy a Porsche?" - with the answer "Because it's fast".

Likely true, but by no means a distinguishing feature.

It isn't hard to find discussions in the developer community defending the performance of Node. Node, at its core, is a C++ server. That part is likely competitively fast. But after that, the focus is on the performance of the JavaScript callouts. Someone told me that "microbenchmarks" aren't fair to Node and don't show the true performance. I think they're right in both cases - because microbenchmarks likely involve only a small amount of JavaScript. Truth is, the more JavaScript your Node application runs, the more it's going to lose against server frameworks built in faster languages. In other words, microbenchmarks are probably the best Node is ever going to look.

Google's V8 JavaScript engine literally enabled Node to exist at all.

There are of course another set of people (the "Node haters") who are nothing short of incensed by the idea of Node. To them, it feels rather silly to create a server framework in a language like JavaScript. I can relate to someone who has spent years eking out all possible performance from a C++ server, only to watch someone write one in JavaScript and claim speediness.

In the early days of any field, science, invention, and engineering must overlap. That is, folks think up the science, try it, and piece it together to see if it works - however rickety. Eventually, however, enough tools and best practices exist to allow details to be abstracted.

When that happens, many more people can create cool things with existing and tested pieces (i.e. engineer them together). Simply, you need to worry less about the details of the science to get things done. People with no knowledge of the underlying science can glue widgets together to make something - often amazing things. At that point, you might consider that the "science" is beginning its evolution toward being an "art".

Possibly the quintessential computer science course is something like "Algorithms & Data Structures". Do you need that course to develop apps these days? Again, by proof of existence - I think not. If you have a phone interview with me, I will ask you the difference between a growable array (aka Vector, ArrayList, etc.) and a linked list. Both structures do arguably the same thing, but with notably different performance characteristics.

It's quite hard to create any application without using some form of list, but as long as you have a "list" you know you can get the job done. I promise you there are thousands of sites and apps using the wrong list at the wrong time and incurring nasty linear-time searches or insertions in nasty places. Truth is, if that never shows up as a measurable bottleneck, then one could argue that despite the algorithmic transgression, that code is "good enough".
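A tiny sketch of that interview question (my own example; "wrong list" here means wrong for the access pattern): both lists end up holding identical data, but the same loop costs very differently.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListChoiceDemo {
  // Build [n-1, ..., 1, 0] by always inserting at the front.
  // LinkedList does each front insert in O(1); ArrayList must shift
  // every existing element over, making the whole loop O(n^2).
  static List<Integer> frontInsert(List<Integer> target, int n) {
    for (int i = 0; i < n; i++) {
      target.add(0, i); // O(1) for LinkedList, O(n) for ArrayList
    }
    return target;
  }

  public static void main(String[] args) {
    List<Integer> linked = frontInsert(new LinkedList<>(), 5);
    List<Integer> array  = frontInsert(new ArrayList<>(), 5);
    System.out.println(linked);               // [4, 3, 2, 1, 0]
    System.out.println(linked.equals(array)); // true - same result, different cost
    // Indexed reads flip the trade-off: array.get(i) is O(1),
    // while linked.get(i) walks the chain.
  }
}
```

Both versions are "correct" - which is exactly why the wrong choice survives until it shows up as a bottleneck.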

Happy or sad, "good enough" is getting "good enougher" all the time. CPUs are fast and getting faster, covering the tracks of slow code. We've never lived in a better time for algorithmic indifference. Comparatively, disks are slow, which makes databases slow, which makes the performance of the algorithms and languages you choose in your app less important than ever (not to be confused with "unimportant"). In fact, I'd argue that the entire resurgence of interpreted, dynamic languages can be traced back to the lackadaisical 5ms seek times of spinning disk drives. That's a bold statement and probably a whole other article - but if disks/databases are basically always the bottleneck (rather true in most web apps), who cares how fast your language runs?

(disclaimer: If you've read anything else I've ever written, you know I'm merely an observer of this trend, not a subscriber)

The controversy over Node is that it implies that developers from the client side are piercing into the server - a domain typically full of people who came up from the OS layer. Those people are asking: does it really make sense to write servers in a historically slow client language?

Further, and possibly a bit more personally, should people who only know client languages be writing servers at all? Server code is a unique skill, just like many other things. Dabbling in writing servers is like me dabbling in web design - trust me, it's not pretty. There are only so many lower levels left - would you want a JavaScript operating system?

On the notable other hand - people who only know or love that client language have been given a whole new freedom and ability. They'll argue (with a rather reasonable defense) that Node.js represents one of the easiest ways to create a server or web app. Even if they don't defend the performance, in many practical cases they don't need to - like it or not, at the right time it can be "good enough" (again, proof by existence). It's positively no wonder they defend Node. They are defending their newly found, wondrous abilities that can solve real problems. Wouldn't you?

So as my information-less friend said - Node will scale. But that is, indeed, information-less. So can Ruby, Rails, Java, C++, and COBOL - architectures scale; languages and servers don't. Just like most web apps, a Node application will probably be bottlenecked at its database. You can tell yourself that Node itself is "insanely fast", but you'd be fooling yourself (Java/Vert.x vs. Node, Java/Jetty vs. Node, Node vs. lots), and rest assured that despite scaling, some portion of your latency is baked into your language/framework performance. Your users are paying milliseconds for your choice of an interpreted and/or dynamic language.

Should your start-up use Node? That depends on a lot of things. If history is a teacher, however, massive success will likely push you to something statically typed. Facebook "fixed" PHP by compiling it to C++, and Twitter painfully (after years of fail whales) migrated from Ruby to Scala/Java. Square has migrated to JRuby to improve performance; I'll be interested to watch whether it's enough (I'm feeling yet another article coming on the nefarious demons of drop-in replacing a global-interpreter-locked Ruby with a truly multithreaded one).

The fight over Node is, in truth, one of the least truly technical developer fights I've seen. People on both sides are simply defending their abilities and maybe their livelihoods - the technical points are rather obvious. I'd say Node is definitely a possible solution for some non-trivial set of problems - then again, I can think of plenty of situations I'd also veer away from it. But of course - I'm not very technically religious - and I'm definitely not a "Node Guy".

All this being said - I am seriously hiring across the stack (including mobile) at my new start-up. If you have a desire to argue with me about this sort of stuff on a daily basis - email me and attach a resume.

This article was spawned from my own comment at William Edwards' blog.