Friday, February 1, 2008

More on the "pen1s" post

In my post about searching 185 emails per second in Mailinator, I think I failed to adequately explain its goal. Its intent was to document a thought process - it was not intended to be, nor did it try to be, a taxonomy of search algorithms (I gave some very nice links if you're looking for that). I really don't think the world needs another taxonomy of string searching algorithms, or at least not one written by me. Nor does it need another academic analysis of any existing algorithm - there are plenty of those out there (and where appropriate, I linked Wikipedia in that post to describe specific algorithms).

Over a long history of reading about algorithms, I realized that I rarely found analyses of algorithms from an engineering viewpoint. I am a recovering academician, but engineering is where you get to tinker with things. I've read many times that Boyer-Moore's best case is O(n/m) - but what's the *real* case? The usual case? How can that influence my use of any of these algorithms? Many searching algorithms display their worst case when you search for a pattern like "AAAA" in text like "AAAABAAAABAAAAAB" - well guess what, I don't often search for that kind of thing in practice.
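To make that concrete, here's a minimal sketch (mine, not code from the original post) of a naive substring search that counts its character comparisons. On a pathological repeated-prefix input, nearly every window burns several comparisons before failing; on typical text, most windows die on the very first character:

```python
def naive_search(text, pattern):
    """Naive substring search that also counts character comparisons."""
    comparisons = 0
    for i in range(len(text) - len(pattern) + 1):
        for j in range(len(pattern)):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:
            return i, comparisons  # full match at position i
    return -1, comparisons

# Pathological: almost every window matches several chars before failing.
print(naive_search("AAAABAAAABAAAAAB", "AAAAAA"))  # → (-1, 36)

# Typical text: most windows fail on the first comparison.
print(naive_search("the quick brown fox", "fox"))  # → (16, 19)
```

Sixteen characters of "AAAA..." text cost 36 comparisons with no match at all, while nineteen characters of ordinary text cost 19 comparisons including the successful match.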

Also, many people don't truly understand Big O notation (my great aunt made incredible homemade donuts, but ask her about algorithmic analysis? not so much). Considering concrete examples, especially realistic cases, can provide insight. For example, tell your favorite non-algorithmic person that there is an algorithm called "bubble sort", and it sorts numbers for you. But in order to sort 100 numbers, it has to look at each of those numbers on the order of 100 times - effectively, it must do on the order of 10,000 comparisons to sort 100 numbers. They'd look at you like you're crazy.
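That claim is easy to check empirically. A quick sketch (mine, not from the original post) of a textbook bubble sort that counts its comparisons - for 100 numbers, this classic formulation does exactly 100·99/2 = 4,950 comparisons regardless of input order, the same order of magnitude as 100² = 10,000:

```python
import random

def bubble_sort(values):
    """Textbook bubble sort (no early exit) that counts comparisons."""
    values = list(values)
    comparisons = 0
    for end in range(len(values) - 1, 0, -1):
        for i in range(end):
            comparisons += 1
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
    return values, comparisons

data = [random.randrange(1000) for _ in range(100)]
sorted_data, comparisons = bubble_sort(data)
print(comparisons)  # always 4950 for 100 items with this variant
```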

That post (and future ones in the series, if I write them) attempts to detail an evolutionary engineering thought process on designing an algorithm for a specific purpose. The article started with the canonical "naive" but undeniably correct solution. When considering any problem, correctness is, in my opinion, an excellent place to start.
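As an illustration (my sketch, not the actual code from that post), the naive-but-correct starting point for multi-term search is simply to run one full scan of the text per term:

```python
def contains_any(text, terms):
    # Naive but undeniably correct: test each term independently.
    # One full substring search per term - slow, but obviously right.
    return any(term in text for term in terms)

# Hypothetical example terms - not Mailinator's actual filter list.
print(contains_any("hello world", ["foo", "world"]))  # True
print(contains_any("hello world", ["foo", "bar"]))    # False
```

Everything that follows in the series is, in effect, an attempt to beat this baseline without losing its correctness.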

I did not "forget" to mention any other searching algorithms. They simply hadn't yet arisen in this thought process (several people mentioned Aho-Corasick, which I didn't mention by name but certainly alluded to when I mentioned the finite state machines of regular expressions). A trie is a basis for Aho-Corasick, but it's definitely not Aho-Corasick (a point others seemed confused about).
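To illustrate the distinction (a sketch of mine, with made-up example terms): a trie just stores the terms as shared prefixes. Aho-Corasick builds on that same structure by adding failure links, turning it into a finite state machine that scans running text without backtracking - a bare trie gives you none of that:

```python
class TrieNode:
    """A bare trie node - no Aho-Corasick failure links."""
    def __init__(self):
        self.children = {}
        self.is_terminal = False  # True if a stored term ends here

def build_trie(terms):
    root = TrieNode()
    for term in terms:
        node = root
        for ch in term:
            node = node.children.setdefault(ch, TrieNode())
        node.is_terminal = True
    return root

def trie_has_term(root, word):
    """Exact-term lookup - a plain trie can do this, but it cannot
    scan running text efficiently the way Aho-Corasick can."""
    node = root
    for ch in word:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return node.is_terminal

root = build_trie(["spam", "span", "scan"])  # hypothetical terms
print(trie_has_term(root, "span"))  # True
print(trie_has_term(root, "spa"))   # False - a prefix, not a stored term
```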

As I said, we're by far mostly interested in the "usual case" running time (which future installments will empirically demonstrate). I was also pretty clear in the last installment that what I had presented so far was *not* what I was using in Mailinator. Some people seemed to miss that (some people seem so excited to comment that they must not have actually read much of the post). I'm not sure how far this thread of blog posts will go - again, I'm documenting my thinking.

And very importantly, I was also clear (and am trying to be clearer) that I am in some sense over-engineering this problem. I know that - that is basically a goal here. I likely don't need the performance I will ultimately end up with. This is not science - it's fun.

So to concisely re-state the purpose of the blog entry - I'm documenting the thought process that coincided with evolving needs for an over-engineered multi-term string searching algorithm. Thought processes almost by nature go down wrong avenues - which isn't bad, it's educational. The interesting parameters are that we have a hundred or so words (expecting that to grow) and rarely find any of them. Engineering analysis is limited compared to academic analysis in that you give specific cases and values - you are really only able to look at one instance of a big problem. But again, that's all I'm trying to do here.

Thus, if you're looking for an in-depth explanation of Boyer-Moore (which some people seemed to want, despite the sentence in the last post where I said I wasn't providing that), or if you want a nice taxonomy of effective string searching algorithms, then you should probably go read something like that. Because this.. wasn't that.

As for other interesting facts: I chose 185 emails/second because it was a relatively representative rate for Mailinator. I truly don't know Mailinator's max rate - I've seen 1200 emails/second and I've seen 10 emails/second. The number 185 was chosen as representative, not as the only possibility.

Finally, as I said, I'm aware of algorithms such as Rabin-Karp, Aho-Corasick, and Wu-Manber (in fact, I see Udi most days at work). From a storytelling standpoint, it wouldn't be much of a tale if I didn't get an interesting result in the end. Aho-Corasick is linear on the search space and Wu-Manber is nicely sub-linear. Not to give it away, but Mailinator's algorithm is also sub-linear. It isn't Wu-Manber and, truth be told, Wu-Manber is notably better in memory use and worst-case running time.
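For a feel of what "sub-linear on the search space" means, here's a sketch of Boyer-Moore-Horspool - a standard skip-based algorithm, not Mailinator's algorithm and not Wu-Manber (which generalizes this skipping idea to many patterns at once). By shifting the window based on the text character under the pattern's last position, it can skip whole stretches of text without ever examining them:

```python
def horspool(text, pattern):
    """Boyer-Moore-Horspool search. Returns (match index or -1,
    number of text characters actually examined)."""
    m, n = len(pattern), len(text)
    # Bad-character table: how far to shift the window when the text
    # character aligned with the pattern's last position is ch.
    shift = {ch: m - 1 - i for i, ch in enumerate(pattern[:-1])}
    examined = 0
    pos = 0
    while pos <= n - m:
        j = m - 1
        while j >= 0:
            examined += 1
            if text[pos + j] != pattern[j]:
                break
            j -= 1
        if j < 0:
            return pos, examined  # full match
        pos += shift.get(text[pos + m - 1], m)
    return -1, examined

text = "in a haystack of mostly ordinary text a needle hides"
idx, examined = horspool(text, "needle")
print(idx, examined)  # finds the match while examining fewer
                      # characters than the text contains
```

That "examined < len(text)" behavior is the sub-linear property in miniature: the longer and rarer the patterns, the more of the text the algorithm never has to touch.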

If you're asking, then, why you should use such a thing? I'm not advocating that you should. Again, the idea here isn't so much the final algorithm (which is really just the punchline). It's really way more about how we get there.
