Monday, February 18, 2008

Kill the myth please. NIO is *not* faster than IO

I'm giving 3 talks at the Software Developer's Conference West in Santa Clara, CA on March 3-7.

One of them is a tutorial on how to interview in Silicon Valley (fun stuff).

Another is how to write high-performance servers in Java. Specifically, why NIO is really not the best way anymore (post-2.6 Linux kernels and NPTL) and why multithreaded I/O is the new old way of doing things. It's a fun talk and largely discusses the internals of Mailinator and how it runs a few thousand simultaneous threads without breaking a sweat.
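The thread-per-connection model the talk argues for fits in a few lines. Here's a minimal sketch; the class and method names, the 8 KB buffer, and the byte counter are our illustrative choices, not Mailinator's actual code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

// Thread-per-connection sketch: every accepted socket gets its own
// thread doing plain blocking reads.
public class BlockingServer {
    final ServerSocket server;
    final AtomicLong bytesRead = new AtomicLong(); // stand-in for real message handling

    BlockingServer(int port) throws IOException {
        server = new ServerSocket(port);
    }

    int port() { return server.getLocalPort(); }

    // Accept loop: spin up one reader thread per connection, forever.
    void run() throws IOException {
        while (true) serveOnce();
    }

    // Accept a single connection and hand it to a dedicated thread.
    void serveOnce() throws IOException {
        final Socket client = server.accept();           // blocks
        new Thread(new Runnable() {
            public void run() {
                try {
                    InputStream in = client.getInputStream();
                    byte[] buf = new byte[8192];
                    int n;
                    // read() blocks; NPTL parks and wakes the thread cheaply
                    while ((n = in.read(buf)) != -1) {
                        bytesRead.addAndGet(n);          // "handle" the message
                    }
                } catch (IOException ignored) {
                } finally {
                    try { client.close(); } catch (IOException e) { }
                }
            }
        }).start();
    }
}
```

No selectors, no readiness events, no partial-read bookkeeping: the kernel scheduler does the multiplexing for you, which is exactly the point.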

On top of that, as it turns out, in pure throughput, IO smokes NIO in all tests I tried. And I'm not alone - Rahul Bhargava of Rascal Systems did a very nice analysis of this and posted it, sadly, in some forums at theserverside.com.

The post is solid gold and I'm posting it below just for posterity as I'd hate to see those forums come down some day and I lose access to that post. Incidentally, I disagree with Rahul that "fewer threads are easier to debug". It only takes 2 to make a deadlock or race condition.

The original URL for this post is:
http://www.theserverside.com/discussions/thread.tss?thread_id=26700

And like I said, it was written by Rahul Bhargava of Rascal Systems. Here it is:

-------------------------------------------------

I have been benchmarking Java NIO with various JDKs on Linux. The server is
running on a 2-CPU 1.7 GHz box with 1 GB RAM and an Ultra160 SCSI 36 GB disk.

With Linux kernel 2.6.5 (Gentoo) I had NPTL turned on and support for
epoll compiled in. The server application was designed to support
multiple dispatch models:

1. Reactor with iterative dispatch and multiple selector threads. Essentially
the accepted connections were load-balanced between a varying number of
selector threads. The benchmark then applied a step function to experimentally
determine the optimal number of threads and connections-per-selector ratio.

2. A simple concurrent blocking dispatch model was also supported. This is
essentially a reader-thread-per-connection model.
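The core of dispatch model 1 is the selector thread. Here's a minimal sketch of one; the class name, the registration queue, the 8 KB buffer, and the byte counter are our assumptions, not details from the benchmark's code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// One selector thread from dispatch model 1: a reactor multiplexing
// many connections over a single Selector. An acceptor (not shown)
// would distribute accepted channels across several of these loops.
class SelectorLoop implements Runnable {
    final Selector selector;
    final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<SocketChannel>();
    final ByteBuffer buf = ByteBuffer.allocate(8192);
    final AtomicLong bytesRead = new AtomicLong(); // stand-in for real handling

    SelectorLoop() throws IOException {
        selector = Selector.open();
    }

    // Called by the acceptor to hand this loop a new connection.
    void register(SocketChannel ch) {
        pending.add(ch);
        selector.wakeup();   // break out of select() so the loop picks it up
    }

    public void run() {
        try {
            while (true) {
                selector.select();                       // poll()/epoll under the hood
                SocketChannel ch;
                while ((ch = pending.poll()) != null) {  // register on the selector thread
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                }
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isReadable()) {
                        SocketChannel c = (SocketChannel) key.channel();
                        buf.clear();
                        int n = c.read(buf);             // non-blocking read
                        if (n < 0) { key.cancel(); c.close(); }
                        else bytesRead.addAndGet(n);     // un-marshal buf here
                    }
                }
            }
        } catch (IOException e) {
            // a real reactor would log and keep going
        }
    }
}
```

Even this stripped-down version shows where the complexity goes: readiness events, cross-thread registration, and buffer management, all of which the blocking model simply doesn't have.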

The client application opens concurrent persistent connections to the server
and starts blasting messages. The server just reads the messages and does
basic un-marshalling to ensure each message is OK.

Results were interesting:

1. With NPTL on, the Sun and Blackwidow 1.4.2 JVMs scaled easily to 5000+ threads. The blocking
model was consistently 25-35% faster than using NIO selectors. A lot of the techniques suggested
by the EmberIO folks were employed - using multiple selectors, doing multiple (2) reads if the first
read returned the EAGAIN equivalent in Java. Yet we couldn't beat the plain thread-per-connection model
with Linux NPTL.
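The double-read trick mentioned above can be sketched like this. In Java the EAGAIN equivalent is a non-blocking read() returning 0; the class and method names here are ours, not EmberIO's:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class RetryRead {
    // Non-blocking read with one immediate retry: after select() says a
    // channel is readable, read() can still find nothing (0 bytes, the
    // EAGAIN equivalent), so try a second read before going back to the
    // selector.
    static int readWithRetry(SocketChannel ch, ByteBuffer buf) throws IOException {
        int n = ch.read(buf);      // non-blocking; 0 means "nothing right now"
        if (n == 0) {
            n = ch.read(buf);      // second attempt, per the EmberIO suggestion
        }
        return n;                  // -1 on EOF, 0 if the retry also found nothing
    }
}
```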

2. To work around the not-so-performant/scalable poll() implementation on Linux, we tried using
epoll with the Blackwidow JVM on a 2.6.5 kernel. While epoll improved overall scalability,
performance still remained 25% below the vanilla thread-per-connection model. With epoll
we needed a lot fewer threads to reach the best performance mark we could get out of NIO.

Here are some numbers:

(cc = concurrent persistent connections, bs = blocking server mode flag,
st = number of server threads, ct = connections handled per thread,
thruput = throughput of the server)

cc, bs,st,ct, thruput
1700,N,2,850,1379
1700,N,4,425,1214
1700,N,8,212,1240
1700,N,16,106,1140
1700,N,32,53,1260
1700,N,64,26,1115
1700,N,128,13,886
1700,N,256,6,618
1700,N,512,3,184
1700,Y,1700,1,1737

As you can see, the last line indicates that the vanilla blocking server (thread per connection)
produced the best thruput, even with 1700 threads active in the JVM.

With epoll, the best run was with 2 threads each handling around 850 connections in
their selector set. But the thruput is below the blocking server thruput by 25%!

The results show that the cost of NIO selectors coupled with the OS polling mechanism (in
this case the efficient epoll vs. select()/poll()) carries a significant overhead compared to
the cost of context switching 1700 threads on an NPTL Linux kernel.

Without NPTL, of course, it's a different story. The blocking server just melts at 400 concurrent
connections! We have run the test up to 10K connections, and the blocking server outperformed the
NIO-driven selector-based server by the same margin. Moral of the story - NIO arrives on the scene
a little too late. With adequate RAM and better threading models (NPTL), the performance gains
of NIO don't show up.

Sun's JVM doesn't support epoll(), so we couldn't use epoll with it. The normal poll()-based
selector from Sun didn't perform as well. We needed to reduce the number of connections
per thread to a small number (~6-10) to get numbers comparable to the epoll-based selector.
That meant running a lot more selector threads, which kind of defeats the purpose of multiplexed IO.
The benchmarks also dispel the myth created by Matt Welsh et al. (SEDA) that a single-threaded
reactor can keep up with the network. On 100 Mbps Ethernet that was true: the network
got saturated before the server CPUs did, but with a > 1 Gbps network we needed multiple selectors
to saturate the network. A single selector's performance was abysmal (5-6x slower than concurrent
connections).

For applications that want a small number of threads for debuggability etc., NIO may be
the way to go. The 25-35% performance hit may be acceptable to many apps. Fewer threads
also mean easier debugging; it's a pain to attach a profiler or a debugger to a server hosting
1000+ threads :-) . Bottom line: with better MT support in kernels (as Linux already has with NPTL), one
needs to reconsider the thread-per-connection model.

Rahul Bhargava
CTO, Rascal Systems

8 comments:

Anonymous said...

What about kpoll in OS X? I have heard that it can handle concurrency better than epoll on linux. Is there any truth to this?

btw. great tool (mailinator) and great post...

Unknown said...

sun jvm 6 supports epoll according to this doc
http://java.sun.com/javase/6/docs/technotes/guides/io/enhancements.html

Anonymous said...

What about the third talk?

paul said...

The third talk I'm actually doing with Jeremy Manson (father of the Java Memory Model) on "Java Performance Myths".

Again, should be a fun one. It's not an optimization talk per se, it's more of "everyone says synchronization is cheap" - well, is it? We test tons of this stuff (including the basis of this blog post: does asynchronous I/O have worse throughput than synchronous I/O?)

Some answers won't surprise anyone (which is fine) but some just might.
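A deliberately naive sketch of the synchronization question above, assuming made-up names and ignoring the JIT warm-up and dead-code issues a real benchmark must handle:

```java
// Naive shape of the "is synchronization cheap?" experiment: time an
// unsynchronized increment loop against an uncontended synchronized one.
// This illustrates the question, not a trustworthy measurement.
public class SyncCostSketch {
    static final int N = 10_000_000;
    static int plain, locked;

    static synchronized void incLocked() { locked++; }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) plain++;           // no lock
        long t1 = System.nanoTime();
        for (int i = 0; i < N; i++) incLocked();       // uncontended monitor
        long t2 = System.nanoTime();
        System.out.println("plain:  " + (t1 - t0) + " ns");
        System.out.println("locked: " + (t2 - t1) + " ns");
    }
}
```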

Anonymous said...

very very interesting. i've played around with writing a high-perf server for a custom protocol using NIO and it was vastly more complex than it would have been to use blocking io (never quite finished, other more pressing work called). i need to stop trusting trusted sources i think. i probably would have gotten closer to done if i'd used blocking io.... :/ for what it's worth, i agree that dealing with many threads really isn't that difficult, you just have to think through the different sequences that can occur, and never *never* leave something unsafe b/c "this will never happen". 'course, it's all been made much easier with JSR 133, there's just so much you can DO now, fast.

Anonymous said...

I'm curious, I did a micro benchmark and it seems the overhead of creating a thread instead of reusing one through the use of an Executor seemed significant. Have you run into this?

Raoul Duke said...

ja, i don't know the precise numbers, but i always had the feeling since seeing Erlang that it was an existence proof that the problem was with how other systems implemented threads/processes, not with the basic ideas themselves.

devshed said...

So, the one question I have is scale. NIO is supposed to be efficient. Can a server that scales to 2K threads under IO scale to 20K threads under NIO? That's the question on many people's minds.

This Blog has Moved!

The Mailinator Blog has moved to: https://manybrain.github.io/m8r_blog/ Check us out there!