Monday, June 8, 2009

On Java Visibility

A semi-famous example of broken Java synchronization is this:

class SomeClass {

  private boolean keepGoing = true;

  public boolean get() {
    return keepGoing;
  }

  public synchronized void set(boolean x) {
    keepGoing = x;
  }
}

I believe this example is in Josh Bloch's book "Effective Java" (which, I now notice, I somehow lost in my most recent move - don't tell Josh).

The idea here is that someone (ostensibly) thought they'd save some performance by synchronizing only the setter and not the getter. There's definitely a cost to synchronization, and needless to say, getting doesn't change anything - so why pay that cost for gets?

As has been pointed out long before this post, without synchronization on the getter there is no guarantee of visibility when doing the get. That is, if one thread calls set(false), there's no guarantee that any other thread will ever see that it happened.
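One way to restore that guarantee without locking either accessor is to make the field volatile: under the Java Memory Model, a volatile write happens-before every subsequent volatile read of the same field. A minimal sketch (the class name is mine, not from the original example):

```java
class SomeClassVolatile {

  // volatile: every read is guaranteed to see the most recent write
  // by any thread, and the JIT may not cache the value across reads
  private volatile boolean keepGoing = true;

  public boolean get() {
    return keepGoing;
  }

  public void set(boolean x) {
    keepGoing = x;
  }
}
```

For a simple flag like this, a volatile field is typically cheaper than synchronized, since neither path acquires a lock.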

Consider code that might use the code above:


class SomeOtherClass {

  SomeClass keepGoing = new SomeClass();
  int x;

  class Thread1 implements Runnable {
    public void run() {
      while (keepGoing.get()) x++;
      System.out.println("done1");
    }
  }

  class Thread2 implements Runnable {
    public void run() {
      keepGoing.set(false);
    }
  }
}


Let's say you start a Thread1 running. Of course, keepGoing.get() is true, so it just keeps looping along. Then let's say an eternity (or maybe 2 seconds) later you start Thread2.

If we had reliable visibility, Thread1 would end the moment after Thread2 sets keepGoing to false.

If you've read Josh's book, it's no surprise that it doesn't work that way. Specifically, Thread1 doesn't end. It keeps going. Thread2 finishes happily and Thread1 never ends.
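A hypothetical driver for reproducing this (names and timings are mine; whether the loop actually hangs depends on the JIT having compiled it, so results can vary by VM and options):

```java
// Sketch: same shape as SomeClass/SomeOtherClass above, inlined so it
// runs standalone. The worker is a daemon thread so the JVM can exit
// regardless of whether the loop ever terminates.
class VisibilityDemo {
  private boolean keepGoing = true;
  private int x;

  boolean get() { return keepGoing; }                 // NOT synchronized
  synchronized void set(boolean b) { keepGoing = b; }

  public static void main(String[] args) throws InterruptedException {
    VisibilityDemo d = new VisibilityDemo();
    Thread t1 = new Thread(() -> {
      while (d.get()) d.x++;
      System.out.println("done1");
    });
    t1.setDaemon(true);
    t1.start();
    Thread.sleep(2000);   // "an eternity (or maybe 2 seconds)"
    d.set(false);         // Thread2's job
    t1.join(1000);        // give Thread1 a chance to notice
    System.out.println(t1.isAlive() ? "Thread1 is still spinning"
                                    : "Thread1 ended");
    System.out.println("main exiting");
  }
}
```

The last println is deterministic; the one before it is exactly the coin flip this post is about.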

The only interesting part to me was that this failed the same way every time. Always. Adding to the complexity of visibility concerns is that memory is "leaky" - that is, even without guaranteed visibility you often get unreliable visibility anyway.

So, I dug a little deeper. If you're adventurous enough to grab a debug build of the JDK and run it with the -XX:+PrintOptoAssembly option, you get to see the optimizations the JIT is making. That is, you see the assembly-code version of your Java code, post-optimization. Check out Kohsuke Kawaguchi's blog for some instructions.
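For reference, the invocation is roughly this (a sketch; -XX:+PrintOptoAssembly is a develop flag, so it needs a debug/fastdebug HotSpot build, and the class name here is the one that appears in the dump):

```shell
# Only works on a debug build of HotSpot; the product JVM
# does not accept this flag.
java -server -XX:+PrintOptoAssembly manybrain.test.Main
```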

So here's Thread1's loop code after JIT optimization at runtime:

02c movq R10, precise klass manybrain/test/Main: 0x0000000040a50968:Constant:exact * # ptr
036 movsbl R8, [R10 + #596 (32-bit)] # byte ! Field manybrain/test/Main.keepGoing
03e testl R8, R8
041 je,s B4 P=0.000000 C=147944.000000
041
043 B2: # B3 <- B1 Freq: 1
043 movl R8, [R10 + #592 (32-bit)] # int ! Field manybrain/test/Main.x
043
04a B3: # B3 <- B2 B3 top-of-loop Freq: 1e-35
04a incl R8 # int
04d movl [R10 + #592 (32-bit)], R8 # int ! Field manybrain/test/Main.x
054 testl rax, [rip + #offset_to_poll_page] # Safepoint: poll for GC # manybrain.test.Main$Thread1::run @ bci:14 L[0]=_
# OopMap{r10=Oop off=84}
05a jmp,s B3
05a
05c B4: # N53 <- B1 Freq: 4.76837e-07
05c movl RSI, #27 # int
061 nop # 2 bytes pad for loops and calls


If you're old school, welcome home.

If you're not, then this might look like a lot of goop. So let's parse out just the interesting parts.

Line 41 is a conditional jump. Basically, if keepGoing (per the comment on line 36) is false, we jump to line 5c (label B4) and end the code segment. You and I know that keepGoing started out true, so that jump isn't taken.

Lines 43-5a are the loop that does x++.

And check out line 5a. That's an unconditional jump back to the top of the loop. So what does all this mean? The JIT did some very aggressive optimization. In fact, remember our original loop from Thread1?


while (keepGoing.get()) x++


The JIT has optimized this to:

if (keepGoing.get()) {
  while (true) x++;
}


No wonder the loop never ends. You've got bigger problems now than a little leaky visibility. I'm positive I'm oversimplifying, but effectively the JIT saw that your get method wasn't synchronized and assumed it could optimize accordingly: if you didn't guarantee visibility, it didn't need to either. Add the synchronized modifier to the get method and all this badness won't happen.
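Spelled out, the fix is a one-word change; the main method is my own quick check (with both accessors synchronized, the monitor release/acquire gives the happens-before edge, so the loop is guaranteed to terminate):

```java
class SomeClassFixed {

  private boolean keepGoing = true;

  // synchronizing the getter too means the JIT can no longer
  // hoist the read out of a caller's loop
  public synchronized boolean get() {
    return keepGoing;
  }

  public synchronized void set(boolean x) {
    keepGoing = x;
  }

  public static void main(String[] args) throws InterruptedException {
    SomeClassFixed flag = new SomeClassFixed();
    Thread t1 = new Thread(() -> {
      while (flag.get()) { /* spin */ }
      System.out.println("done1");
    });
    t1.start();
    Thread.sleep(200);
    flag.set(false);   // now guaranteed visible to t1
    t1.join();         // returns, because the loop must see the write
    System.out.println("done2");
  }
}
```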

The moral of the story, much like the inevitable topics discussed at a lunch with Jeremy Manson, is that you can't optimize away correct synchronization; all you'll probably do is optimize away the correctness.

5 comments:

matt said...

Don't forget you could declare it volatile.. It would probably be a better solution than using synchronise

Jeremy Manson said...

That's right! Josh and Neal Gafter pointed out that recent Hotspot versions do this in a recent talk at Google. We'll probably make that talk available online at some point.

Dror Harari said...

Would the Hotspot compiler also plant the memory barrier instruction in addition to avoiding that aggressive optimization Tyma mentioned? I ask this because even if one gets visibility, there might still be the question of order of events and without a memory barrier, there could still be a time period when the get() will return true after set(false) was called.

Mircea said...

Sounds to me more like whoever wrote the compiler/optimization does not know the meaning of 'variable' which keepGoing is declared as. In other words it's not an expression evaluated to a constant or simply not a constant at all that may be substituted in other expressions.

Ps. mailinator is my favorite spam eater.

Lari Hotari said...

Additional link to my previous comment:

It's also possible to disable hardware prefetching programmatically:
http://stackoverflow.com/questions/784041/how-do-i-programatically-disable-hardware-prefetching

One stackoverflow answer has a link to PAPI http://icl.cs.utk.edu/papi/ which can be used to show low-level performance bottlenecks.