Tuesday, May 7, 2013

Embedding Jetty9 & Spring MVC

This post is a redo of one of my previous posts, which was about embedding Jetty 7. This time it covers the new version, Jetty 9, and adds support for Spring MVC. I just thought it would be a good idea to keep something like this around as a reference. There is not much text below, because the source is clear enough and doesn't need much explanation. Still, feel free to raise questions in the comments.
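
To give an idea, a minimal Jetty 9 + Spring MVC setup looks roughly like the sketch below (this is only a sketch along those lines, not the exact source of the post; WebConfig stands for a hypothetical Spring @Configuration class holding the MVC setup):

  import org.eclipse.jetty.server.Server;
  import org.eclipse.jetty.servlet.ServletContextHandler;
  import org.eclipse.jetty.servlet.ServletHolder;
  import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
  import org.springframework.web.servlet.DispatcherServlet;

  public class Main {
      public static void main(String[] args) throws Exception {
          // Spring context configured from annotated @Configuration classes
          AnnotationConfigWebApplicationContext context =
                  new AnnotationConfigWebApplicationContext();
          context.register(WebConfig.class); // hypothetical @Configuration class

          // Jetty servlet context with the Spring dispatcher mapped to everything
          ServletContextHandler handler = new ServletContextHandler(ServletContextHandler.SESSIONS);
          handler.setContextPath("/");
          handler.addServlet(new ServletHolder(new DispatcherServlet(context)), "/*");

          Server server = new Server(8080);
          server.setHandler(handler);
          server.start();
          server.join();
      }
  }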

Wednesday, March 27, 2013

AtomicFieldUpdater vs. Atomic

Java 1.5 introduced a new family of classes (Atomic*FieldUpdater) for atomic updates of object fields, with properties similar to the Atomic* set of classes, and there seems to be a slight confusion about their purpose. That confusion is understandable: the reason for their existence is not very obvious. First of all, they are in no way faster than Atomics; if you look at the source, you can see that there are lots of access control checks. Also, they are not handy: the developer has to write more code, understand a new API, etc.

So why would you bother? There are two main use cases where an Atomic*FieldUpdater can be considered an option:

  • There is a field which is mostly read and rarely changed. In that case, a volatile field can be used for read access and an Atomic*FieldUpdater for the occasional updates. Though this optimization is arguable, because there is a good chance that in the latest JVMs Atomic*.get() is intrinsic and should not be slower than a volatile read.
  • Atomics have a much higher memory overhead than primitives: each Atomic* is a separate object with its own header, plus the reference to it. When memory is critical, an Atomic* can be replaced with a volatile primitive combined with an Atomic*FieldUpdater; a minimal sketch of this follows the list.
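
Here is a rough sketch of that second case, assuming a hypothetical Counter class: the value lives in a plain volatile long, reads stay on the cheap volatile path, and a single static AtomicLongFieldUpdater (shared by all instances) is used only when an atomic read-modify-write is needed:

  import java.util.concurrent.atomic.AtomicLongFieldUpdater;

  public class Counter {
      // a plain volatile field: no extra AtomicLong object per Counter instance
      private volatile long count;

      // a single static updater shared by every Counter instance
      private static final AtomicLongFieldUpdater<Counter> COUNT_UPDATER =
              AtomicLongFieldUpdater.newUpdater(Counter.class, "count");

      public long get() {
          return count; // hot path: plain volatile read
      }

      public void increment() {
          COUNT_UPDATER.incrementAndGet(this); // rare path: atomic update
      }
  }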

References:
http://concurrency.markmail.org/message/ns4c5376otat2p54?q=FieldUpdater
http://concurrency.markmail.org/message/mpoy74yhuwgi52fa?q=FieldUpdater

Tuesday, March 12, 2013

Scala: Automatic resource management

After completing the wonderful course by Martin Odersky, I have finally had a chance to have a little play with Scala and create something more useful than a "hello world" app. And even though I had gained some experience with the language just a few weeks before, I felt slightly frustrated. I reckon that's because I have become too dull and silly spending too much time with Java :) The first surprise was realizing that this language has a compiler - with Java it almost doesn't exist: you never 'compile', you 'build', which is a very different kind of thing. With Java you are almost always certain that your code is compilable, because modern IDEs (like IntelliJ) do not give you a chance to leave a compilation error in your code. Another surprise is that the Scala compiler is deadly slow; I have a good feeling that big projects will suffer from it. So, you could say that with Scala it feels like coming back to the good old C++ days :)

Ok, that was the introduction; here is some stuff I wrote, which I am almost sure is just another 'bicycle' (a reinvented wheel), but it was useful for me. After some time with the language, I realized that it doesn't have any standard resource-management construct, which is probably fine for Scala - the language is flexible enough that it lets you build your own without much effort (most of the code is stolen from this post):

  import java.io.{BufferedReader, Closeable, FileReader}

  trait Managed[T] {
    def onEnter(): T
    def onExit(t:Throwable = null)
    // best-effort execution: a failure here (e.g. from close()) is swallowed
    // so that it cannot mask an exception that is already being thrown
    def attempt(block: => Unit) {
      try { block } catch { case _: Throwable => }
    }
  }

  // Runs 'block' with the managed resource and calls onExit exactly once:
  // with the throwable if the block failed, without one otherwise.
  def using[T <: Any, R](managed: Managed[T])(block: T => R): R = {
    val resource = managed.onEnter()
    var exception = false
    try {
      block(resource)
    } catch  {
      case t:Throwable => {
        exception = true
        managed.onExit(t)
        throw t
      }
    } finally {
      if (!exception) {
        managed.onExit()
      }
    }
  }

  def using[T <: Any, U <: Any, R] (managed1: Managed[T], managed2: Managed[U]) (block: T => U => R): R = {
    using[T, R](managed1) { r =>
      using[U, R](managed2) { s => block(r)(s) }
    }
  }

  class ManagedClosable[T <: Closeable](closable:T) extends Managed[T] {
    def onEnter(): T = closable
    def onExit(t:Throwable = null) {
      attempt(closable.close())
    }
  }

  implicit def closable2managed[T <: Closeable](closable:T): Managed[T] = {
    new ManagedClosable(closable)
  }
and the usage looks like this:
  def readLine() {
    using(new BufferedReader(new FileReader("file.txt"))) {
      file => {
        file.readLine()
      }
    }
  }

Monday, February 4, 2013

Evil of microbenchmarking & CAS performance on Ivy Bridge

A few days back Martin Thompson published an investigation into the results of the controversial CAS (compare-and-swap) performance test he made a few months ago. That investigation really impressed me - it shows how microbenchmarking can go really wrong, even when it is done by such a smart guy.

Just to recap, the test ran several threads which were hammering the CPU with CAS operations. It showed that on average a CAS on a modern Ivy Bridge processor works significantly slower than on the older Nehalem architecture. A few months later Martin found the reason for this strange behaviour, and the amazing thing about it is that the test is slower precisely because Ivy Bridge is actually faster.
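
To give a feel for the shape of such a test, here is a rough sketch of the pattern (not Martin's original code): several threads racing to push a shared AtomicLong to a target value with CAS, timing how long it takes:

  import java.util.concurrent.atomic.AtomicLong;

  public class CasSketch {
      private static final long TARGET = 100_000_000L;
      private static final AtomicLong counter = new AtomicLong();

      public static void main(String[] args) throws Exception {
          int threads = Integer.parseInt(args[0]);
          Thread[] workers = new Thread[threads];
          long start = System.nanoTime();
          for (int i = 0; i < threads; i++) {
              workers[i] = new Thread(new Runnable() {
                  public void run() {
                      long v;
                      // a failed CAS means another core won the race for the cache line
                      while ((v = counter.get()) < TARGET) {
                          counter.compareAndSet(v, v + 1);
                      }
                  }
              });
              workers[i].start();
          }
          for (Thread worker : workers) {
              worker.join();
          }
          System.out.println(threads + " threads: "
                  + (System.nanoTime() - start) / 1_000_000 + " ms");
      }
  }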

To understand why that happens, let's see what's going on when a CAS is executed. Generally speaking, at a high level, from a CPU core's point of view the memory that is about to be written can be in two states: the core either exclusively owns the cache line containing it, or it doesn't. If it owns that line, the CAS is extremely fast - the core doesn't need to notify other cores to do the operation. If the core doesn't own it, the situation is very different - the core has to send a request to fetch the cache line in exclusive mode, and such a request requires communication with all the other cores. That negotiation is not fast, but on Ivy Bridge it is much faster than on Nehalem. And because it is faster on Ivy Bridge, a core has less time to perform a run of fast local CAS operations while it owns the cache line, therefore the total throughput in the test is lower.

I suppose there is a very good lesson here - microbenchmarking is tricky and not easy to do properly, and the results can easily be interpreted in the wrong way. So, be careful!