Monday, 21 May 2012

Measuring latency in Java
I noticed a common pattern of Java code that attempts to measure latency within a Java process, which will give false measurements of elapsed time.

long start = System.currentTimeMillis();
doSomething();
long end = System.currentTimeMillis();
long time = end - start;


The problem with this snippet is that it measures latency using adjusted wall clock time. The wall clock continuously drifts from the correct time, and the operating system periodically applies a delta to skew it back into synchronization with real time; that adjustment can distort your measurement.

The fix is to replace System.currentTimeMillis() with System.nanoTime(): the former includes the drift adjustments, while the latter measures elapsed time without the skew. For example, the operating system could skew the wall clock by +20 milliseconds, adding an illusory 20 milliseconds of latency to the measurement.

The lesson from this is that if you need to measure elapsed time in Java, ALWAYS use System.nanoTime().

long start = System.nanoTime();
doSomething();
long time = System.nanoTime() - start;
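A complete sketch of this pattern, with doSomething() stood in by a hypothetical Thread.sleep() workload (nanoTime() values are only meaningful as differences, so the result is converted with TimeUnit for display):

```java
import java.util.concurrent.TimeUnit;

public class LatencyTimer {

    // Placeholder workload; doSomething() stands for whatever you are measuring.
    static void doSomething() throws InterruptedException {
        Thread.sleep(50);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        doSomething();
        long elapsedNanos = System.nanoTime() - start;

        // nanoTime() has no relation to wall clock time; only the delta is meaningful.
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(elapsedNanos);
        System.out.println("elapsed: " + elapsedMillis + " ms");
    }
}
```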

2 comments:

  1. One caveat: Java nanoTime measurements must be taken on a single thread. On a multicore x86 box, nanoTime is not necessarily consistent across cores (but probably quite close...).

    ReplyDelete
  2. Very good point, but it seems a grey area to confirm whether it's true. I suspect the latest CPUs, like Intel Ivy Bridge, have ways of measuring elapsed time consistently across cores, but it depends on whether the underlying OS kernel version takes advantage of the corresponding instruction set. Java makes OS-dependent calls, so - I guess - it depends on the kernel-land code being executed.

    ReplyDelete