java.util.Random and java.util.concurrent.ThreadLocalRandom in multithreaded environments

by Mikhail Vorontsov

Generating pseudorandom data

There are two types of random generators in Java: pseudorandom and secure. Pseudorandom generators transform a seed into a new portion of pseudorandom data based on some formula. Secure random generators use machine-specific sources of genuinely random events (file/socket access, for example) to generate their data.

Secure random generators:

  • should be used only when cryptographically strong random data is required
  • are slow
  • may block waiting for external events (“get stuck”) if you have requested a lot of random data (Linux /dev/random is an example of such a generator)

Pseudorandom generators, on the other hand, depend only on the initial “seed” value, so you can generate the same sequence of pseudorandom events if you provide the same seed to the algorithm. In the general case, such generators are fast because they are CPU bound only (they do not rely on any IO). We will review the evolution of pseudorandom generators in Java and the reasons behind these changes.
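
For example, two java.util.Random instances constructed with the same seed produce identical sequences:

final Random r1 = new Random( 42 );
final Random r2 = new Random( 42 );
for ( int i = 0; i < 3; ++i )
    System.out.println( r1.nextInt() == r2.nextInt() ); //prints "true" three times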

java.util.Random

java.util.Random has been available since Java 1.0. It is a thread safe class, so you may (in theory) share instances of this class between several threads without expecting to get the same random data in two threads at the same time. Such thread safety is achieved by storing the generator seed in an AtomicLong.

Random uses AtomicLong CAS (compare-and-set) operations for updating its seed. Despite being a light non-blocking primitive used in a lot of lock-free algorithms, CAS behaves really poorly under high contention: every failed update has to be retried. Wait for the test results to see how poorly it behaves.
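
A simplified sketch of Random.next(int) as implemented in the OpenJDK sources (local variable names are shortened here) shows the retry loop which suffers under contention:

protected int next( final int bits )
{
    long oldSeed, nextSeed;
    final AtomicLong seed = this.seed;
    do
    {
        oldSeed = seed.get();
        nextSeed = ( oldSeed * 0x5DEECE66DL + 0xBL ) & ( ( 1L << 48 ) - 1 );
    }
    while ( !seed.compareAndSet( oldSeed, nextSeed ) ); //retried until the CAS succeeds
    return (int) ( nextSeed >>> ( 48 - bits ) );
}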

java.util.concurrent.ThreadLocalRandom

java.util.concurrent.ThreadLocalRandom was added in Java 7 as an attempt to overcome all the performance issues associated with java.util.Random. This new class extends java.util.Random, so you may pass it to any logic expecting the old java.util.Random.

Here are the main ThreadLocalRandom implementation details:

  • Internally it does not use an AtomicLong seed from Random. Instead it uses an ordinary long.
  • You can not create an instance of ThreadLocalRandom yourself, because its constructor is not public. Instead you should use its static factory ThreadLocalRandom.current(). This factory method queries an internal ThreadLocal<ThreadLocalRandom> field and returns the generator bound to the calling thread (see the usage sketch after this list).
  • It is CPU-cache aware, so it uses 8 long dummy fields for padding, thus pushing anything else out of its 64-byte L1 cache line.
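
A minimal usage sketch: always obtain the instance via ThreadLocalRandom.current() on the thread that needs the data, and never cache the returned object in a field shared between threads:

import java.util.concurrent.ThreadLocalRandom;

new Thread( new Runnable() {
    @Override
    public void run()
    {
        //each thread gets its own generator - no contention on a shared seed
        final int value = ThreadLocalRandom.current().nextInt( 100 );
        System.out.println( Thread.currentThread().getName() + ": " + value );
    }
} ).start();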

All of these changes are important, as the following tests will reveal.

Continue reading

Core Java 7 Change Log

by Mikhail Vorontsov


01 Jan 2014 update – covered all Java 7 versions up to Java 7u45.

This article will list performance related changes made in the core part of various Java 7 updates. I will track changes in the following packages:

  • java.lang.*
  • java.io
  • java.math
  • java.net
  • java.nio.*
  • java.text.*
  • java.util.*

These changes have one peculiar property – Oracle usually does not include them in the release notes. Probably they are considered too minor… Anyway, this page will try to help you a little: hopefully, you will not miss updates like the “famous” String.substring change in Java 7u6.

This page will be updated after new Java 7 updates are released. I will also add a list of changes made before Java 7u45 (give me some time!).

Java 7u45 (compared to Java 7u25)

Most of these changes were made in Java 7u40, but I have unfortunately missed that release.

Shared empty table in ArrayList and HashMap update

File changed: \util\ArrayList.java
File changed: \util\HashMap.java

The two most popular Java collections – ArrayList and HashMap – now consume less memory if their instances are empty (size = 0). The only reason for this update, from my point of view, is a critical mass of cases in Oracle performance benchmarks where a map/list is created, but never populated due to conditional logic (for example, an empty parameter list was provided in the HTTP request).

This update reduces the garbage collector workload by producing less garbage.

In the case of ArrayList, only objects created via the no-argument constructor are affected. If you use a constructor specifying the initial size, nothing changes for you.

/**
 * Shared empty array instance used for empty instances.
 */
private static final Object[] EMPTY_ELEMENTDATA = {};

public ArrayList() {
    super();
    this.elementData = EMPTY_ELEMENTDATA;
}

HashMap got a more thorough update – internal table initialization has been moved out of the constructor: a map now always starts with a shared empty table. As a consequence, all getters and setters in the HashMap now check whether the map is empty. This makes getters slightly faster in the case of an empty map – you no longer have to look at the actual table – but at the same time it makes all setters/getters slower in every other case (a very tiny slowdown – a zero check of a single int field – but still an extra instruction to execute).

I hope that this HashMap change has a solid justification – that there are enough always-empty maps in the Oracle benchmark applications. In my personal opinion this change is quite questionable – I do not like “taxes” you always have to pay just for a corner case optimization.

/**
 * An empty table instance to share when the table is not inflated.
 */
static final Entry<?,?>[] EMPTY_TABLE = {};

/**
 * The table, resized as necessary. Length MUST Always be a power of two.
 */
transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

An author of these changes has responded to me here.

Continue reading

Book review: Systems Performance: Enterprise and the Cloud

All you need to know about Linux performance monitoring and tuning.

If you are visiting this blog, you are likely interested in Java performance tuning. You want to know why ArrayList is better than LinkedList, why HashMap will usually outperform TreeMap (though we should still use the latter if we want a sorted map), or why date parsing can be so painfully slow.

Nevertheless, at some point in your career you will reach a situation where you have to consider your application environment – server hardware, other applications running on your server, and other servers running in your network (as well as many other things).

You may, for example, want to know why disk operations were so quick on your development box but became a major issue on the production box. There could be various reasons:

  • Trying to acquire a file lock on NFS too often
  • Another process is using the same disk – legitimately or due to a misconfiguration
  • The operating system is using the same disk for paging
  • Your development box has an SSD installed, but a production box relies on the “ancient” 🙂 HDD technology
  • Or lots of other reasons

Or you may be on the other side of the spectrum, trying to squeeze the last cycles out of a critical code path. In this situation you may want to know which levels of the memory hierarchy your code is accessing (L1-L3 CPU caches, RAM, disks). Java does not provide such information, so you have to use OS monitoring tools to obtain it. This will allow you to modify your algorithm and tune your dataset size so it fits into the appropriate level of the memory hierarchy.

Or perhaps you are on the cutting edge of progress and want to deploy your brand new application in the cloud. The biggest issue with clouds is that you have to pay for everything – excessive CPU usage (as well as non-excessive 🙂 ), suboptimal IO, and high memory consumption (usually via a requirement to pay for larger and more expensive instances). Besides that, your application might be affected by other tenants of the same physical box – for example, an HDD is a non-interruptible device, so one tenant can mount a temporary denial-of-service “attack” on the other tenants while staying within its own quota. What tools and strategies would you use for performance monitoring/tuning in the cloud?

“Systems Performance: Enterprise and the Cloud” by Brendan Gregg is the best reference book I have seen on Linux and Solaris performance monitoring. It is written for system administrators, so it is not bound to any programming language. The book starts with a description of methodologies which can be used for troubleshooting performance issues. The introductory chapters are followed by chapters dedicated to the following areas:

  • CPU
  • Memory
  • File systems
  • Disks
  • Network
  • Cloud computing

Each of these chapters starts with an overview of the given hardware component, followed by a description of applicable performance tuning methodologies.

The last chapter of this book describes a real world performance investigation (in my opinion, you should start reading this book from this chapter 🙂 ).

I would recommend ordering a paper copy of this book, because it should serve as a handy reference for complex performance issue investigations.

Static code compilation in Groovy 2.0

by Mikhail Vorontsov

A tiny introduction to Groovy

Groovy is a JVM-based language useful for scripting and other non-CPU-critical applications (like the Grails web framework). It is a dynamically typed language which allows you to add methods and properties to existing objects at runtime.

The ability to add methods and properties at runtime is implemented via the methodMissing and propertyMissing methods/handlers as well as dynamic method registration. This feature allows you to support your own DSLs by parsing non-existent method names at runtime and registering the actual method bodies corresponding to such methods/properties. It allows you, for example, to generate a database access method like List<Person> getPersonsByN( N n ), where N is any field defined in the Persons database table.

Such functionality made Groovy popular in web development due to the ability to generate repetitive data access methods at runtime in the frameworks. Unfortunately (or luckily 🙂 ), Groovy method calls use the dynamic dispatch model – the Groovy runtime chooses the best matching method signature based on the runtime argument types instead of the compile-time argument types, as Java does. Dynamic dispatch requires each Groovy method call to go through the Groovy runtime method lookup code, which is based on reflection. So, are method calls in Groovy extremely slow? The answer is no – Groovy does a very good job of caching call sites and not making another reflection lookup if possible.

Groovy static compilation

One of the main features of Groovy 2.0 was the static compilation mode. It is turned on by annotating methods or the whole class with the @CompileStatic annotation (see the sketch after the following list). This annotation actually turns on 2 features:

  1. Static type checking
  2. Static compilation
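
A minimal Groovy sketch of the annotation in use (the Calculator class is a made-up example): methods inside the annotated class are type checked at compile time and compiled to direct method calls, bypassing dynamic dispatch:

import groovy.transform.CompileStatic

@CompileStatic
class Calculator
{
    //resolved at compile time and compiled as a direct method call,
    //bypassing the Groovy runtime method lookup
    int add( int a, int b )
    {
        return a + b
    }
}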

Continue reading

String.intern in Java 7 and 8 – part 3

by Mikhail Vorontsov

I want to return to the String.intern method I discussed earlier (part 1, part 2). During the last couple of months I have used interning heavily in my pet project in order to see the pros and cons of using String.intern for every non-temporary string in my application (non-temporary – one which is expected to live longer than a few seconds and most likely get into the GC old generation).

As I have written before, the advantages of String.intern in Java 7 and 8 are:

  • Rather fast execution, nearly no performance loss in the multithreaded mode (still using the global string pool).
  • Saving memory, thus allowing your dataset to shrink, which will let your application run faster (in general).

The main disadvantages of this method (as I have noticed before) are:

  • The requirement to set the -XX:StringTableSize=N JVM parameter in advance and to deal with its fixed value (a JVM restart is needed in order to expand the JVM string pool).
  • Adding calls to String.intern in a lot of places globally (probably via your own wrapper) – which has a linear code maintenance cost.

After a couple of months of using String.intern in my project, I think that its usage should be limited to fields with a limited domain of distinct values (like person first names or state/province names). We should not try to intern anything with a low probability of repeated values – it would be a waste of CPU time.
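
A minimal sketch of such a wrapper (the helper name is made up for illustration) – route only limited-domain fields through it and leave high-cardinality strings alone:

public final class InternHelper
{
    private InternHelper() {}

    /**
     * Use only for fields with a small domain of distinct values
     * (requires -XX:StringTableSize=N sized for that domain).
     */
    public static String internLimited( final String s )
    {
        return s == null ? null : s.intern();
    }
}

Usage would then look like person.setFirstName( InternHelper.internLimited( rawName ) ).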

Continue reading

A possible memory leak in the manual MultiMap implementation

by Mikhail Vorontsov

Pure Java 6 and 7

In this short article I will describe a junior level memory leak I have recently seen in a pure JDK application (no 3rd party libs were allowed).

Suppose you have a map from String identifiers to some collections – for example, a set of String properties for each identifier: Map<String, Set<String>>. The actual type of the inner collection does not matter for this article – it should just be a Collection. Such structures are generally called multimaps.

The following method was initially written to obtain the inner set by its identifier:

private final Map<String, Set<String>> m_ids = new HashMap<String, Set<String>>( 4 );

private Set<String> getSetByNameFaulty( final String id )
{
    Set<String> res = m_ids.get( id );
    if ( res == null )
    {
        res = new HashSet<String>( 1 );
        m_ids.put( id, res );
    }
    return res;
}

This method checks whether the identifier is already present in the map and either returns the corresponding Set or allocates a new one and adds it to the map. This method is useful for populating our map:

private void populateJava67()
{
    getSetByNameFaulty( "id1" ).add( "property1" );
    getSetByNameFaulty( "id1" ).add( "property2" );
    getSetByNameFaulty( "id1" ).add( "property3" );
    getSetByNameFaulty( "id2" ).add( "property1" );
}

The next step while writing a program would be to add some accessors to our map, like the following one:

private boolean hasPropertyFaulty( final String id, final String property )
{
    return getSetByNameFaulty( id ).contains( property );
}

This method looks good and is unlikely to be flagged by any code quality tools. Unfortunately, it has a major flaw: if you query the properties of an unknown identifier, a new empty set will be allocated in our map inside the getSetByNameFaulty method. This is definitely an unwanted side effect. Instead we should let our new getSetByName method know whether we intend to write to the returned set:

private Set<String> getSetByName( final String id, final boolean isWrite )
{
    Set<String> res = m_ids.get( id );
    if ( res == null )
    {
        if ( isWrite )
        {
            res = new HashSet<String>( 1 );
            m_ids.put( id, res );
        }
        else
            res = Collections.emptySet();
    }
    return res;
}

private boolean hasProperty( final String id, final String property )
{
    return getSetByName( id, false ).contains( property );
}

private void populateJava67Better()
{
    getSetByName( "id1", true ).add( "property1" );
    getSetByName( "id1", true ).add( "property2" );
    getSetByName( "id1", true ).add( "property3" );
    getSetByName( "id2", true ).add( "property1" );
}

Continue reading

How to iterate zip file records (java.util.zip.ZipFile, java.util.zip.ZipInputStream)

by Mikhail Vorontsov

The right way to iterate a zip file

import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

final ZipFile file = new ZipFile( FILE_NAME );
try
{
    final Enumeration<? extends ZipEntry> entries = file.entries();
    while ( entries.hasMoreElements() )
    {
        final ZipEntry entry = entries.nextElement();
        System.out.println( entry.getName() );
        //use entry input stream:
        readInputStream( file.getInputStream( entry ) );
    }
}
finally
{
    file.close();
}

private static int readInputStream( final InputStream is ) throws IOException {
    final byte[] buf = new byte[ 8192 ];
    int read = 0;
    int cntRead;
    while ( ( cntRead = is.read( buf, 0, buf.length ) ) >= 0 )
    {
        read += cntRead;
    }
    return read;
}
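
The ZipInputStream alternative mentioned in the title works on any stream, not only a file on disk; here is a minimal sketch of it (reusing the readInputStream helper above; imports: java.io.FileInputStream, java.util.zip.ZipInputStream):

final ZipInputStream zis = new ZipInputStream( new FileInputStream( FILE_NAME ) );
try
{
    ZipEntry entry;
    while ( ( entry = zis.getNextEntry() ) != null )
    {
        System.out.println( entry.getName() );
        //the stream is positioned at the start of the current entry data:
        readInputStream( zis );
        zis.closeEntry();
    }
}
finally
{
    zis.close();
}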

Continue reading

Primitive types to String conversion and String concatenation

by Mikhail Vorontsov

Primitive types to String conversion

From time to time you may need to create a string in your program from several values, some of which may be of primitive types. If you have two or more primitive type values at the beginning of your string concatenation, you need to explicitly convert the first of them to a string (otherwise System.out.println( 1 + 'a' ) will print ’98’ rather than ‘1a’). Of course, there is a family of String.valueOf methods (or the corresponding wrapper type methods), but who needs them if there is another way which requires less typing?

Concatenating an empty string literal with the first of your primitive type values (in our example, "" + 1) is the easiest idea. The result of this expression is a String, and after that you can safely concatenate any primitive type values to it – the compiler will take care of all implicit conversions to String.

Unfortunately, this is the worst way one can imagine. In order to understand why, we need to review how the string concatenation operator is translated in Java. If we have a String value (it doesn’t matter which sort – a literal, a variable or a method call) followed by a + operator followed by an expression of any type:

string_exp + any_exp

the Java compiler will translate it to:

new StringBuilder().append( string_exp ).append( any_exp ).toString()

If you have more than one + operator in the expression, you will end up with several StringBuilder.append calls before the final toString call.
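
For example, the “easy” variant from above expands as follows (a sketch of the compiler output; the exact shape may vary between javac versions). Note that the empty literal costs an extra append call, which an explicit String.valueOf avoids:

//source:
String s = "" + 1 + 'a' + 2.5;
//is compiled roughly as:
String s2 = new StringBuilder().append( "" ).append( 1 )
        .append( 'a' ).append( 2.5 ).toString();
//a cheaper equivalent without the useless empty-string append:
String s3 = String.valueOf( 1 ) + 'a' + 2.5;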

Continue reading

Java varargs performance issues

by Mikhail Vorontsov

One of the new features of Java 5 was variable length argument lists. They can be useful if you have to provide several arguments of the same type to a method. For example, prior to Java 5, a method which prints all its arguments to a console would have looked like:

public static void printAll( final Object[] args )
{
    for ( int i = 0; i < args.length; ++i )
        System.out.println( args[ i ] );
}

And its call would look like:

printAll( new Object[] { new Integer( 75 ), new Date(), "String" + 50 } );

In Java 5 varargs support was added to the language. The same method now looks much simpler:

public static void printAllNew( final Object... args )
{
    for ( final Object arg : args )
        System.out.println( arg );
}

Its call also became shorter (due to the added variable length argument list support):

printAllNew( 75, new Date(), "String" + 50 );
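
Note that the shorter call is pure syntactic sugar – the compiler still allocates a hidden Object[] and autoboxes the primitive argument, which is exactly where the varargs performance cost comes from. The call above is compiled roughly as:

printAllNew( new Object[] { Integer.valueOf( 75 ), new Date(), "String" + 50 } );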

Continue reading

Single file vs multi file storage

by Mikhail Vorontsov

Introduction

This article will discuss how badly your application may be impacted by a data storage containing a huge number of tiny files. While it is obvious that it is better to keep the same amount of data in a single file rather than in a million tiny files, we will see what modern operating systems offer to alleviate this problem.

This article will describe a data layout problem I recently faced. Suppose we have a database containing a few million records with an average size of 10 KB, but no hard upper limit. Each of these records has a unique long id. Besides that, there is some other data, but its size is unlikely to exceed ~5% of the space occupied by the records.

For some reason a decision was made to use ordinary files as the data storage. The original implementation used one file per record. Files were separated into directories according to auxiliary attributes. All this data was read into memory on startup. There are also other analytical batch tools which read the whole database into memory before processing.

The system required a very long time to start up due to the necessity of reading all these tiny files from disk. My task was to solve this bottleneck.

Continue reading