Large HashMap overview: JDK, FastUtil, Goldman Sachs, HPPC, Koloboke, Trove – January 2015 version

by Mikhail Vorontsov

This is a major update of the previous version of this article. The reasons for this update are:

  • The major performance updates in fastutil 6.6.0
  • Updates in the “get” test from the original article, addition of “put/update” and “put/remove” tests
  • Adding identity maps to all tests
  • Now using different objects for all operations after map population (in the case of Object keys – except identity maps). The old approach of reusing the same keys gave an unfair advantage to Koloboke.

I would like to thank Sebastiano Vigna for providing the initial versions of “get” and “put” tests.

Introduction

This article will give you an overview of hash map implementations in 5 well-known libraries, with JDK HashMap as a baseline. We will test separately:

  • Primitive to primitive maps
  • Primitive to object maps
  • Object to primitive maps
  • Object to Object maps
  • Object (identity) to Object maps

This article will provide you with the results of 3 tests:

  • “Get” test: Populate a map with a pregenerated set of keys (in the JMH setup), then make ~50% successful and ~50% unsuccessful “get” calls. For non-identity maps with object keys we use a distinct set of keys (a different object with the same value is used for successful “get” calls).
  • “Put/update” test: Add a pregenerated set of keys to the map. In a second loop, add an equal set of keys (different objects with the same values) to the map again (making updates). Identical keys are used for identity maps and for maps with primitive keys.
  • “Put/remove” test: In a loop: add 2 entries to a map, then remove 1 of the existing entries (the “add” pointer is increased by 2 on each iteration, the “remove” pointer – by 1).

This article will just give you the test results. There will be a followup article on the most interesting implementation details of the various hash maps.

Test participants

JDK 8

JDK HashMap is the oldest hash map implementation in this test. It got a couple of major updates recently – shared underlying storage for empty maps in Java 7u40 and the ability to convert underlying hash bucket linked lists into tree maps (for better worst case performance) in Java 8.

FastUtil 6.6.0

FastUtil provides a developer with all 4 combinations of primitive and object keys/values listed above. Besides that, there are several other types of maps available for each parameter type combination: array map, AVL tree map and RB tree map. Nevertheless, we are only interested in hash maps in this article.

Goldman Sachs Collections 5.1.0

Goldman Sachs open sourced its collections library about 3 years ago. In my opinion, this library provides the widest range of collections out of the box (if you need them). You should definitely pay attention to it if you need more than a hash map, tree map and a list for your work :) For the purposes of this article, GS collections provides normal, synchronized and unmodifiable versions of each hash map. The last 2 are just facades for the normal map, so they don’t provide any performance advantages.

HPPC 0.6.1

HPPC provides array lists, array deques, hash sets and hash maps for all primitive types. HPPC provides normal hash maps for primitive keys and both normal and identity hash maps for object keys.

Koloboke 0.6.5

Koloboke is the youngest of all the libraries in this article. It is developed as a part of the OpenHFT project by Roman Leventov. This library currently provides hash maps and hash sets for all primitive/object combinations. The library was recently renamed from HFTC, so some artifacts in my tests still use the old library name.

Trove 3.0.3

Trove has been available for a long time and is quite stable. Unfortunately, not much development is happening in this project at the moment. Trove provides list, stack, queue, hash set and hash map implementations for all primitive/object combinations. I have already written about Trove.

Data storage implementations and tests

This article will look at 5 different sorts of maps:

  1. int-int
  2. int-Integer
  3. Integer-int
  4. Integer-Integer
  5. Integer (identity map)-Integer

We will use JMH 1.0 for testing. Here is the test description: for each map size in (10K, 100K, 1M, 10M, 100M) (outer loop) generate a set of random keys (they will be used for each test at a given map size) and then run a test for each map implementation (inner loop). Each test will be run 100M / map_size times. “get”, “put” and “remove” tests are run separately, so you can update the test source code and run only a few of them.

Note that each test suite takes around 7-8 hours on my box. Spreadsheet-friendly results will be printed to stdout once all test suites finish.

int-int

Each section will start with a table showing how data is stored inside each map. Only arrays will be shown here (some maps have special fields for a few corner cases).

tests.maptests.primitive.FastUtilMapTest     int[] key, int[] value
tests.maptests.primitive.GsMutableMapTest    int[] keys, int[] values
tests.maptests.primitive.HftcMutableMapTest  long[] (key - low bits, value - high bits)
tests.maptests.primitive.HppcMapTest         int[] keys, int[] values, boolean[] allocated
tests.maptests.primitive.TroveMapTest        int[] _set, int[] _values, byte[] _states

As you can see, Koloboke uses a single array, FastUtil and GS use 2 arrays, and HPPC and Trove use 3 arrays to store the same data. Let’s see what the actual performance looks like.
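
To illustrate why the single array layout can pay off, here is a hedged sketch (not Koloboke’s actual code – the real bit layout may well differ) of packing an int key and an int value into one long, so that a single memory access fetches both:

public class IntIntPacking {
    // Pack the key into the low 32 bits and the value into the high 32 bits of one long
    static long pack( final int key, final int value ) {
        return ( key & 0xFFFFFFFFL ) | ( (long) value << 32 );
    }

    static int key( final long packed )   { return (int) packed; }            // low 32 bits
    static int value( final long packed ) { return (int) ( packed >>> 32 ); } // high 32 bits

    public static void main( final String[] args ) {
        final long cell = pack( -42, 100500 );
        System.out.println( key( cell ) + " -> " + value( cell ) ); // prints: -42 -> 100500
    }
}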

“Get” test results

All “get” tests make around 50% of unsuccessful get calls in order to test both success and failure paths in each map.

Each test results section will contain a results graph. The X axis shows the map size, the Y axis – the time to run a test in milliseconds. Note that each test in a graph has a fixed number of map method calls: 100M get calls for the “get” test; 200M put calls for the “put” test; 100M put and 50M remove calls for the “remove” test.

Links to OpenOffice spreadsheets with all test results are provided at the end of this article.

int-int ‘get’ test results

GS and FastUtil test results lines are nearly parallel, but FastUtil is faster due to a lower constant factor. Koloboke becomes fastest only on large enough maps. Trove is slower than other implementations at each map size.

“Put” test results

“Put” tests insert all keys into a map and then use another equal set of keys to insert entries into the map again (these method calls update the existing entries). We make 100M put calls with “insert” functionality and 100M put calls with “update” functionality in each test.

int-int ‘put’ test results

This test shows the implementation differences more clearly: Koloboke is the fastest from the start (though FastUtil is as fast on small maps); GS and FastUtil are parallel again (but GS is always slower). HPPC and Trove are the slowest.

“Remove” test results

In the “remove” test we interleave 2 put operations with 1 remove operation, so that the map size grows by 1 after each group of put/remove calls. In total we make 100M put and 50M remove calls.
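
Purely as an illustration (hypothetical code using java.util.HashMap – the real benchmarks use pregenerated keys and the tested map implementations), the interleaving looks roughly like this:

import java.util.HashMap;
import java.util.Map;

public class PutRemoveSketch {
    public static void main( final String[] args ) {
        final int[] keys = new int[ 1_000_000 ];
        for ( int i = 0; i < keys.length; ++i )
            keys[ i ] = i * 31;

        final Map<Integer, Integer> map = new HashMap<>();
        int add = 0, remove = 0;
        while ( add + 1 < keys.length ) {
            map.put( keys[ add ], add );          // 2 puts per iteration
            map.put( keys[ add + 1 ], add + 1 );
            map.remove( keys[ remove ] );         // 1 remove per iteration (always an existing key)
            add += 2;
            remove += 1;
        }
        System.out.println( map.size() );         // the map grows by 1 per iteration
    }
}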

int-int ‘remove’ test results

The results are similar to the “put” test (of course, both tests make a majority of put calls!): Koloboke quickly becomes the fastest implementation; FastUtil is a bit faster than GS on all map sizes; HPPC and Trove are the slowest, but HPPC performs reasonably well on map sizes up to 1M entries.

int-int summary

The underlying storage implementation is the most important factor defining hash map performance: the fewer memory accesses an implementation makes to reach an entry (especially for large maps, which do not fit into CPU cache), the faster it will be. As you can see, the single-array Koloboke is faster than the other implementations in most tests on large map sizes. For smaller map sizes, the CPU cache starts hiding the cost of accessing several arrays – in this case other implementations may be faster due to fewer CPU instructions required per method call: FastUtil is the second best implementation for primitive collection tests due to its highly optimized code.

Continue reading

Going over Xmx32G heap boundary means you will have less memory available

by Mikhail Vorontsov

This small article will remind you what happens to the Oracle JVM once your heap setting goes over 32G. By default, all references in the JVM occupy 4 bytes on heaps under 32G. This decision is made by the JVM at start-up time. You can force 8 byte references on small heaps with the -XX:-UseCompressedOops JVM option (it does not make any sense for production systems!).

Once your heap exceeds 32G, you are in 64 bit land, so your object references will now use 8 bytes instead of 4. As Scott Oaks mentioned in his Java Performance: The Definitive Guide book (pages 234-236, read my review of this book here), an average Java program uses about 20% of the heap for object references. It means that by setting anything between Xmx32G and Xmx37G – Xmx38G you will actually reduce the amount of heap available for your application (the actual numbers, of course, depend on your application). This may become a big surprise for anyone thinking that adding extra memory will let his/her application process more data :)
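
Here is the back-of-the-envelope arithmetic behind those numbers (assuming the 20% figure; your application will differ):

public class HeapBreakEven {
    public static void main( final String[] args ) {
        final double heapGb = 32;        // the largest heap still usable with 4 byte references
        final double refFraction = 0.2;  // assumed share of the heap occupied by references
        final double dataGb = heapGb * ( 1 - refFraction );  // 25.6G of non-reference data
        final double refGb  = heapGb * refFraction;          // 6.4G of 4 byte references
        // With 8 byte references the same objects need twice as much reference space:
        final double breakEvenGb = dataGb + 2 * refGb;       // 25.6 + 12.8 = 38.4G
        System.out.println( "Break-even heap size: " + breakEvenGb + "G" );
    }
}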

Continue reading

Performance of various general compression algorithms – some of them are unbelievably fast!

by Mikhail Vorontsov

07 Jan 2015 update: extending LZ4 description (thanks to Mikael Grev for a hint!)

This article will give you an overview of the performance of several general purpose compression algorithm implementations. As it turned out, some of them can be used even when your CPU requirements are pretty strict.

In this article we will compare:

  • JDK GZIP – a slow algorithm with good compression, which could be used for long term data compression. Implemented in JDK java.util.zip.GZIPInputStream / GZIPOutputStream.
  • JDK deflate – another algorithm available in JDK (it is used for zip files). Unlike GZIP, you can set the compression level for this algorithm, which allows you to trade compression time for the output file size. Available levels range from 0 (store, no compression) and 1 (fastest compression) to 9 (slowest compression) – a minimal sketch of setting the level follows this list. Implemented as java.util.zip.DeflaterOutputStream / InflaterInputStream.
  • Java implementation of the LZ4 compression algorithm – this is the fastest algorithm in this article, with a compression ratio a bit worse than the fastest deflate level. I advise you to read the wikipedia article about this algorithm to understand its usage. It is distributed under a friendly Apache license 2.0.
  • Snappy – a popular compressor developed at Google, which aims to be fast and provide relatively good compression. I have tested this implementation. It is also distributed under the Apache license 2.0.
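
Here is the promised minimal sketch of choosing a deflate compression level (a toy example on highly compressible data, not a benchmark):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

public class DeflateLevelExample {
    public static void main( final String[] args ) throws IOException {
        final byte[] data = new byte[ 100_000 ];  // highly compressible: all zeroes

        // Deflater.BEST_SPEED = 1, Deflater.BEST_COMPRESSION = 9
        final Deflater deflater = new Deflater( Deflater.BEST_SPEED );
        final ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try ( DeflaterOutputStream out = new DeflaterOutputStream( bos, deflater ) ) {
            out.write( data );
        }
        deflater.end();
        System.out.println( "Compressed size: " + bos.size() + " bytes" );
    }
}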

Continue reading

Introduction to JMH

by Mikhail Vorontsov

11 Sep 2014: Article was updated for JMH 1.0.

10 May 2014: Original version.

Introduction

This article will give you an overview of basic rules and abilities of JMH. The second article will give you an overview of JMH profilers.

JMH is a new microbenchmarking framework (first released late-2013). Its distinctive advantage over other frameworks is that it is developed by the same guys in Oracle who implement the JIT. In particular I want to mention Aleksey Shipilev and his brilliant blog. JMH is likely to be in sync with the latest Oracle JRE changes, which makes its results very reliable.

You can find JMH examples here.

JMH has only 2 requirements (everything else is a recommendation):

  • You need to create a maven project using a command from the JMH official web page
  • You need to annotate test methods with @Benchmark annotation

In some cases it is not convenient to create a new project just for performance testing purposes. In this situation you can rather easily add JMH to an existing project. You need to take the following steps:

  1. Ensure your project directory structure is recognizable by Maven (your benchmarks are at src/main/java at least)
  2. Copy 2 JMH maven dependencies and maven-shade-plugin from the JMH archetype. No other plugins mentioned in the archetype are required at the moment of writing (JMH 1.0).

How to run

Run the following maven command to create a template JMH project from an archetype (it may change over time, so check for the latest version near the start of the official JMH page):

$ mvn archetype:generate \
          -DinteractiveMode=false \
          -DarchetypeGroupId=org.openjdk.jmh \
          -DarchetypeArtifactId=jmh-java-benchmark-archetype \
          -DgroupId=org.sample \
          -DartifactId=test \
          -Dversion=1.0

Alternatively, copy 2 JMH dependencies and maven-shade-plugin from the JMH archetype (as described above).

Create one (or a few) java files. Annotate some methods in them with @Benchmark annotation – these would be your performance benchmarks.
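
A minimal benchmark class may look like this (the class, field and method names here are just placeholders; the annotation values are simply reasonable starting points):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State( Scope.Thread )
@BenchmarkMode( Mode.AverageTime )
@OutputTimeUnit( TimeUnit.NANOSECONDS )
@Fork( 1 )
@Warmup( iterations = 5 )
@Measurement( iterations = 5 )
public class MyFirstBenchmark {
    private long[] m_data;

    @Setup
    public void setup() {
        m_data = new long[ 1000 ];
        for ( int i = 0; i < m_data.length; ++i )
            m_data[ i ] = i;
    }

    @Benchmark
    public long sumAll() {
        long res = 0;
        for ( final long v : m_data )
            res += v;
        return res; // return the result so the JIT can not eliminate the loop
    }
}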

You have at least 2 simple options to run your tests:

Follow the procedure from the official JMH page:

$ cd your_project_directory/
$ mvn clean install
$ java -jar target/benchmarks.jar

The last command should be entered verbatim – regardless of your project settings you will end up with a target/benchmarks.jar sufficient to run all your tests. This option has a slight disadvantage – it will use the default JMH settings for all settings not provided via annotations (@Fork, @Warmup and @Measurement annotations become nearly mandatory in this mode). Use the java -jar target/benchmarks.jar -h command to see all available command line options (there are plenty).

Or use the old way: add a main method to one of your classes and write a JMH start script inside it. Here is an example:

Options opt = new OptionsBuilder()
                .include(".*" + YourClass.class.getSimpleName() + ".*")
                .forks(1)
                .build();
new Runner(opt).run();

After that you can run it with target/benchmarks.jar as your classpath:

$ cd your_project_directory/
$ mvn clean install
$ java -cp target/benchmarks.jar your.test.ClassName

Now, after this extensive “how to run it” manual, let’s look at the framework itself.

Continue reading

String deduplication feature (from Java 8 update 20)

by Mikhail Vorontsov

This article will provide you with a short overview of the string deduplication feature added in Java 8 update 20.

String objects consume a large amount of memory in an average application. Some of these strings may be duplicated – there exist several distinct instances of the same String (a != b, but a.equals(b)). In practice, a lot of Strings could be duplicated due to various reasons.

Originally, JDK offered the String.intern() method to deal with string duplication. The disadvantage of this method is that you have to find out which strings should be interned. This generally requires a heap analysis tool with a duplicate string lookup ability, like YourKit profiler. Nevertheless, if used properly, string interning is a powerful memory saving tool – it allows you to reuse whole String objects (each of which adds 24 bytes of overhead on top of the underlying char[]).
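
A tiny example of both the duplication itself and intern() in action:

public class InternExample {
    public static void main( final String[] args ) {
        final String a = new String( "duplicate" );  // a distinct instance on the heap
        final String b = "duplicate";                // the canonical (interned) instance

        System.out.println( a == b );          // false – two distinct objects
        System.out.println( a.equals( b ) );   // true  – same contents
        System.out.println( a.intern() == b ); // true  – intern() returns the canonical instance
    }
}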

Starting from Java 7 update 6, each String object has its own private underlying char[]. This allows the JVM to make an automatic optimization – if an underlying char[] is never exposed to the client, then the JVM can find 2 strings with the same contents and replace the underlying char[] of one string with the underlying char[] of the other.

That’s done by the string deduplication feature added in Java 8 update 20. Here is how it works:

Continue reading

Trove library: using primitive collections for performance

by Mikhail Vorontsov


19 July 2014: article text cleanup, added a chapter on JDK to Trove migration.
16 July 2012: original version.

This article describes the Trove library, which contains a set of primitive collection implementations. The latest version of Trove (3.1a1 at the time of writing) is described here.

Why should you use Trove? Why not keep using the well known JDK collections? The answer is performance and memory consumption. Trove doesn’t use any java.lang.Number subclasses internally, so you don’t have to pay for boxing/unboxing each time you want to pass/query a primitive value to/from a collection. Besides, you don’t have to waste memory on the boxed numbers (24 bytes for Long/Double, 16 bytes for smaller types) and references to them. For example, if you want to store an Integer in a JDK map, you need 4 bytes for a reference (or 8 bytes on huge heaps) and 16 bytes for an Integer instance. Trove, on the other hand, uses just 4 bytes to store an int. Trove also doesn’t create a Map.Entry for each key-value pair, unlike java.util.HashMap, further reducing the map memory footprint. For sets, it doesn’t use a Map internally, just keeping the set values.
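
A quick taste of the primitive API (a toy example, assuming Trove 3.x package names), which never touches Integer objects:

import gnu.trove.map.hash.TIntIntHashMap;

public class TroveTaste {
    public static void main( final String[] args ) {
        final TIntIntHashMap map = new TIntIntHashMap();
        for ( int i = 0; i < 1000; ++i )
            map.put( i, i * i );                       // no boxing on put

        System.out.println( map.get( 20 ) );           // no boxing on get – prints 400
        System.out.println( map.containsKey( 5000 ) ); // false
    }
}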

There are 3 main collection types in Trove: array lists, sets and maps. There are also queues, stacks and linked lists, but they are not so important (and usually instances of these collections tend to be rather small).

Array lists

All array lists are built on top of an array of the corresponding data type (for example, int[] for TIntArrayList). There is a small problem which you should deal with: a value for absent elements (the default value). It is zero by default, but you can override it using:

public TIntArrayList(int capacity, int no_entry_value);
public static TIntArrayList wrap(int[] values, int no_entry_value);

There are 2 useful methods called getQuick/setQuick – they just access the underlying array without any additional checks. As a side effect they allow you to access elements between the list size and its capacity (don’t use this too much – it is still better to add values legally and then use getQuick/setQuick as long as you stay inside the array list boundaries).
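
For example (staying within the list boundaries, as advised above; a toy example, not benchmark code):

import gnu.trove.list.array.TIntArrayList;

public class QuickAccessExample {
    public static void main( final String[] args ) {
        final TIntArrayList list = new TIntArrayList( 100, -1 ); // capacity 100, no_entry_value -1
        for ( int i = 0; i < 10; ++i )
            list.add( i * i );

        // getQuick/setQuick skip the bounds checks performed by get/set
        final int v = list.getQuick( 3 );    // 9
        list.setQuick( 3, v + 1 );           // index 3 now contains 10
        System.out.println( list.get( 3 ) ); // prints 10
    }
}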

Each Trove array list has several helper methods which implement the java.util.Collections functionality:

public void reverse()
public void reverse(int from, int to)
public void shuffle(java.util.Random rand)
public void sort()
public void sort(int fromIndex, int toIndex)
public void fill(int val)
public void fill(int fromIndex, int toIndex, int val)
public int binarySearch(int value)
public int binarySearch(int value, int fromIndex, int toIndex)
public int max()
public int min()
public int sum()

Continue reading

Creating an exception in Java is very slow

by Mikhail Vorontsov


29 June 2014 update – now using JMH for testing. Added 2 ways to avoid the cost of exceptions. Made some editorial changes to the original text to reflect JMH usage.

Filling in the stack trace is slow…

Creating an exception in Java is a very slow operation. Expect that throwing an exception will cost you around 1-5 microseconds. Nearly all this time is spent on filling in the exception stack trace. The deeper the stack trace is, the more time it will take to populate it.
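
One common way to avoid most of this cost (the full article describes the two ways it actually measured, which may differ from this sketch) is an exception type which does not fill in its stack trace at all – a minimal example using the Java 7+ Throwable constructor:

// A lightweight exception that skips the expensive fillInStackTrace() call.
public class CheapException extends RuntimeException {
    public CheapException( final String message ) {
        // The last argument (writableStackTrace = false) disables stack trace filling – Java 7+
        super( message, null, false, false );
    }
}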

Usually we throw an exception in case of unexpected problems. This means that we don’t expect exceptions to be thrown at a rate of thousands per second per process. But sometimes you will encounter a method which uses exceptions for much more likely events. We have already seen a good (actually bad) example of such behavior in the Base64 encoding and decoding performance article: sun.misc.BASE64Decoder is extremely slow due to using an exception for “give me more data” requests:

at java.lang.Throwable.fillInStackTrace(Throwable.java:-1)
at java.lang.Throwable.fillInStackTrace(Throwable.java:782)
- locked <0x6c> (a sun.misc.CEStreamExhausted)
at java.lang.Throwable.<init>(Throwable.java:250)
at java.lang.Exception.<init>(Exception.java:54)
at java.io.IOException.<init>(IOException.java:47)
at sun.misc.CEStreamExhausted.<init>(CEStreamExhausted.java:30)
at sun.misc.BASE64Decoder.decodeAtom(BASE64Decoder.java:117)
at sun.misc.CharacterDecoder.decodeBuffer(CharacterDecoder.java:163)
at sun.misc.CharacterDecoder.decodeBuffer(CharacterDecoder.java:194)

You may encounter the same problem if you run the pack method from String packing part 2: converting Strings to any other objects on a string starting with a digit but followed by letters. Let’s see how long it takes to pack ‘12345’ and ‘12345a’ with that method:

Benchmark                        (m_param)   Mode   Samples         Mean   Mean error    Units
t.StringPacking2Tests.testPack      12345a  thrpt        10        0.044        0.000   ops/us
t.StringPacking2Tests.testPack       12345  thrpt        10        7.934        0.154   ops/us

As you can see, we were able to convert “12345” from String about 180 times faster than “12345a”. Most of the processing time is again spent filling in stack traces:

at java.lang.Throwable.fillInStackTrace(Throwable.java:-1)
at java.lang.Throwable.fillInStackTrace(Throwable.java:782)
- locked <0x87> (a java.lang.NumberFormatException)
at java.lang.Throwable.<init>(Throwable.java:265)
at java.lang.Exception.<init>(Exception.java:66)
at java.lang.RuntimeException.<init>(RuntimeException.java:62)
at java.lang.IllegalArgumentException.<init>(IllegalArgumentException.java:53)
at java.lang.NumberFormatException.<init>(NumberFormatException.java:55)
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.valueOf(Long.java:540)
at tests.StringPacking2Tests.pack(StringPacking2Tests.java:69)
...

Continue reading

String switch performance

by Mikhail Vorontsov

Suppose we have a large number of commands. For the simplicity of this article, they will all be implemented as methods of one class. We should be able to call any of these commands by a string name. We will allow case-insensitive calls. Our “class with commands” will look like:

public class ObjectWithCommands {
    public Object Command1( final Object arg ) { return arg; }
    public Object Command2( final Object arg ) { return arg; }
    ...
    public Object Command9( final Object arg ) { return arg; }
    public Object Command10( final Object arg ) { return arg; }
    ...
    public Object Command99( final Object arg ) { return arg; }
    public Object Command100( final Object arg ) { return arg; }
}

This article will check the performance of various ways of calling these commands.

But first of all, a quiz :) Suppose you are going to call your commands using the following class:

public class EqualsIgnoreCaseCaller {
    public static Object call( final ObjectWithCommands obj, final String commandName, final Object arg )
    {
        if ( commandName.equalsIgnoreCase( "Command1" ) )
            return obj.Command1( arg );
        if ( commandName.equalsIgnoreCase( "Command2" ) )
            return obj.Command2( arg );
        ...
        if ( commandName.equalsIgnoreCase( "Command99" ) )
            return obj.Command99( arg );
        if ( commandName.equalsIgnoreCase( "Command100" ) )
            return obj.Command100( arg );
        throw new IllegalArgumentException( "Unknown command: " + commandName ); // so the method always returns or throws
    }
}

Which of the following method calls will be the fastest (after warmup)?

  1. EqualsIgnoreCaseCaller.call( obj, "Command9", arg );
  2. EqualsIgnoreCaseCaller.call( obj, "Command99", arg );
  3. EqualsIgnoreCaseCaller.call( obj, "Command100", arg );
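
To give you a taste of the alternatives being compared, here is a hedged sketch of a switch-based caller (hypothetical code, not necessarily the variant benchmarked in the full article):

public class SwitchCaller {
    public static Object call( final ObjectWithCommands obj, final String commandName, final Object arg )
    {
        // A string switch is case-sensitive, so normalize the name first
        switch ( commandName.toLowerCase() )
        {
            case "command1":   return obj.Command1( arg );
            case "command2":   return obj.Command2( arg );
            // ...
            case "command99":  return obj.Command99( arg );
            case "command100": return obj.Command100( arg );
            default: throw new IllegalArgumentException( "Unknown command: " + commandName );
        }
    }
}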

Continue reading

Book review: Java Performance: The Definitive Guide by Scott Oaks

I have just finished reading this book. It is a must-have book for this website’s visitors. This book covers all the JVM facets: structure (what to tune), options (how to tune), and writing proper code. All the books I have seen before lack at least one of these. Most lack two.

This book starts with an overview of the Java performance toolkit – OS and JRE tools useful for performance engineers. This chapter may be a bit boring, but it contains a very useful list of commands for many common situations. Besides, it gives you a taste of Java Flight Recorder and Java Mission Control added in Java 7u40, which have unique capabilities amongst the other monitoring tools.

The next chapter covers JIT compilers, their architecture and tuning tips. This chapter will start showing you that you (likely) don’t know a lot of useful JDK parameters.

Chapters 5 and 6 will tell you about the 3 most useful garbage collectors bundled with the JRE: throughput, CMS and G1. You will learn how each of them operates under different conditions, how to monitor them and how to tune them.

At this point you may think that Java Performance: The Definitive Guide is similar to Java Performance by Charlie Hunt – that it just tells you how to tune the JVM instead of how to write properly performing code. No! The following chapters emphasize the best practices of writing fast code.

Buy on amazon.com

Chapter 7 is about heap analysis and optimization – first of all it tells you how to make heap dumps and histograms, and then describes several generic ways to decrease your application’s memory footprint (including string interning, which I have also written about).

Chapter 8 describes native memory consumption: heap, thread stacks, code cache and direct memory buffers. You will find out what Java 8 has added for native memory tracking, how to enable and configure large memory pages on Windows / Linux / Solaris, and why it is generally a bad idea to allocate heaps with a size between 32 and 38Gb (I have scratched the surface of this here).

Chapter 9 covers threading issues: how to manage a thread pool, what the ForkJoinPool added in Java 7 is and how it is used by the new Streams API in Java 8, the costs of thread synchronization (including the cost of memory barriers caused by synchronization), and false sharing (I have touched on it here). Finally, it describes how to tune JVM threads: setting the stack size, configuring biased locking, thread spinning and thread priorities.

Chapter 10 is dedicated to Java EE performance (or, to be more precise, to the non-DB related part of your web server code). It discusses what to store in the session state and how, how to configure the web server thread pool, session bean pitfalls, possible issues with XML and JSON parsing, object serialization, and finally choosing a coarse or fine grained interface with the client based on the network throughput.

Chapter 11 describes JDBC and JPA. Surprisingly, it does not teach you how to write proper SQL :) Instead it shows you how choosing the proper JDBC / JPA methods may far outweigh the gains from SQL query tuning.

Chapter 12 describes Java SE tuning: buffered I/O, class loading, random number generation, JNI, exceptions, String performance ( 1, 2, 3, 4, 5, 6 ), logging, Java Collections API, Java 8 lambdas vs anonymous classes and finally Java 8 stream and filter performance.

Finally, the appendix lists JVM flags useful for performance tuning. Just ten pages of flags :)

I would recommend this book as a reference book for any performance related investigations in Java 7 and Java 8.

Buy on amazon.com