Category Archives: CPU optimization

Oracle Java Mission Control Overview

by Mikhail Vorontsov

Introduction

This article will describe Java Mission Control – a JDK GUI tool (jmc / jmc.exe) available since Java 7u40. We will also discuss Java Flight Recorder – a surprisingly good JDK profiler with some features not available in any other product. Finally, we will look at JOverflow Analyzer – yet another semi-free tool (free for development, commercial for production), which lets you detect many memory usage anti-patterns in your application based on a simple HPROF file.

Java Mission Control

Oracle Java Mission Control is a tool available in the Oracle JDK since Java 7u40. This tool originates from the JRockit JVM, where it was available for years. JRockit and its version of JMC were well described in the Oracle JRockit: The Definitive Guide book, written by two JRockit senior developers (also visit the Marcus Hirt blog – the first place you should be looking for any JMC news).

Oracle JMC can be used for 2 main purposes:

  • Monitoring the state of multiple running Oracle JVMs
  • Java Flight Recorder dump file analysis

JMC license

The current JMC license (see “Supplemental license terms” here) allows you to use JMC freely for development, but requires purchasing a commercial license if you want to use it in production (this is my personal reading of it, I am not a lawyer 🙂 ). This means that you can avoid spending extra dollars if you have a proper QA process 🙂

JMC plug-ins

JMC offers a few plug-ins. You can install them via the Help -> Install New Software menu (it is easy to never go there and not realize that plug-ins exist 🙁 ). Note that each plug-in may have its own license, so be careful and read the licenses. I will give an overview of the “JOverflow Analysis” plug-in in this article – it looks for a list of inefficient memory usage patterns in your application's heap.

Realtime process monitoring

You can attach to a JVM by right-clicking on it in the JVM Browser tab of the main window and choosing the “Start JMX Console” menu option. You will see the following screen. There is nothing fancy here – just pay attention to the “+” buttons, which allow you to add more counters to this screen.

Main monitoring screen

Continue reading

Implementing a world fastest Java int-to-int hash map*

by Mikhail Vorontsov

* Fastest among the int-to-int map implementations tested in my previous article, using the tests implemented for that article.

I would like to thank Sebastiano Vigna and Roman Leventov for sharing their hash map wisdom with me. Some implementation ideas were inspired by “Code Optimization: Effective Memory Usage” by Kris Kaspersky.

This article will give you a step-by-step overview of various implementation tricks used in modern hash map implementations. At the end of this article you will have what is probably the fastest Java int-to-int hash map available at the moment of writing.

Open indexing

Most modern hash maps are based on the idea of open indexing (also known as open addressing). What does this mean? Your map is based on an array of keys (values will always be placed at the matching array index, so forget about them for now). For each map operation you have to find your key in the array of keys. How is this implemented?

First of all, you need the initial lookup position in the array. It may be calculated by any function which maps a key into an integer in the range [0, array.length - 1]. A key is usually mapped into an integer by means of the hashCode method. The simplest such function is Math.abs(key.hashCode() % array.length) (the Math.abs is needed because the % result could be negative).

As you understand, mapping a large set of keys into a small set of integer values means that you may end up with some collisions (they are called hash collisions) – identical results of the initial function for different keys. Collisions are resolved by applying another function to the previous array index. The simplest of such functions is (prevIdx + 1) % array.length. There is one requirement for such functions – when applied in a loop, they should cover the whole set of array cells, so that you can use the whole array capacity. Another example of such a function is incrementing the index by one prime number when the array length is a different prime number.

Free and removed cells

In theory, that’s enough to implement your own hash map. In practice, you need to distinguish free and removed cells from occupied ones (you can avoid using removed cells if you do extra work in the remove method – see how it is implemented in the latest FastUtil). Removed cells are also known as “tombstones”.

Your keys array is initially filled with free cells. You set a cell into the “removed” state when you need to remove an existing key.

Let’s take a look at an example:

Open indexing example

This int key map uses the initial and next functions defined above:

initial = Math.abs( key % array.length );
nextIdx = ( prevIdx + 1 ) % array.length;

This map originally contained the keys 1, 2, 3 and 4, but key=3 was subsequently removed from the map, so it was replaced with a removed (“R”) placeholder.

Let’s see what we should do to find the following keys:

Key 2: the initial function points at the cell with index=2 at once. We have key=2 in the cell with index=2, so no further lookup is required.

Key 3: the initial function points at the cell with index=3. This cell is “removed”, so we have to apply the “nextIdx” function in a loop until we either find the key or a free cell. We check the cell with index=4 next – bad luck, the key is not equal. Then we check the cell with index=5: it is a free cell, so we stop the lookup – the key is not found.

Next, let’s see what happens if we want to add key=10: initial = key % array.length = 10 % 9 = 1. The cell at index=1 is already occupied by another key, so we can not use it. So is the cell at index=2. The cell at index=3 is “removed”, so we can reuse it and put key=10 into it.
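
To make the whole procedure concrete, here is a minimal sketch of a put method implementing this lookup. The m_keys / m_values / m_states fields are assumptions for illustration (a parallel byte[] of cell states, Trove-style; FastUtil-style maps reserve special key values instead), and the map is assumed to always keep at least one free cell:

private static final byte FREE = 0, USED = 1, REMOVED = 2;

public void put( final int key, final int value ) {
    int idx = Math.abs( key % m_keys.length );   // initial function
    int firstRemoved = -1;                       // first reusable "removed" cell
    while ( true ) {
        if ( m_states[ idx ] == FREE ) {         // end of chain - the key is not present
            final int target = firstRemoved != -1 ? firstRemoved : idx;
            m_keys[ target ] = key;
            m_values[ target ] = value;
            m_states[ target ] = USED;
            return;
        }
        if ( m_states[ idx ] == USED && m_keys[ idx ] == key ) {
            m_values[ idx ] = value;             // update an existing entry
            return;
        }
        if ( m_states[ idx ] == REMOVED && firstRemoved == -1 )
            firstRemoved = idx;                  // remember it, but keep looking for the key
        idx = ( idx + 1 ) % m_keys.length;       // next function
    }
}

Note how the loop remembers the first “removed” cell but keeps scanning until a free cell is found – exactly why key=10 above could safely reuse the cell at index=3.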

Removed cells cleanup

In many cases your hash map may degrade to O(n²) complexity if you keep the removed cells in the map. The fastest maps implement removed cell cleanup one way or another. As a result, all other map methods need to distinguish just 2 cell states: free or used. Besides that, remove is usually called infrequently compared to get and less frequently than put, which means that some extra complexity during key removal pays off through faster execution of the other methods. This article will use the FastUtil cleanup logic.
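
For reference, here is a sketch of such a cleanup in the FastUtil style (“backward shift deletion”): instead of leaving a tombstone, entries further down the same collision chain are shifted back into the freed cell. The FREE_KEY sentinel and the m_keys / m_values fields are assumptions for illustration:

private void shiftKeys( int pos ) {
    while ( true ) {
        final int last = pos;
        pos = ( pos + 1 ) % m_keys.length;
        while ( true ) {
            final int key = m_keys[ pos ];
            if ( key == FREE_KEY ) {        // end of chain - nothing left to move
                m_keys[ last ] = FREE_KEY;
                return;
            }
            final int slot = Math.abs( key % m_keys.length );  // its ideal cell
            // move the entry only if its probe chain from 'slot' passes through 'last'
            if ( last <= pos ? ( last >= slot || slot > pos )
                             : ( last >= slot && slot > pos ) )
                break;
            pos = ( pos + 1 ) % m_keys.length;
        }
        m_keys[ last ] = m_keys[ pos ];      // shift the entry back...
        m_values[ last ] = m_values[ pos ];  // ...and continue from its old cell
    }
}

After such cleanup no “removed” state is needed at all – the remaining methods only distinguish free and used cells.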

Key scrambling

The initial index function I have mentioned above ( initial = Math.abs( key % array.length ); ) will put consecutive keys into consecutive array cells. This is highly undesirable if your next-cell function simply picks the next array cell, because it will create long lookup chains in a pretty common case.

In order to avoid this, we need to “scramble” the key, shuffling its bits. I will rely on the FastUtil scrambling code:

private static final int INT_PHI = 0x9E3779B9;

public static int phiMix( final int x ) {
    final int h = x * INT_PHI;
    return h ^ (h >> 16);
}

As a result, consecutive keys will not be placed in consecutive array cells, thus keeping the average hash chain length under control. In the case of “random” keys you are also likely to end up with a pretty good distribution of keys over the keys array.
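
A related note: once keys are scrambled, fast implementations typically also keep the capacity a power of 2, so the relatively expensive % operator can be replaced with a bit mask (which also removes the need for Math.abs, since the mask clears the sign bit). A sketch, assuming m_keys.length is a power of 2:

final int mask = m_keys.length - 1;   // works only for power-of-2 capacities
int idx = phiMix( key ) & mask;       // initial index, always non-negative
idx = ( idx + 1 ) & mask;             // next index in the chain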

Now you are definitely ready to implement your own hash map. We will be implementing an int-int map in the next several sections of this article.

Continue reading

Large HashMap overview: JDK, FastUtil, Goldman Sachs, HPPC, Koloboke, Trove – January 2015 version

by Mikhail Vorontsov

This is a major update of the previous version of this article. The reasons for this update are:

  • The major performance updates in fastutil 6.6.0
  • Updates in the “get” test from the original article, addition of “put/update” and “put/remove” tests
  • Adding identity maps to all tests
  • Now using different objects for any operations after map population (in the case of Object keys – except identity maps). The old approach of reusing the same keys gave an unfair advantage to Koloboke.

I would like to thank Sebastiano Vigna for providing the initial versions of “get” and “put” tests.

Introduction

This article will give you an overview of hash map implementations in 5 well-known libraries, with JDK HashMap as a baseline. We will test separately:

  • Primitive to primitive maps
  • Primitive to object maps
  • Object to primitive maps
  • Object to Object maps
  • Object (identity) to Object maps

This article will provide you with the results of 3 tests:

  • “Get” test: Populate a map with a pregenerated set of keys (in the JMH setup), then make ~50% successful and ~50% unsuccessful “get” calls. For non-identity maps with object keys we use a distinct set of keys (a different object with the same value is used for the successful “get” calls).
  • “Put/update” test: Add a pregenerated set of keys to the map. In the second loop, add an equal set of keys (different objects with the same values) to this map again (making updates). Identical keys are used for identity maps and for maps with primitive keys.
  • “Put/remove” test: In a loop: add 2 entries to the map, then remove 1 of the existing entries (the “add” pointer is increased by 2 on each iteration, the “remove” pointer is increased by 1).

This article will just give you the test results. There will be a follow-up article on the most interesting implementation details of the various hash maps.

Test participants

JDK 8

JDK HashMap is the oldest hash map implementation in this test. It got a couple of major updates recently – shared underlying storage for empty maps in Java 7u40 and the possibility to convert underlying hash bucket linked lists into tree maps (for better worst-case performance) in Java 8.

FastUtil 6.6.0

FastUtil provides a developer with all 4 combinations of primitive and object keys/values listed above. Besides that, there are several other types of maps available for each parameter type combination: array map, AVL tree map and RB tree map. Nevertheless, we are only interested in hash maps in this article.

Goldman Sachs Collections 5.1.0

Goldman Sachs open-sourced its collections library about 3 years ago. In my opinion, this library provides the widest range of collections out of the box (if you need them). You should definitely pay attention to it if you need more than a hash map, a tree map and a list for your work 🙂 For the purposes of this article, GS Collections provides normal, synchronized and unmodifiable versions of each hash map. The last 2 are just facades for the normal map, so they don’t provide any performance advantages.

HPPC 0.6.1

HPPC provides array lists, array deques, hash sets and hash maps for all primitive types. HPPC provides normal hash maps for primitive keys and both normal and identity hash maps for object keys.

Koloboke 0.6.5

Koloboke is the youngest of all the libraries in this article. It is developed as part of the OpenHFT project by Roman Leventov. This library currently provides hash maps and hash sets for all primitive/object combinations. The library was recently renamed from HFTC, so some artifacts in my tests still use the old library name.

Trove 3.0.3

Trove has been available for a long time and is quite stable. Unfortunately, not much development is happening in this project at the moment. Trove provides list, stack, queue, hash set and map implementations for all primitive/object combinations. I have already written about Trove.

Data storage implementations and tests

This article will look at 5 different sorts of maps:

  1. int-int
  2. int-Integer
  3. Integer-int
  4. Integer-Integer
  5. Integer (identity map)-Integer

We will use JMH 1.0 for testing. Here is the test description: for each map size in (10K, 100K, 1M, 10M, 100M) (outer loop), generate a set of random keys (they will be used for each test at a given map size) and then run a test for each map implementation (inner loop). Each test will be run 100M / map_size times. The “get”, “put” and “remove” tests are run separately, so you can update the test source code and run only some of them.

Note that each test suite takes around 7-8 hours on my box. Spreadsheet-friendly results are printed to stdout once all test suites finish.

int-int

Each section will start with a table showing how data is stored inside each map. Only arrays will be shown here (some maps have special fields for a few corner cases).

tests.maptests.primitive.FastUtilMapTest – int[] key, int[] value
tests.maptests.primitive.GsMutableMapTest – int[] keys, int[] values
tests.maptests.primitive.HftcMutableMapTest – long[] (key – low bits, value – high bits)
tests.maptests.primitive.HppcMapTest – int[] keys, int[] values, boolean[] allocated
tests.maptests.primitive.TroveMapTest – int[] _set, int[] _values, byte[] _states

As you can see, Koloboke uses a single array, FastUtil and GS use 2 arrays, and HPPC and Trove use 3 arrays to store the same data. Let’s see what the actual performance will be.
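
The advantage of the Koloboke layout is that a key and its value always share the same long, so they land on the same cache line and an entry access costs at most one cache miss. A sketch of such packing (matching the “key – low bits, value – high bits” layout above):

// pack: value in the high 32 bits, key in the low 32 bits
final long entry = ( (long) value << 32 ) | ( key & 0xFFFFFFFFL );
// unpack
final int storedKey   = (int) entry;              // low 32 bits
final int storedValue = (int) ( entry >>> 32 );   // high 32 bits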

“Get” test results

All “get” tests make around 50% unsuccessful get calls in order to test both the success and failure paths in each map.

Each test results section will contain a results graph. The X axis shows the map size, the Y axis – the time to run a test in milliseconds. Note that each test in a graph has a fixed number of map method calls: 100M get calls for the “get” test; 200M put calls for the “put” test; 100M put and 50M remove calls for the “remove” test.

Links to OpenOffice spreadsheets with all test results are provided at the end of this article.

int-int ‘get’ test results

The GS and FastUtil result lines are nearly parallel, but FastUtil is faster due to a lower constant factor. Koloboke becomes the fastest only on large enough maps. Trove is slower than the other implementations at every map size.

“Put” test results

“Put” tests insert all keys into a map and then use another, equal set of keys to insert entries into the map again (these method calls update the existing entries). We make 100M put calls with “insert” functionality and 100M put calls with “update” functionality in each test.

int-int ‘put’ test results

This test shows the implementation differences more clearly: Koloboke is the fastest from the start (though FastUtil is as fast on small maps); GS and FastUtil are parallel again (but GS is always slower). HPPC and Trove are the slowest.

“Remove” test results

In the “remove” test we interleave 2 put operations with 1 remove operation, so that the map size grows by 1 after each group of put/remove calls. In total we make 100M put and 50M remove calls.

int-int ‘remove’ test results

The results are similar to the “put” test (of course – both tests make a majority of put calls!): Koloboke quickly becomes the fastest implementation; FastUtil is a bit faster than GS on all map sizes; HPPC and Trove are the slowest, but HPPC performs reasonably well on map sizes up to 1M entries.

int-int summary

The underlying storage implementation is the most important factor defining hash map performance: the fewer memory accesses an implementation makes to reach an entry (especially for large maps, which do not fit into CPU cache), the faster it is. As you can see, the single-array Koloboke is faster than the other implementations in most tests on large map sizes. For smaller map sizes, the CPU cache starts hiding the cost of accessing several arrays – in that case other implementations may be faster due to fewer CPU instructions required per method call: FastUtil is the second best implementation for primitive collection tests due to its highly optimized code.

Continue reading

Performance of various general compression algorithms – some of them are unbelievably fast!

by Mikhail Vorontsov

07 Jan 2015 update: extending LZ4 description (thanks to Mikael Grev for a hint!)

This article will give you an overview of the performance of several general compression algorithm implementations. As it turns out, some of them can be used even when your CPU requirements are pretty strict.

In this article we will compare:

  • JDK GZIP – a slow algorithm with good compression, which could be used for long term data compression. Implemented in the JDK as java.util.zip.GZIPInputStream / GZIPOutputStream.
  • JDK deflate – another algorithm available in the JDK (it is used for zip files). Unlike GZIP, you can set a compression level for this algorithm, which allows you to trade compression time for the output file size. Available levels are 0 (store, no compression) and 1 (fastest compression) to 9 (slowest compression) – see the sketch after this list. Implemented as java.util.zip.DeflaterOutputStream / InflaterInputStream.
  • Java implementation of the LZ4 compression algorithm – this is the fastest algorithm in this article, with a compression level a bit worse than the fastest deflate. I advise you to read the Wikipedia article about this algorithm to understand its usage. It is distributed under the friendly Apache License 2.0.
  • Snappy – a popular compressor developed at Google, which aims to be fast and provide relatively good compression. I have tested this implementation. It is also distributed under the Apache License 2.0.
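
To illustrate the deflate level trade-off mentioned above, here is a minimal sketch (the helper method name is arbitrary) of compressing a byte[] at an explicit level:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;

// Compress 'data' with an explicit deflate level (0..9).
public static byte[] deflate( final byte[] data, final int level ) throws IOException {
    final ByteArrayOutputStream bos = new ByteArrayOutputStream();
    final Deflater deflater = new Deflater( level );
    try ( DeflaterOutputStream out = new DeflaterOutputStream( bos, deflater ) ) {
        out.write( data );
    }                      // close() finishes the compressed stream
    deflater.end();        // release the native memory held by the Deflater
    return bos.toByteArray();
}

Calling deflate( data, 1 ) and deflate( data, 9 ) on the same input is an easy way to see the speed/size trade-off on your own data.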

Continue reading

Introduction to JMH

by Mikhail Vorontsov

11 Sep 2014: Article was updated for JMH 1.0.

10 May 2014: Original version.

Introduction

This article will give you an overview of the basic rules and capabilities of JMH. The second article will give you an overview of the JMH profilers.

JMH is a new microbenchmarking framework (first released late 2013). Its distinctive advantage over other frameworks is that it is developed by the same people at Oracle who implement the JIT compiler. In particular I want to mention Aleksey Shipilev and his brilliant blog. JMH is likely to stay in sync with the latest Oracle JRE changes, which makes its results very reliable.

You can find JMH examples here.

JMH has only 2 requirements (everything else is a recommendation):

  • You need to create a Maven project using a command from the JMH official web page
  • You need to annotate test methods with the @Benchmark annotation

In some cases it is not convenient to create a new project just for performance testing purposes. In this situation you can rather easily add JMH to an existing project. You need to make the following steps:

  1. Ensure your project directory structure is recognizable by Maven (your benchmarks are at src/main/java at least)
  2. Copy the 2 JMH Maven dependencies and the maven-shade-plugin from the JMH archetype. No other plugins mentioned in the archetype are required at the moment of writing (JMH 1.0).

How to run

Run the following Maven command to create a template JMH project from an archetype (it may change over time – check for the latest version near the start of the official JMH page):

$ mvn archetype:generate \
          -DinteractiveMode=false \
          -DarchetypeGroupId=org.openjdk.jmh \
          -DarchetypeArtifactId=jmh-java-benchmark-archetype \
          -DgroupId=org.sample \
          -DartifactId=test \
          -Dversion=1.0

Alternatively, copy 2 JMH dependencies and maven-shade-plugin from the JMH archetype (as described above).

Create one (or a few) Java files. Annotate some methods in them with the @Benchmark annotation – these will be your performance benchmarks.
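
For example, a minimal benchmark class could look like this (the class, field and method names are arbitrary; the annotation settings shown are a matter of taste):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
public class StringConcatBenchmark {
    private String a = "some";
    private String b = "string";

    @Benchmark
    public String concat() {
        return a + b;   // the code being measured
    }
}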

You have at least 2 simple options to run your tests:

Follow the procedure from the official JMH page:

$ cd your_project_directory/
$ mvn clean install
$ java -jar target/benchmarks.jar

The last command should be entered verbatim – regardless of your project settings, you will end up with a target/benchmarks.jar sufficient to run all your tests. This option has a slight disadvantage – it will use the default JMH settings for anything not provided via annotations (the @Fork, @Warmup and @Measurement annotations become nearly mandatory in this mode). Use the java -jar target/benchmarks.jar -h command to see all available command line options (there are plenty).

Or use the old way: add a main method to one of your classes and write a JMH start script inside it. Here is an example:

Options opt = new OptionsBuilder()
                .include(".*" + YourClass.class.getSimpleName() + ".*")
                .forks(1)
                .build();
new Runner(opt).run();

After that you can run it with target/benchmarks.jar as your classpath:

$ cd your_project_directory/
$ mvn clean install
$ java -cp target/benchmarks.jar your.test.ClassName

Now, after this extensive “how to run it” manual, let’s look at the framework itself.

Continue reading

Trove library: using primitive collections for performance

by Mikhail Vorontsov


19 July 2014: article text cleanup, added a chapter on JDK to Trove migration.
16 July 2012: original version.

This article describes the Trove library, which contains a set of primitive collection implementations. The latest version of Trove (3.1a1 at the time of writing) is described here.

Why should you use Trove? Why not keep using the well-known JDK collections? The answer is performance and memory consumption. Trove doesn’t use any java.lang.Number subclasses internally, so you don’t have to pay for boxing/unboxing each time you want to pass/query a primitive value to/from a collection. Besides, you don’t have to waste memory on the boxed numbers (24 bytes for a Long/Double, 16 bytes for smaller types) and references to them. For example, if you want to store an Integer in a JDK map, you need 4 bytes for a reference (or 8 bytes on huge heaps) and 16 bytes for the Integer instance. Trove, on the other hand, uses just 4 bytes to store an int. Trove also doesn’t create a Map.Entry for each key-value pair, unlike java.util.HashMap, further reducing the map memory footprint. For sets, it doesn’t use a Map internally, keeping just the set values.
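
A small sketch of the boxing difference (Trove 3 class names; the surrounding method is just for illustration):

import gnu.trove.map.hash.TIntIntHashMap;
import java.util.HashMap;
import java.util.Map;

public static void boxingDemo() {
    final Map<Integer, Integer> jdkMap = new HashMap<>();
    jdkMap.put( 1, 100 );               // both arguments are autoboxed
    final int v1 = jdkMap.get( 1 );     // the result is unboxed

    final TIntIntHashMap troveMap = new TIntIntHashMap();
    troveMap.put( 1, 100 );             // primitives in, no boxing
    final int v2 = troveMap.get( 1 );   // a primitive int out
}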

There are 3 main collection types in Trove: array lists, sets and maps. There are also queues, stacks and linked lists, but they are less important (and instances of these collections usually tend to be rather small).

Array lists

All array lists are built on top of an array of the corresponding data type (for example, int[] for TIntArrayList). There is a small problem which you should deal with: the value for absent elements (the default value). It is zero by default, but you can override it using:

public TIntArrayList(int capacity, int no_entry_value);
public static TIntArrayList wrap(int[] values, int no_entry_value);

There are 2 useful methods called getQuick/setQuick – they just access the underlying array without any additional checks. As a side effect they allow you to access elements between the list size and its capacity (don’t use this too much – it is still better to add values legally and then getQuick them as long as you stay inside the array list boundaries).
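
A quick usage sketch for the methods above (gnu.trove.list.array.TIntArrayList in Trove 3):

final TIntArrayList list = TIntArrayList.wrap( new int[] { 5, 10, 15 }, -1 );
final int second = list.getQuick( 1 );   // 10, no range check is made
list.setQuick( 1, 20 );                  // overwrites in place, also without checks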

Each Trove array list has several helper methods which implement the java.util.Collections functionality:

public void reverse()
public void reverse(int from, int to)
public void shuffle(java.util.Random rand)
public void sort()
public void sort(int fromIndex, int toIndex)
public void fill(int val)
public void fill(int fromIndex, int toIndex, int val)
public int binarySearch(int value)
public int binarySearch(int value, int fromIndex, int toIndex)
public int max()
public int min()
public int sum()

Continue reading

Creating an exception in Java is very slow

by Mikhail Vorontsov


29 June 2014 update – now using JMH for testing. Added 2 ways to avoid the cost of exceptions. Made some editorial changes to the original text to reflect JMH usage.

Filling in the stack trace is slow…

Creating an exception in Java is a very slow operation. Expect that throwing an exception will cost you around 1-5 microseconds. Nearly all of this time is spent filling in the exception stack trace. The deeper the stack trace is, the more time it will take to populate it.
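
One way to avoid most of this cost (a sketch – it only makes sense for exceptions used as control flow, where the stack trace is not needed) is an exception class that never fills in its trace:

public class StacklessException extends RuntimeException {
    public StacklessException( final String message ) {
        // Throwable(message, cause, enableSuppression, writableStackTrace) - Java 7+
        super( message, null, false, false );
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;   // skip the expensive native stack walk (pre-Java 7 way)
    }
}

Such an exception can also be created once and cached, which avoids the allocation as well.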

Usually we throw an exception in the case of unexpected problems. This means that we don’t expect exceptions to be thrown at a rate of thousands per second per process. But sometimes you will encounter a method which uses exceptions for much more likely events. We have already seen a good (actually, bad) example of such behavior in the Base64 encoding and decoding performance article: sun.misc.BASE64Decoder is extremely slow due to using an exception for “give me more data” requests:

at java.lang.Throwable.fillInStackTrace(Throwable.java:-1)
at java.lang.Throwable.fillInStackTrace(Throwable.java:782)
- locked <0x6c> (a sun.misc.CEStreamExhausted)
at java.lang.Throwable.<init>(Throwable.java:250)
at java.lang.Exception.<init>(Exception.java:54)
at java.io.IOException.<init>(IOException.java:47)
at sun.misc.CEStreamExhausted.<init>(CEStreamExhausted.java:30)
at sun.misc.BASE64Decoder.decodeAtom(BASE64Decoder.java:117)
at sun.misc.CharacterDecoder.decodeBuffer(CharacterDecoder.java:163)
at sun.misc.CharacterDecoder.decodeBuffer(CharacterDecoder.java:194)

You may encounter the same problem if you try to run the pack method from String packing part 2: converting Strings to any other objects on a string starting with a digit but followed by letters. Let’s see how long it takes to pack ‘12345’ and ‘12345a’ with that method:

Benchmark                        (m_param)   Mode   Samples         Mean   Mean error    Units
t.StringPacking2Tests.testPack      12345a  thrpt        10        0.044        0.000   ops/us
t.StringPacking2Tests.testPack       12345  thrpt        10        7.934        0.154   ops/us

As you can see, we were able to convert “12345” from a String about 200 times faster than “12345a”. Most of the processing time is again spent filling in stack traces:

at java.lang.Throwable.fillInStackTrace(Throwable.java:-1)
at java.lang.Throwable.fillInStackTrace(Throwable.java:782)
- locked <0x87> (a java.lang.NumberFormatException)
at java.lang.Throwable.<init>(Throwable.java:265)
at java.lang.Exception.<init>(Exception.java:66)
at java.lang.RuntimeException.<init>(RuntimeException.java:62)
at java.lang.IllegalArgumentException.<init>(IllegalArgumentException.java:53)
at java.lang.NumberFormatException.<init>(NumberFormatException.java:55)
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.valueOf(Long.java:540)
at tests.StringPacking2Tests.pack(StringPacking2Tests.java:69)
...

Continue reading

String switch performance

by Mikhail Vorontsov

Suppose we have a large number of commands. For the simplicity of this article, they will all be implemented as methods of one class. We should be able to call any of these commands by its string name, and we will allow case-insensitive calls. Our “class with commands” looks like:

public class ObjectWithCommands {
    public Object Command1( final Object arg ) { return arg; }
    public Object Command2( final Object arg ) { return arg; }
    ...
    public Object Command9( final Object arg ) { return arg; }
    public Object Command10( final Object arg ) { return arg; }
    ...
    public Object Command99( final Object arg ) { return arg; }
    public Object Command100( final Object arg ) { return arg; }
}

This article will check the performance of various ways of calling these commands.

But first of all, a quiz 🙂 Suppose you are going to call your commands using the following class:

public class EqualsIgnoreCaseCaller {
    public static Object call( final ObjectWithCommands obj, final String commandName, final Object arg )
    {
        if ( commandName.equalsIgnoreCase( "Command1" ) )
            return obj.Command1( arg );
        if ( commandName.equalsIgnoreCase( "Command2" ) )
            return obj.Command2( arg );
        ...
        if ( commandName.equalsIgnoreCase( "Command99" ) )
            return obj.Command99( arg );
        if ( commandName.equalsIgnoreCase( "Command100" ) )
            return obj.Command100( arg );
        // added so that the method always returns or throws
        throw new IllegalArgumentException( "Unknown command: " + commandName );
    }
}

Which of the following method calls will be the fastest (after warmup)?

  1. EqualsIgnoreCaseCaller.call( obj, "Command9", arg );
  2. EqualsIgnoreCaseCaller.call( obj, "Command99", arg );
  3. EqualsIgnoreCaseCaller.call( obj, "Command100", arg );

Continue reading

Base64 encoding and decoding performance

by Mikhail Vorontsov

02 Apr 2014 update: added the Guava implementation and a byte[] <-> byte[] section.

21 Mar 2014 update: major rewrite + added javax.xml.bind.DatatypeConverter class description.

21 Feb 2014 update: added MiGBase64 class description.

25 Dec 2013 update: added Java 8 java.util.Base64 class description.

We will discuss what the Base64 algorithm is and compare the performance of several different well-known libraries implementing Base64 encoding/decoding.

Base64 is an algorithm mapping all 256 byte values to 64 printable byte values (printable means those bytes are printable in the US-ASCII encoding). This is done by packing 3 input bytes into 4 output bytes. Base64 is generally used in text-based data exchange protocols when there is still a need to transfer some binary data. The best known example is the encoding of e-mail attachments.

JDK Base64 implementations

Surprisingly, there was no Base64 implementation in the core JDK classes before Java 6. Some web forums advise using two non-public sun.* classes which are present in all Sun/Oracle JDKs: sun.misc.BASE64Encoder and sun.misc.BASE64Decoder. The advantage of using them is that you don’t need to ship any other libraries with your application. The disadvantage is that those classes are not supposed to be used outside of the JDK (and, of course, they can be removed from the JDK implementation… in theory, at least).

Sun added another Base64 implementation in Java 6 (thanks to Thomas Darimont for his reminder!): it was hidden in the javax.xml.bind package and was unknown to many developers. The javax.xml.bind.DatatypeConverter class has 2 static methods – parseBase64Binary and printBase64Binary – which are used for Base64 decoding and encoding respectively.

Java 8 has finally added a Base64 implementation in the java.util namespace – java.util.Base64. This static factory class provides you with the basic, MIME, and URL and filename safe encoder and decoder implementations.
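
Usage of the Java 8 class is straightforward; a quick sketch:

import java.util.Base64;

final String encoded = Base64.getEncoder().encodeToString( data );      // data is a byte[]
final byte[] decoded = Base64.getDecoder().decode( encoded );
final String mime    = Base64.getMimeEncoder().encodeToString( data );  // MIME flavour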

Surprisingly (or maybe not), all these implementations do not share any logic, even in Java 8.

Third party Base64 implementations

I will also mention 4 quite well-known third-party Base64 implementations.

  • The first one is present in the Apache Commons Codec library and called org.apache.commons.codec.binary.Base64.
  • The next one is present in the Google Guava library and accessible via com.google.common.io.BaseEncoding.base64() static method.
  • Another one was written by Robert Harder and available from his website: http://iharder.net/base64. This is a single class which you will have to add to your project.
  • The last one was written by Mikael Grev nearly 10 years ago. It is available from http://migbase64.sourceforge.net/. This is also a single class you have to add to your project. This implementation claims to be the fastest Base64 implementation, but unfortunately this is no longer true. Besides, it has the strictest limit on the maximal length of a byte[] to decode (see below).

Continue reading