Tag Archives: book review

Book review: Java Performance: The Definitive Guide by Scott Oaks

I have just finished reading this book. It is a must-have for visitors of this website. This book covers all facets of JVM performance: structure (what to tune), options (how to tune) and writing proper code. All the books I have seen before lack at least one of these aspects; most lack two.

This book starts with an overview of the Java performance toolkit – the OS and JRE tools useful for performance engineers. This chapter may be a bit boring, but it contains a very useful list of commands for many common situations. Besides, it gives you a taste of Java Flight Recorder and Java Mission Control, added in Java 7u40, which have capabilities unmatched by other monitoring tools.

The next chapter covers JIT compilers, their architecture and tuning tips. It will start showing you that you (likely) don't know many of the useful JDK parameters.

Chapters 5 and 6 will tell you about the three most useful garbage collectors bundled with the JRE: throughput, CMS and G1. You will learn how each of them operates under different conditions, how to monitor them and how to tune them.

At this point you may think that “Java Performance: The Definitive Guide” is similar to “Java Performance” by Charlie Hunt – that it just tells you how to tune the JVM instead of how to write properly performing code. No! The following chapters emphasize the best practices of writing fast code.

Chapter 7 is about heap analysis and optimization – first of all it will tell you how to make heap dumps and histograms, and then describe several generic ways to decrease your application's memory footprint (including string interning, which I have also written about).
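To illustrate the interning idea, here is a minimal sketch of my own (not a listing from the book): when your data contains many equal strings, storing the canonical String.intern() copy instead of freshly parsed instances can shrink the live set considerably.

```java
import java.util.ArrayList;
import java.util.List;

public class InterningFootprint {
    public static void main(String[] args) {
        // Simulate a data feed that keeps producing equal but distinct String objects,
        // e.g. country codes parsed from a file.
        List<String> countries = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            String code = new String("US");   // a new object on every iteration
            countries.add(code.intern());     // intern() returns the canonical copy, so the
                                              // list ends up holding 1M references to one object
        }
        System.out.println("Stored " + countries.size() + " references to interned strings");
    }
}
```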

Chapter 8 describes native memory consumption: heap, thread stacks, code cache and direct memory buffers. You will find out what Java 8 has added for native memory tracking, how to enable and configure large memory pages on Windows / Linux / Solaris, and why it is generally a bad idea to allocate heaps with a size between 32 and 38 GB (I have scratched the surface of this topic here).
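As a tiny illustration of one of those native memory consumers (my own sketch, not an example from the book): direct ByteBuffers live outside the Java heap, so they do not count against -Xmx, but they do enlarge the process footprint and are limited by -XX:MaxDirectMemorySize.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // A heap buffer lives inside the Java heap and is counted against -Xmx.
        ByteBuffer heapBuffer = ByteBuffer.allocate(64 * 1024 * 1024);

        // A direct buffer is allocated in native memory: it does not grow the heap,
        // but it does grow the process footprint.
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        System.out.println("heap buffer direct?   " + heapBuffer.isDirect());
        System.out.println("direct buffer direct? " + directBuffer.isDirect());
    }
}
```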

Chapter 9 covers threading issues: how to manage a thread pool, what the ForkJoinPool added in Java 7 is and how it is used by the new Streams API in Java 8, the costs of thread synchronization (including the cost of memory barriers caused by synchronization), and false sharing (which I have touched on here). Finally, it describes how to tune JVM threads: setting the stack size, configuring biased locking, thread spinning and thread priorities.
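Here is a small sketch of the ForkJoinPool / parallel stream interaction the chapter discusses (my own example, not from the book; note that running a parallel stream inside a dedicated pool is an implementation-dependent trick):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.LongStream;

public class ParallelStreamDemo {
    public static void main(String[] args) throws Exception {
        // By default a parallel stream runs on the common ForkJoinPool,
        // sized roughly to the number of cores.
        long sum = LongStream.rangeClosed(1, 10_000_000).parallel().sum();
        System.out.println("sum = " + sum);

        // The parallelism can be constrained by submitting the work from inside a
        // dedicated pool; the stream then executes on that pool (implementation detail).
        ForkJoinPool pool = new ForkJoinPool(2);
        long sum2 = pool.submit(
                () -> LongStream.rangeClosed(1, 10_000_000).parallel().sum()
        ).get();
        pool.shutdown();
        System.out.println("sum2 = " + sum2);
    }
}
```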

Chapter 10 is dedicated to Java EE performance (or, to be more precise, to the non-DB-related part of your web server code). It discusses what to store in the session state and how, how to configure the web server thread pool, session bean pitfalls, possible issues with XML and JSON parsing, object serialization and, finally, choosing a coarse- or fine-grained interface with the client based on the network throughput.

Chapter 11 describes JDBC and JPA. Surprisingly, it does not teach you how to write proper SQL 🙂 Instead it shows how choosing the proper JDBC / JPA methods may far outweigh the gains from SQL query tuning.
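As a quick illustration of the kind of JDBC-level choice the chapter has in mind (my own sketch, assuming an in-memory H2 database and a hypothetical events table), batching inserts through a PreparedStatement cuts the number of network round trips regardless of how good the SQL itself is:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcBatchDemo {
    // Hypothetical in-memory H2 URL - replace with your own data source.
    private static final String URL = "jdbc:h2:mem:test";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL)) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE events(id INT PRIMARY KEY, name VARCHAR(64))");
            }
            conn.setAutoCommit(false);   // one commit per batch instead of one per row

            String sql = "INSERT INTO events(id, name) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < 10_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "event-" + i);
                    ps.addBatch();               // queue the row locally
                    if (i % 1000 == 0) {
                        ps.executeBatch();       // send ~1000 rows in one round trip
                    }
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}
```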

Chapter 12 describes Java SE tuning: buffered I/O, class loading, random number generation, JNI, exceptions, String performance ( 1, 2, 3, 4, 5, 6 ), logging, the Java Collections API, Java 8 lambdas vs anonymous classes and, finally, Java 8 stream and filter performance.
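To give a flavour of the buffered I/O point, here is a naive sketch of my own (not a listing from the book): reading a file byte by byte through a BufferedInputStream replaces per-byte system calls with reads from an in-memory buffer.

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedIoDemo {
    public static void main(String[] args) throws IOException {
        String file = args[0];   // path to any reasonably large file

        // Unbuffered: every read() may translate into a system call.
        long start = System.nanoTime();
        try (InputStream in = new FileInputStream(file)) {
            while (in.read() != -1) { /* consume */ }
        }
        System.out.println("unbuffered: " + (System.nanoTime() - start) / 1_000_000 + " ms");

        // Buffered: reads are served from an 8 KB buffer, system calls become rare.
        start = System.nanoTime();
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            while (in.read() != -1) { /* consume */ }
        }
        System.out.println("buffered:   " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```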

Finally, the appendix lists JVM flags useful for performance tuning. Just ten pages of flags 🙂

I would recommend this book as a reference for any performance-related investigations in Java 7 and Java 8.

Book review: Systems Performance: Enterprise and the Cloud

All you need to know about Linux performance monitoring and tuning.

If you have visited this blog, you are likely interested in Java performance tuning. You want to know why ArrayList is better than LinkedList, why HashMap will usually outperform TreeMap (though you should still use the latter if you need a sorted map) or why date parsing can be so painfully slow.

Nevertheless, at some point in your career you will reach a situation where you have to consider your application's environment – the server hardware, other applications running on your server and other servers running in your network (as well as many other things).

You may, for example, want to know why disk operations were so quick on your development box, but became a major issue on the production box. There could be various reasons:

  • Trying to acquire a file lock on NFS too often
  • Another process is using the same disk – legitimately or due to a misconfiguration
  • The operating system is using the same disk for paging
  • Your development box has an SSD installed, but a production box relies on the “ancient” 🙂 HDD technology
  • Or lots of other reasons

Or you may be at the other end of the spectrum, trying to squeeze the last cycles out of a critical code path. In this situation you may want to know which levels of the memory hierarchy your code is accessing (L1–L3 CPU caches, RAM, disks). Java does not provide such information, so you have to use OS monitoring tools to obtain it. This will allow you to modify your algorithm or tune your dataset size so it fits into the appropriate level of the memory hierarchy.
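Here is a naive sketch of my own showing what that looks like in practice (for serious measurements use a harness like JMH): walking arrays of growing size shows the per-access time stepping up as the working set falls out of each cache level.

```java
public class CacheLevelsDemo {
    public static void main(String[] args) {
        // Walk arrays of growing size; once the working set no longer fits into a
        // cache level, the time per access jumps. This is only a rough illustration.
        for (int kb = 16; kb <= 64 * 1024; kb *= 2) {
            int[] data = new int[kb * 1024 / 4];
            long sum = 0;
            long start = System.nanoTime();
            for (int pass = 0; pass < 64; pass++) {
                for (int i = 0; i < data.length; i += 16) {   // step ~one cache line (64 bytes)
                    sum += data[i];
                }
            }
            long nsPerAccess = (System.nanoTime() - start) / (64L * (data.length / 16));
            System.out.printf("%6d KB : %d ns per access (sum=%d)%n", kb, nsPerAccess, sum);
        }
    }
}
```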

Or you may be on the cutting edge of progress and want to deploy your brand new application in the cloud. The biggest issue with clouds is that you have to pay for everything – excessive CPU usage (as well as non-excessive 🙂 ), suboptimal I/O, and high memory consumption (usually via a requirement to pay for larger, more expensive instances). Besides that, your application might be affected by the other tenants of the same physical box – for example, an HDD is a non-interruptible device, so one tenant can mount a temporary denial-of-service “attack” on the other tenants while staying within its own quota. What tools and strategies would you use for performance monitoring and tuning in the cloud?

“Systems Performance: Enterprise and the Cloud” by Brendan Gregg is the best reference book I have seen on Linux and Solaris performance monitoring. It is written for system administrators, so it is not bound to any particular programming language. The book starts with a description of methodologies which can be used for troubleshooting performance issues. The introductory chapters are followed by chapters dedicated to the following components:

  • CPU
  • Memory
  • File systems
  • Disks
  • Network
  • Cloud computing

Each of these chapters starts with an overview of the given component, followed by a description of applicable performance tuning methodologies.

The last chapter of this book describes a real-world performance investigation (in my opinion, you should start reading this book from that chapter 🙂 ).

I would recommend ordering a paper copy of this book, because it should serve as a handy reference for complex performance investigations.

Book review: Programming Pearls (2nd edition)

A marvelous algorithm collection!

Today I’d like to review the “Programming Pearls” (2nd edition) book. You may think it is seriously out of date because it was published in 1999 – and you would be wrong.

This book is a collection of hand-picked real problems solved by various developers at a time when RAM was counted in kilobytes and megabytes and CPU frequency in megahertz rather than gigahertz. These algorithms are still very important, because current computer systems are limited by memory access times rather than by CPU frequency. It means that in many cases you may gain a serious performance boost by getting these algorithms out of your grandfather’s trunk 🙂

This book cannot replace any of the well-known algorithm textbooks like “Algorithms (4th Edition)” or “Introduction to Algorithms”. Instead, it describes particular, quite counter-intuitive applications of these algorithms to concrete problems. And the most important difference between this book and algorithm textbooks is that it is easy and interesting to read, so you are unlikely to fall asleep while reading it late at night 🙂

This book describes algorithms in the following areas:

  • Numeric values sorting and searching, anagrams
  • Data structures – right choice of a structure for a problem
  • Program testing and verification
  • Performance tuning principles
  • Back-of-the-envelope calculations (very useful for engineering interviews – at Google, for example)
  • Algorithm design techniques
  • Code tuning
  • Squeezing space
  • Sorting
  • Searching
  • Heaps
  • Algorithms on strings (this is just an overview; an in-depth book in this area is “Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology”)

Here are a few problems discussed in this book:

  • How to sort up to 10 million unique non-negative integers, all of which are less than 10⁷, in 1.25 MB of memory? What if we have only 1 MB (or less) of memory available? What if our integers are not unique, but the number of occurrences of each value is limited? (A sketch of the bitmap-sort solution follows after this list.)
  • Find all sets of anagrams in the given dictionary.
  • You have a file with 4 billion 32-bit integers. Find an integer which is not in the file. How would you do it if you have an ample amount of RAM? What about the case when you have only a few hundred bytes of RAM, but are allowed to write temporary files?
  • Given a vector of floating-point numbers, find the maximum sum of any contiguous subvector of the input. The author starts with a straightforward O(n³) algorithm and works his way down to O(n²), O(n log n) and finally an O(n) algorithm.
  • You have a sorted array of 1000 integers. How can you tune a binary search algorithm for this case?
  • Given a very long sequence of bytes, how would you efficiently count the number of one (set) bits?
  • How to compress 75,000 English words into 52 KB of RAM in order to use them in a spellchecker program?
  • How to generate random text which looks as if it was written by a human, given several input texts to train your program?
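For the first problem above, here is a minimal sketch of the classic bitmap sort in Java (my own code, assuming one integer per line in an input file; the book’s own listings are not in Java):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.BitSet;

public class BitmapSort {
    private static final int MAX_VALUE = 10_000_000;   // all inputs are below 10^7

    public static void main(String[] args) throws IOException {
        // One bit per possible value: 10^7 bits is about 1.25 MB.
        BitSet seen = new BitSet(MAX_VALUE);

        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                seen.set(Integer.parseInt(line.trim()));
            }
        }

        // Reading the bits back in order produces the sorted output.
        for (int v = seen.nextSetBit(0); v >= 0; v = seen.nextSetBit(v + 1)) {
            System.out.println(v);
        }
    }
}
```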

Book review: MapReduce Design Patterns: Building Effective Algorithms and Analytics for Hadoop and Other Systems

A good reference book on MapReduce algorithms!

I’d like to review the “MapReduce Design Patterns: Building Effective Algorithms and Analytics for Hadoop and Other Systems” book, which I have recently read.

This book describes a list of MapReduce algorithms (the authors call them “patterns”) as well as a few ways to compose more complex algorithms out of these building blocks. The algorithms are divided into the following groups:

  • Summarization – Numerical summarization (sum, min, max, count, avg, median, std deviation); Inverted index summarization; Counting with counters (limited number of counters)
  • Filtering – Simple filter; Bloom filter; Top N filter; Distinct filter
  • Data organization – Structured to hierarchical; Partitioning; Binning; Total order sorting; Shuffling
  • Join patterns – Reduce side join; Replicated join; Composite join; Cartesian product

This book describes how to implement these algorithms on pure Hadoop using pure Java. The authors suggest that such a skill is still required for the 10% of the most complicated tasks which are not covered by frameworks like Pig.
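To give an idea of what “pure Hadoop using pure Java” means, here is a minimal sketch of the counting flavour of the numerical summarization pattern (my own code against the standard org.apache.hadoop.mapreduce API, not a listing from the book):

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountSummarization {

    // Map phase: emit (token, 1) for every whitespace-separated token of the input line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts for each token - the "count" flavour of
    // the numerical summarization pattern (also usable as a combiner).
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```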

A lot of reviewers have already pointed out that the examples in this book were not tested. There are a number of mistakes (though not serious ones) in the Java listings.

I suggest not paying too much attention to these mistakes – just do not treat this book as your first Hadoop textbook. Use it as a reference of MapReduce algorithms instead – read the text part of the book rather than the source code! It describes all these algorithms in sufficient depth (the Bloom filter description may be an exception) to understand how they operate even without prior Hadoop/MapReduce knowledge.

This book may also be useful for interview preparation for companies using BigData solutions (like Google) – it describes the algorithms, and algorithm knowledge is usually required in such interviews.