I/O bound algorithms: SSD vs HDD

by Mikhail Vorontsov

This article investigates the impact of modern SSDs on I/O bound algorithms designed in the HDD era.

Improved write speed of SSD

Modern SSDs provide read/write speeds of up to 500 MB/sec. Compare that to the cap of roughly 100 MB/sec on the speed of modern HDDs. It means that your application has to produce output five times faster than before in order to still be I/O bound.

Let’s run three tests:

  1. Fill an 8 GB file with a repeating sequence of 1024 bytes using a BufferedOutputStream with a 32K buffer. Data is written in a loop and no extra processing is done inside the loop (the testWriteNoProcessing method in the following code snippet).
  2. Fill an 8 GB file with a sequence of 1024 bytes which is recomputed before writing on every iteration. Data is written using a BufferedOutputStream with a 32K buffer (testWriteSimple).
  3. Same as the previous test, but the data is not written to disk. This test estimates how long it takes to prepare the data for writing (testInMemoryProcessing).

 private static void testWriteNoProcessing( final long fileSize ) throws IOException {
    final File outFile = File.createTempFile( "tmp", ".tmp" );
    final long segments = fileSize / 1024;
    final byte[] buf = new byte[ 1024 ];
    //generate initial data
    new Random( 1000 ).nextBytes( buf );
    //write durations
    final long[] writes = new long[(int) segments];
 
    final long start = System.currentTimeMillis();
    final OutputStream os = new BufferedOutputStream( new FileOutputStream( outFile ), 32768 );
    try
    {
        for ( long count = 0; count < segments; ++count )
        {
            final long before = System.currentTimeMillis();
            os.write( buf );
            final long after = System.currentTimeMillis();
            writes[((int) count)] = after - before;
        }
    }
    finally {
        final long beforeClose = System.currentTimeMillis();
        os.close();
        final long time = System.currentTimeMillis();
        System.out.println( "File size = " + outFile.length() / 1024 / 1024 / 1024 + " G");
        System.out.println( "Time to close a file = " + ((time - beforeClose) / 1000.0) + " sec" );
        System.out.println( "Time to write a file with size = " + ( fileSize / 1024 / 1024 / 1024) + " G = " +
                ( (time - start) / 1000.0) + " sec "
        );
        outFile.delete();
    }
 
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    long total = 0;
    int cnt = 0;
    List<Long> lst = new ArrayList<Long>( 1024 );
    for (final long len : writes) {
        if (len != 0) {
            if (len < min) min = len;
            if (len > max) max = len;
            cnt++;
            total += len;
            lst.add(len);
        }
    }
    System.out.println( "Expected count = " + writes.length / 32 + " Actual count = " + cnt );
    System.out.println( "Min duration = " + min + " Max duration = " + max );
    System.out.println( "Avg duration = " + ( total / cnt ) );
 
    Collections.sort( lst );
    System.out.println("Median duration = " + lst.get(lst.size() / 2));
    System.out.println("75% duration = " + lst.get(lst.size() * 3 / 4));
    System.out.println("90% duration = " + lst.get(lst.size() * 9 / 10));
    System.out.println("95% duration = " + lst.get(lst.size() * 19 / 20));
    System.out.println( "99% duration = " + lst.get( lst.size() * 99 / 100 ) );
 
    for ( int i = 0 ; i < lst.size(); ++i )
        if ( lst.get( i ) != 1 )
            System.out.println( "writes[" + i + "] = " + lst.get( i ) );
}
private static void testWriteSimple( final long fileSize ) throws IOException {
    final File outFile = File.createTempFile( "tmp", ".tmp" );
    final long segments = fileSize / 1024;
    final byte[] buf = new byte[ 1024 ];
    //generate initial data
    new Random( 1000 ).nextBytes( buf );
    //write durations
    final long[] writes = new long[(int) segments];
 
    final long start = System.currentTimeMillis();
    final OutputStream os = new BufferedOutputStream( new FileOutputStream( outFile ), 32768 );
    try
    {
        for ( long count = 0; count < segments; ++count )
        {
            //some calculation before each write
            for ( int i = 0; i < 1024; ++i )
                buf[ i ] = (byte) (buf[ i ] * buf[ i ] / 3);
            final long before = System.currentTimeMillis();
            os.write( buf );
            final long after = System.currentTimeMillis();
            writes[((int) count)] = after - before;
        }
    }
    finally {
        final long beforeClose = System.currentTimeMillis();
        os.close();
        final long time = System.currentTimeMillis();
        System.out.println( "File size = " + outFile.length() / 1024 / 1024 / 1024 + " G");
        System.out.println( "Time to close a file = " + ((time - beforeClose) / 1000.0) + " sec" );
        System.out.println( "Time to write a file with size = " + ( fileSize / 1024 / 1024 / 1024) + " G = " +
                ( (time - start) / 1000.0) + " sec "
        );
        outFile.delete();
    }
 
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    long total = 0;
    int cnt = 0;
    List<Long> lst = new ArrayList<Long>( 1024 );
    for (final long len : writes) {
        if (len != 0) {
            if (len < min) min = len;
            if (len > max) max = len;
            cnt++;
            total += len;
            lst.add(len);
        }
    }
    System.out.println( "Expected count = " + writes.length / 32 + " Actual count = " + cnt );
    System.out.println( "Min duration = " + min + " Max duration = " + max );
    System.out.println( "Avg duration = " + ( total / cnt ) );
 
    Collections.sort( lst );
    System.out.println("Median duration = " + lst.get(lst.size() / 2));
    System.out.println("75% duration = " + lst.get(lst.size() * 3 / 4));
    System.out.println("90% duration = " + lst.get(lst.size() * 9 / 10));
    System.out.println("95% duration = " + lst.get(lst.size() * 19 / 20));
    System.out.println( "99% duration = " + lst.get( lst.size() * 99 / 100 ) );
 
    for ( int i = 0 ; i < lst.size(); ++i )
        if ( lst.get( i ) != 1 )
            System.out.println( "writes[" + i + "] = " + lst.get( i ) );
}
 private static void testInMemoryProcessing( final long fileSize ) throws IOException {
    final long segments = fileSize / 1024;
    final byte[] buf = new byte[ 1024 ];
    //generate initial data
    new Random( 1000 ).nextBytes( buf );
 
    long sum = 0;
    final long start = System.currentTimeMillis();
    try
    {
        for ( long count = 0; count < segments; ++count )
        {
            //some calculation before each write
            for ( int i = 0; i < 1024; ++i )
                buf[ i ] = (byte) (buf[ i ] * buf[ i ] / 3);
            for ( int i = 0; i < 1024; ++i )
                sum += buf[ i ];
        }
    }
    finally {
        System.out.println( sum );
        final long time = System.currentTimeMillis();
        System.out.println( "Time to generate data for a file with size = " + ( fileSize / 1024 / 1024 / 1024) + " G = " +
                ( (time - start) / 1000.0) + " sec "
        );
    }
}

Tests were performed on two laptops, one with an HDD and one with an SSD. In both cases the disks were approximately half full. All durations are in milliseconds unless otherwise specified.

HDD

Write, no data processing
 File size = 8 G
 Time to close a file = 0.106 sec
 Time to write a file with size = 8 G = 90.51 sec
 Expected write count = 262144 Actual write count = 19972
 Min duration = 1 Max duration = 211
 Avg duration = 5
 Median duration = 1
 75% max duration = 1
 90% max duration = 16
 95% max duration = 25
 99% max duration = 53
In-memory processing
 Time to generate data for a file with size = 8 G = 19.461 sec
Write with data processing
 File size = 8 G
 Time to close a file = 0.069 sec
 Time to write a file with size = 8 G = 97.749 sec
 Expected write count = 262144 Actual write count = 13725
 Min duration = 1 Max duration = 268
 Avg duration = 4
 Median duration = 1
 75% max duration = 1
 90% max duration = 15
 95% max duration = 23
 99% max duration = 44

SSD

Write, no data processing
 File size = 8 G
 Time to close a file = 0.064 sec
 Time to write a file with size = 8 G = 37.902 sec
 Expected write count = 262144 Actual write count = 19425
 Min duration = 1 Max duration = 119
 Avg duration = 1
 Median duration = 1
 75% max duration = 1
 90% max duration = 1
 95% max duration = 3
 99% max duration = 26
In-memory processing
 Time to generate data for a file with size = 8 G = 19.367 sec
Write with data processing
 File size = 8 G
 Time to close a file = 0.0 sec
 Time to write a file with size = 8 G = 45.891 sec
 Expected write count = 262144 Actual write count = 10911
 Min duration = 1 Max duration = 776
 Avg duration = 1
 Median duration = 1
 75% max duration = 1
 90% max duration = 1
 95% max duration = 1
 99% max duration = 1

First of all, both systems demonstrate an important property of modern operating systems: after data has been handed to the OS for writing, it may be written to disk in the background, so your program is not blocked, at least up to a certain point. If you add up the time of the pure write test and the time of the in-memory processing test, the sum is greater than the time of the combined write-with-processing test; part of the processing happened while the OS was flushing previously submitted data. As I understand it, the OS buffers a certain amount of data internally and flushes it to disk in the background. Problems arise when your application produces data faster than the OS can write it to disk: the application then has to wait for the underlying OS buffer to be flushed before its data can be handed over, and in that scenario it becomes I/O bound.
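As a rough illustration of this write-behind behavior, here is a sketch of my own (not part of the original benchmark): it times buffered write() calls and then a FileChannel.force() call, which blocks until the data actually reaches the device. On most systems a large share of the real disk time shows up in force(), not in the write() calls.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

// Sketch: write() usually returns as soon as the data lands in the OS buffer
// cache; force( true ) blocks until the data (and metadata) hit the device.
public class WriteBehindDemo {

    /** Writes {@code blocks} KB to a temp file, then forces it to disk.
     *  Returns the number of bytes written. */
    static long writeAndForce( final int blocks ) throws IOException {
        final File tmp = File.createTempFile( "wb", ".tmp" );
        tmp.deleteOnExit();
        final byte[] buf = new byte[ 1024 ];
        final FileOutputStream fos = new FileOutputStream( tmp );
        try {
            long writeNanos = 0;
            for ( int i = 0; i < blocks; ++i ) {
                final long before = System.nanoTime();
                fos.write( buf ); //usually just a copy into the OS buffer
                writeNanos += System.nanoTime() - before;
            }
            final FileChannel ch = fos.getChannel();
            final long beforeForce = System.nanoTime();
            ch.force( true ); //blocks until the device has the data
            final long forceNanos = System.nanoTime() - beforeForce;
            System.out.println( "write() total = " + writeNanos / 1000000 + " ms, " +
                    "force() = " + forceNanos / 1000000 + " ms" );
            return tmp.length();
        } finally {
            fos.close();
        }
    }

    public static void main( final String[] args ) throws IOException {
        writeAndForce( 64 * 1024 ); //64 MB
    }
}
```

The exact split between the two numbers depends on the OS, the amount of free RAM and the disk, so treat this only as a way to observe the effect, not to measure it precisely.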

In order to analyze how write operations are handled, System.currentTimeMillis() calls were added before and after every write request, and common statistics were calculated on these durations (min, max, average, median, 75%, 90%, 95% and 99% percentiles). The statistics were calculated only on values greater than 0 ms, because I was mostly interested in the time required to hand the data over to the OS.

First of all, you can see that the average write duration is 4-5 ms on the HDD and 1 ms on the SSD, which shows the raw difference in disk performance. About 90% of HDD writes took just 1 ms, which is most likely the time needed to transfer the data into the OS buffer, but the remaining 10% of write requests took longer. On the SSD the picture is clearer: thanks to the calculations performed by the program and the higher SSD throughput, the OS was able to flush data to the SSD fast enough to keep up with the application (SSD, write with processing); only a very few write requests took longer than 1 ms. On the other hand, if no processing is done and we only write data to disk, more than 5% of write requests have to wait for the OS to flush previously submitted data (SSD, write without processing).

Better seek time on SSD

Besides the improvement in stream read/write speed, SSDs have an even more serious advantage over HDDs: their seek time is an order of magnitude smaller. This advantage comes from the absence of moving parts in an SSD, and it allows you to implement previously impractical algorithms that require random seeks in large files.

Why large files? A small file that fits into the OS file cache may end up entirely in memory after a while (once you have accessed it densely enough), and from that moment on you are actually accessing in-memory data.
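The cache effect can be observed with a small sketch of my own (not from the article): it performs the same set of random reads twice over a 16 MB file. The second pass is typically served from the OS file cache and tends to be noticeably faster, though the exact numbers depend on the OS and on what was cached before the run.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Random;

// Sketch: random reads over a file small enough to fit in the OS file cache.
// After the first pass has pulled the file into the cache, the "seeks" of the
// second pass are served from memory.
public class CacheDemo {

    /** Creates a temp file of {@code blocks} KB filled with zero bytes. */
    static File createFile( final int blocks ) throws IOException {
        final File f = File.createTempFile( "cache", ".tmp" );
        f.deleteOnExit();
        final FileOutputStream fos = new FileOutputStream( f );
        try {
            final byte[] block = new byte[ 1024 ];
            for ( int i = 0; i < blocks; ++i )
                fos.write( block );
        } finally {
            fos.close();
        }
        return f;
    }

    /** Runs two passes of {@code reads} random 4-byte reads over the same
     *  offsets; returns the duration of each pass in milliseconds. */
    static long[] measure( final File f, final int blocks, final int reads ) throws IOException {
        final long[] res = new long[ 2 ];
        final RandomAccessFile raf = new RandomAccessFile( f, "r" );
        try {
            for ( int pass = 0; pass < 2; ++pass ) {
                final Random rnd = new Random( 42 ); //same offsets in both passes
                final long start = System.currentTimeMillis();
                for ( int i = 0; i < reads; ++i ) {
                    raf.seek( ( long ) rnd.nextInt( blocks ) * 1024 );
                    raf.readInt();
                }
                res[ pass ] = System.currentTimeMillis() - start;
            }
        } finally {
            raf.close();
        }
        return res;
    }

    public static void main( final String[] args ) throws IOException {
        final File f = createFile( 16 * 1024 ); //16 MB - fits in the cache
        final long[] t = measure( f, 16 * 1024, 10000 );
        System.out.println( "cold pass = " + t[ 0 ] + " ms, warm pass = " + t[ 1 ] + " ms" );
    }
}
```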

Let us create a file which contains a linked list of entries. Each entry will contain:

int id;        //current id
long offset;   //offset to the next entry in the file
byte[ 1024 - 8 - 4 ] padding; //to make each entry large enough
    

The entries are constructed as follows:

  • Create a list of Integer values from 1 to N.
  • Shuffle it using Collections.shuffle.
  • Create the file: the first entry has id=0 and points to the entry with id=1. The position of the entry with id=1 is looked up in the shuffled list (and multiplied by 1024, the size of one entry). All subsequent entries are constructed similarly. The offset of the last record is negative, which marks the end of the list.

Let's say N=3 and the shuffled list is [3, 1, 2]. The file will contain the following entries:

id = 0 offset = 2048
id = 3 offset = -1
id = 1 offset = 3072
id = 2 offset = 1024
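These offsets can be reproduced with a small sketch of my own, which applies the same rule as the writeLinkedFile method listed below (position of the target id in the shuffled list, plus one for the added id=0 record, times the 1024-byte entry size):

```java
// Sketch: compute the offset stored in each file entry for a given shuffled
// list, following the layout rule of writeLinkedFile.
public class LinkedLayoutDemo {

    static long[] offsets( final int[] shuffled ) {
        final int segments = shuffled.length;
        //positions[ id ] = index of id in the shuffled list
        final int[] positions = new int[ segments + 1 ];
        for ( int i = 0; i < segments; ++i )
            positions[ shuffled[ i ] ] = i;
        final long[] res = new long[ segments + 1 ];
        //first entry: id=0, points to the entry with id=1 (+1 for the id=0 record)
        res[ 0 ] = 1024L * ( positions[ 1 ] + 1 );
        for ( int i = 0; i < segments; ++i ) {
            final int id = shuffled[ i ];
            //last id gets a negative offset, everyone else points to id+1
            res[ i + 1 ] = ( id == segments ) ? -1L : 1024L * ( positions[ id + 1 ] + 1 );
        }
        return res;
    }

    public static void main( final String[] args ) {
        //expected layout from the N=3 example: 2048, -1, 3072, 1024
        System.out.println( java.util.Arrays.toString( offsets( new int[]{ 3, 1, 2 } ) ) );
    }
}
```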

The following method will construct such file:

private static void writeLinkedFile( final File file, final int segments ) throws IOException {
    //create a shuffled list
    final List<Integer> lst = new ArrayList<Integer>( segments );
    for ( int i = 1; i <= segments; ++i )
        lst.add( i );
    Collections.shuffle( lst, new Random( 1000 ) );
    //now create a file. A file contains a number of segments. Each segment starts with int ID (a number from
    //previous list), followed by a long offset - pointer to the next record, followed by 1K - 12 byte padding.
    //the first record has ID=0 and points to a record with ID = 1
 
    //map from value [1 ; segments] to its shuffled position
    final int[] positions = new int[ segments + 1 ];
    for ( int i = 0; i < lst.size(); ++i )
        positions[ lst.get( i ) ] = i;
 
    final byte[] padding = new byte[ 1024 - 8 - 4 ];
 
    if ( file.exists() )
        file.delete();
    final DataOutputStream dos = new DataOutputStream( new BufferedOutputStream( new FileOutputStream( file ), 64 * 1024 ) );
    try
    {
        dos.writeInt(0);//ID
        dos.writeLong(1024L * (positions[1] + 1) ); //+1 due to an added record with id=0
        dos.write(padding);
        for ( int i = 0; i < segments; ++i )
        {
            final Integer id = lst.get(i);
            dos.writeInt(id); //ID (it is in the shuffled list)
            if ( id == segments )
                dos.writeLong( -1L );
            else
            {
                final Integer nextId = positions[ id + 1 ] + 1; //index: +1 due to an added record with id=0
                dos.writeLong(1024L * nextId);
            }
            dos.write(padding);
        }
    }
    finally {
        dos.close();
    }
}

The file needs to be constructed only once; after that we can test disk seek performance repeatedly. The following method checks the consistency of the constructed file while following the linked list:

private static void followLinkedFile( final File file ) throws IOException {
    int expectedId = 0;
    long offset = 0;
    final long start = System.currentTimeMillis();
    int cnt = 0;
    long prevTs = System.currentTimeMillis();
 
    final RandomAccessFile raf = new RandomAccessFile( file, "r" );
    try
    {
        while ( offset >= 0 )
        {
            raf.seek( offset );
            final int id = raf.readInt();
            if ( id != expectedId )
                throw new AssertionError( "File is broken! Expected " + expectedId + ", but got " + id );
            offset = raf.readLong();
            expectedId = id + 1;
            cnt++;
            if ( cnt == 10000 )
            {
                final long curTs = System.currentTimeMillis();
                final double oneSeek = 1.0 * (curTs - prevTs) / cnt;
                System.out.println( expectedId + " speed = " + 1000.0 / oneSeek + " seeks/sec" );
                cnt = 0;
                prevTs = curTs;
            }
        }
    }
    finally {
        raf.close();
    }
    final long time = System.currentTimeMillis() - start;
    System.out.println( "Time to read till id =" + expectedId + " = " + ( time / 1000.0) + " sec");
}

Initially, a file with 2M entries was created (file size = 2 GB). Such a file fits into the OS file cache, so the test quickly becomes CPU bound: the laptop with the HDD has roughly twice as fast a CPU as the laptop with the SSD, so I was not surprised to find that the HDD laptop managed to "seek" twice as fast as the SSD laptop 🙂

The next file contained 16M entries (file size = 16 GB). This time the HDD nearly ground to a halt: it managed only about 95 seeks per second, which means that a single seek took roughly 10 ms. In that time the CPU of the same laptop can execute on the order of 20 million instructions!

The SSD, on the other hand, was able to sustain about 2100 seeks per second, which is roughly 21 times faster than the HDD. So on one hand it is fast compared to an HDD; on the other hand, even the faster laptop's CPU could execute about a million instructions in the time of a single SSD seek...
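These instructions-per-seek estimates follow from simple arithmetic. The sketch below is my own and assumes a CPU executing roughly 2·10⁹ instructions per second (an assumption that roughly matches the faster laptop; it is not a measured figure):

```java
// Sketch: convert seeks/sec into time per seek and into the number of
// instructions a CPU could execute in that time.
public class SeekCostDemo {
    //assumption: ~2 GHz CPU retiring ~1 instruction per cycle
    static final double INSTR_PER_SEC = 2e9;

    static double msPerSeek( final double seeksPerSec ) {
        return 1000.0 / seeksPerSec;
    }

    static double mInstrPerSeek( final double seeksPerSec ) {
        return INSTR_PER_SEC / seeksPerSec / 1e6; //millions of instructions
    }

    public static void main( final String[] args ) {
        System.out.printf( "HDD: %.1f ms/seek, ~%.0fM instructions per seek%n",
                msPerSeek( 95 ), mInstrPerSeek( 95 ) );
        System.out.printf( "SSD: %.2f ms/seek, ~%.1fM instructions per seek%n",
                msPerSeek( 2100 ), mInstrPerSeek( 2100 ) );
    }
}
```

With the measured 95 and 2100 seeks/sec this gives about 21 million instructions per HDD seek and about one million per SSD seek, matching the figures above.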

Summary

  • Replacing HDD storage with an SSD can turn your application from I/O bound into CPU bound. This is especially true for pure stream read/write operations, because modern SSDs are capable of processing data at speeds over 300 MB/sec, and few applications can produce data at such a rate.
  • Modern operating systems try to write data in the background without blocking your application. Your write requests will block only if the OS cannot write data as fast as your application produces it.
  • SSD seek time is dramatically shorter than HDD seek time (~100 seeks/sec for an HDD vs. over 2000 seeks/sec for an SSD). On the other hand, even an SSD seek is slow compared to modern CPU speed: on my laptop the CPU can execute about a million instructions while the SSD performs a single seek. Always try to arrange your data as a stream.

If you want to know more about the impact of memory hierarchies on your application performance, take a look at the following books: "Memory Systems: Cache, DRAM, Disk" and "Computer Architecture, Fifth Edition: A Quantitative Approach (The Morgan Kaufmann Series in Computer Architecture and Design)".