ZFS L2ARC

An exciting new ZFS feature has now become publicly known – the second level ARC, or L2ARC. I’ve been busy with its development for over a year, but this is my first chance to post about it. This post will show a quick example and answer some basic questions.

Background in a nutshell

The “ARC” is the ZFS main memory cache (in DRAM), which can be accessed with sub-microsecond latency. An ARC read miss would normally read from disk, at millisecond latency (especially for random reads). The L2ARC sits in between, extending the main memory cache using fast storage devices – such as flash memory based SSDs (solid state disks).


[Diagrams: old model vs. new model with ZFS]

Some example sizes to put this into perspective, from a lab machine named “walu”: for this server, the L2ARC allows around 650 Gbytes to be stored in the total ZFS cache (ARC + L2ARC), rather than just the DRAM-based ARC of about 120 Gbytes.
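
The original hardware listing from walu isn’t reproduced here, but as a rough sketch (the pool name pool_0 matches the one used later; output omitted), sizes like these can be checked with standard Solaris tools:

    # physical memory size (the DRAM behind the ARC)
    prtconf | grep 'Memory size'

    # current ARC size, in bytes
    kstat -p zfs:0:arcstats:size

    # pool capacity and configuration, including cache devices once added
    zpool list
    zpool status pool_0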

A previous ZFS feature (the ZIL) allowed you to add SSD disks as log devices to improve write performance. This means ZFS provides two dimensions for adding flash memory to the file system stack: the L2ARC for random reads, and the ZIL for writes.
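
As a rough sketch (the device names here are hypothetical), log devices are added with the same zpool subcommand family that is used for cache devices later in this post:

    # add a hypothetical SSD as a separate ZFS intent log (slog) device
    zpool add pool_0 log c3t0d0

    # or mirror the log devices for redundancy
    zpool add pool_0 log mirror c3t0d0 c3t1d0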

Adam has been the mastermind behind our flash memory efforts, and has written an excellent article in Communications of the ACM about flash memory based storage in ZFS; for more background, check it out.

L2ARC Example

To illustrate the L2ARC with an example, I’ll use walu – a medium sized server in our test lab, which was briefly described above. Its ZFS pool of 44 x 7200 RPM disks is configured as a 2-way mirror, to provide both good reliability and performance. It also has 6 SSDs, which I’ll add to the ZFS pool as L2ARC devices (or “cache devices”).

I should note – this is an example of L2ARC operation, not a demonstration of the maximum performance that we can achieve (the SSDs I’m using here aren’t the fastest I’ve ever used, nor the largest.)

20 clients access walu over NFSv3, and execute a random read workload with an 8 Kbyte record size across 500 Gbytes of files (which is also its working set).

1) disks only

Since the 500 Gbytes of working set is larger than walu’s 128 Gbytes of DRAM, the disks must service many requests. One way to grasp how this workload is performing is to examine the IOPS that the ZFS pool delivers: about 1.89K ops/sec, which works out to roughly 42 ops/sec per disk in this pool. To examine how this is delivered by the disks, we can use either zpool iostat or the original iostat.
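
The original screen captures aren’t reproduced here; as a sketch, the commands would be along these lines (the 10 second interval is just an example):

    # pool-level read/write ops and bandwidth, sampled every 10 seconds
    zpool iostat pool_0 10

    # per-disk detail, including service times, skipping idle devices
    iostat -xnz 10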

iostat is interesting as it lists the service times: wsvc_t + asvc_t. These I/Os are taking on average between 9 and 10 milliseconds to complete, which the client application will usually suffer as latency. This time will be due to the random read nature of this workload – each I/O must wait as the disk heads seek and the disk platter rotates.

Another way to understand this performance is to examine the total NFSv3 ops delivered by this system (these days I use a GUI to monitor NFSv3 ops, but for this blog post I’ll hammer nfsstat into printing something concise):
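
The exact one-liner isn’t reproduced here; one minimal sketch that prints something concise is to zero the kernel counters, wait an interval, and then read the server-side NFS totals (zeroing requires root):

    # reset NFS statistics, wait 10 seconds, then print server-side NFS counts
    nfsstat -sz > /dev/null
    sleep 10
    nfsstat -sn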

That’s about 2.27K ops/sec for NFSv3; I’d expect 1.89K of that to be what our pool was delivering, and the rest are cache hits out of DRAM, which is warm at this point.

2) L2ARC devices

Now the 6 SSDs are added as L2ARC cache devices:
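
The device names below are hypothetical, but the command takes this form:

    # add six SSDs to the pool as L2ARC cache devices
    zpool add pool_0 cache c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

    # the SSDs now appear under a "cache" section of the pool configuration
    zpool status pool_0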

And we wait until the L2ARC is warm.

Time passes …

Several hours later the cache devices have warmed up enough to satisfy most I/Os which miss main memory. The combined ‘capacity/used’ column for the cache devices shows that our 500 Gbytes of working set now exists on those 6 SSDs:
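
Those per-device statistics, including the capacity used on each cache device, come from the verbose form of zpool iostat; a sketch:

    # per-vdev capacity and ops, refreshed every 10 seconds; cache devices
    # are listed in their own section after the pool vdevs
    zpool iostat -v pool_0 10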

The pool_0 disks are still serving some requests (in this output 30 ops/sec) but the bulk of the reads are being serviced by the L2ARC cache devices – each providing around 2.6K ops/sec. The total delivered by this ZFS pool is 15.8K ops/sec (pool disks + L2ARC devices), about 8.4x faster than with disks alone.

This is confirmed by the delivered NFSv3 ops: walu is now delivering 18.7K ops/sec, which is 8.3x faster than without the L2ARC.

However, the real win for the client applications is read latency. The disk-only iostat output showed an average between 9 and 10 milliseconds; the L2ARC cache devices are delivering an average service time between 0.4 and 0.6 ms (wsvc_t + asvc_t columns), which is about 20x faster than what the disks were delivering.

What this means …

An 8.3x improvement for 8 Kbyte random IOPS across a 500 Gbyte working set is impressive, as is improving storage I/O latency by 20x.

But this isn’t really about the numbers, which will become dated (these SSDs were manufactured in July 2008, by a supplier who is providing us with bigger and faster SSDs every month).

What’s important is that ZFS can make intelligent use of fast storage technology, in different roles to maximize their benefit. When you hear of new SSDs with incredible ops/sec performance, picture them as your L2ARC; or if they have great write throughput, picture them as your ZIL log devices.

The example above was to show that the L2ARC can deliver, over NFS, whatever these SSDs can do. And these SSDs are being used as a second level cache, in between main memory and disk, to achieve the best price/performance.

Questions

I recently spoke to a customer about the L2ARC and they asked a few questions which may be useful to repeat here:

What is L2ARC?

Isn’t flash memory unreliable? What have you done about that?

Aren’t SSDs really expensive?

What about writes – isn’t flash memory slow to write to?

What’s bad about the L2ARC?

Internals

If anyone is interested, I wrote a summary of L2ARC internals as a block comment in usr/src/uts/common/fs/zfs/arc.c, alongside the actual implementation code. That block comment (see the source for the latest version) is an excellent reference for how it really works.

Jonathan recently linked to this block comment in a blog entry about flash memory, to show that ZFS can incorporate flash into the storage hierarchy, and here is the actual implementation.

Posted on July 22, 2008 at 9:48 pm by Brendan Gregg
In: Performance

27 Responses




  1. Written by Kevin Hutchinson
    on July 22, 2008 at 10:23 pm
    Permalink

    Are you testing any of this with your NBC Olympics web site in August? That could be a great way to prove the benefits. Just an idea.


  2. Written by jason arneil
    on July 23, 2008 at 12:42 am
    Permalink

    Hello Brendan,
    Really fantastic in-depth article there – really enjoyed it!
    jason.


  3. Written by c0t0d0s0.org
    on July 23, 2008 at 3:42 am
    Permalink

    [Trackback] Brendan Gregg wrote a good piece about the performance of L2ARC in ZFS L2ARC:The pool_0 disks are still serving some requests (in this output 30 ops/sec) but the bulk of the reads are being serviced by the L2ARC cache devices – each providing around 2….


  4. Written by Danilo Poccia
    on July 23, 2008 at 4:41 am
    Permalink

    Hi Brendan, now I really understand how SSDs can make ZFS faster for writing AND reading. Thanks!


  5. Written by mike svoboda
    on July 23, 2008 at 5:26 am
    Permalink

    Brendan:
    Awesome work! This was really enjoyable to read. I really appreciate how clear and concise the explanation of the new L2ARC implementation was. Now the rest of us can’t wait to get our hands on some new SSD systems, which should start hitting the enterprise in the coming months!


  6. Written by Jeffrey W. Baker
    on July 23, 2008 at 7:52 pm
    Permalink

    Very interesting stuff. I have been doing a lot of experiments with SSDs and other flash devices lately. Could you possibly repeat your experiment using a much larger dataset? It’s not very informative that the L2ARC works well when the working set fits entirely in the cache, and is entirely read-only.
    For instance, a 10TB working set with 100GB of flash on the front end would be quite informative.


  7. Written by Ken K
    on July 23, 2008 at 9:49 pm
    Permalink

    This is SO COOL, you just made my year. I understand the limitations of SSD, and this is about the best that anyone can ask for. Full use of the SSD, data can be written off to disk, solve the write and random write latency problem…
    I also would like to see what the results are for a larger working set size. Also, what would be the effect in a DSS system with a mixed workload… perhaps sequentials get left on spinning disk and randoms in the cache? That would be awesome.


  8. Written by Brendan Gregg
    on July 24, 2008 at 5:34 am
    Permalink

    Thanks for the positive feedback; here are some individual replies:
    Kevin – I don’t know of a plan, but you are right, getting some customer case studies published would really help promote the benefits (I’m sure we will in the coming months.)
    Jeffrey – ideally the system will be configured to have enough SSDs to cover the working set, which is why I demo’d that case – it’s what we are aiming for. With today’s SSDs, if your working set is less than 550 Gbytes, then a server such as what I demo’d would be ideal; and this capacity is only getting larger.
    Are you sure this is a 10 Tbyte working set – ie, hot data – and not the total database size? 10 Tbytes of random read working set is enormous; and is this a real production server (google cache?). Just curious (yes, I’ve heard of working set possibly getting this large, but it hasn’t been common.)
    If my walu server tackled a 10 Tbyte working set, then 550 Gbytes would be cached leaving 9.46 Tbytes uncached. If the workload was uniformly distributed across the working set – which is the worst case – then we’ve just made about 5% of our I/O run much faster, which would be around the expected performance improvement (which, for the cost of SSDs, may be a good deal.) If the workload wasn’t so uniform, then the improvement value can get higher.
    So yes, it’s very important to consider working set size. While your database may be dozens of Tbytes, your working set may only be 10s or 100s of Gbytes – and the L2ARC with current SSDs can work very well. But if your working set is much larger somehow, you should try some calculations to estimate what that means.
    If I can get the time for a larger than L2ARC run, I’ll post how it looks. I won’t be posting "best possible" results – there are groups at Sun to handle this (and official benchmarks), who will make sure that all tunables are set correctly for maximum performance.
    Ken – sequential data (which ZFS will prefetch) is already skipped by the L2ARC and left on disk (it’s the l2arc_noprefetch tunable), leaving random data for the L2ARC. So this should already work.


  9. Written by BeleniX
    on July 24, 2008 at 11:06 am
    Permalink

    [Trackback] Your story was featured in BeleniX! Here is the link to vote it up and promote it: http://belenix.org/node/178


  10. Written by BitBucket
    on August 10, 2008 at 10:48 am
    Permalink

    Brendan, thanks for the info. Cache is and will remain a useful tool in the need for speed. I’ve been looking at DRAM and NAND flash SSDs for a while. The capacities and speeds are truly jaw dropping. Using your model I’d project that the bottom layer will disappear in the near future. NAND flash will be the primary storage and DRAM SSDs will serve as L2ARC. NAND flash is pushing 100K IOPS; DRAM is over 10X that. Expensive? Yes. But it wasn’t that long ago that disk storage was over $1000/Gb. The move to very high speed mass storage is here now. I’d love to see your model run on a 10Tb NAND flash SSD array with 1Tb of DRAM for cache!!! I’d expect around 600K IOPS with today’s parts. That brings up a new set of problems dealing with systems software designed around I/O latency, file system layouts, etc. associated with rotating media, and even process scheduling as I/O times approach context switch times.


  11. Written by Walter Moriconi
    on August 14, 2008 at 2:09 am
    Permalink

    Hello Brendan,
    Really fantastic, now I really understand it


  12. Written by Walter Moriconi's blog
    on August 14, 2008 at 2:14 am
    Permalink

    [Trackback] fantastic post on  ZFS Second Level ARC – L2ARC – Testing Show 8x More Throughput ( Brendan Gregg ).
    must read!
    The "ARC" is the ZFS main memory cache (in DRAM),
    which can be accessed with sub microsecond latenc…


  13. Written by pgp
    on August 15, 2008 at 8:30 am
    Permalink

    Which GUI do you use for NFS?


  14. Written by I A
    on September 20, 2008 at 12:17 pm
    Permalink

    Brendan: Any new data or thoughts on cases where datasets don’t fit into the L2ARC devices? Also, is compression supported while destaging to the devices? Any thoughts on whether that is a good idea?


  15. Written by mcdtracy
    on October 6, 2008 at 8:40 pm
    Permalink

    Brendan,
    Jignesh K. Shah thought of another way to use storage tiers:
    http://blogs.sun.com/jkshah/entry/zfs_with_cloud_storage_and
    He puts the ZFS log and cache on local drives and the next tier bi-coastal using iSCSI…
    This ZFS feature is a great tool for distributed computing too… I hope.


  16. Written by Cinetica Blog
    on October 17, 2008 at 1:00 pm
    Permalink

    [Trackback] EMC and Compellent have both announced support for this type of drive. EMC calls them EFDs (Enterprise Flash Drives… because that way they can charge more for them, but they’re still the same drives, ;-) ). For EMC, the SSD implementation is like that of any…


  17. Written by Selcuk
    on November 14, 2008 at 1:50 pm
    Permalink

    Brendan:
    thanks for all the info. I have two questions about l2arc:
    1)When choosing buffers to write to l2, do you prefer MFU buffers over MRU buffers?
    2)After a system restart, does l2arc have a mechanism to reuse the buffers in l2 or are they discarded?
    thanks.


  18. Written by Martin Douglas
    on January 2, 2009 at 8:00 am
    Permalink

    Would there be any combination (SSD/ZFS/Zones/Xvm/) here that would increase performance for virtual machines?


  19. Written by Alan Teng
    on February 26, 2009 at 10:11 pm
    Permalink

    Excellent post. I particularly liked the Questions segment too, as the answers are spot on and easy to understand.


  20. Written by Frans ter Borg
    on April 29, 2009 at 3:39 am
    Permalink

    hi Brendan, or others,
    When sizing an active/active or active/passive 7410 cluster, are there any thoughts on the number of readzillas?
    Our national Sun storage tech mentions that we should size at least 2 readzillas per pool, as he claims Solaris needs to be able to balance reads and writes over two devices. I have not been able to find any docs on this matter yet, and interestingly enough there is a cluster bundle that has only 1 readzilla per head, which seems to indicate that he may be mistaken.
    In case of failover in an active/active cluster, does the failover head always need an equal number of "idle" readzillas to match the "active" readzillas in the primary head? Or can the failover unit function on the pool of the primary unit without SSD L2ARC?
    Looking forward to your response.
    Frans


  21. Written by FreeBSD/ZFS
    on June 9, 2009 at 1:48 pm
    Permalink

    Tested this "cache" on FreeBSD-8.0-Current:
    created a couple-GB RAM disk and added it to a pool containing data.
    It works weirdly: there doesn’t seem to be any visible positive effect with either large files (1 GB) or small files (a 200 MB set in total).
    Nuked the data from the pool.
    Unpacked a dataset of small files (200 MB, average size of 1-2 KB): yes, there is an effect (several-fold), but the drives get thrashed anyway, even though 2 GB is definitely bigger than the 200 MB dataset. But without writing the data to get it cached, nothing good happens at all.
    It’s not very tunable either.


  22. Written by Dman
    on July 3, 2009 at 9:30 am
    Permalink

    Thanks for this fantastic article Brendan. Anyone know why on Solaris 10 05/09 I’m getting the error "Operation not supported on this type of pool"? I was aware of a bug in an earlier version of Solaris, but understood this to be fixed? zpool upgrade shows I’m running ZFS version 10, which should support L2ARC.


  23. Written by Anonymous
    on July 10, 2009 at 3:41 pm
    Permalink

    Re: L2ARC not supported in Solaris 10 05/09. http://opensolaris.org/os/community/zfs/version/10/ mentions it doesn’t work for S10U6, so apparently it’s still not fixed in S10U7 even though both releases support ZFS version 10. This person’s asking for it too, so maybe someone will answer: http://opensolaris.org/jive/thread.jspa?threadID=106865&tstart=0


  24. Written by Derek
    on July 22, 2009 at 3:11 pm
    Permalink

    @FreeBSD/ZFS: Why would you expect better performance from a RAM disk, when you would normally use that RAM in the ARC layer? Essentially you are caching your cache. I can’t make sense of your approach.


  25. Written by רפידות גובה
    on August 21, 2009 at 9:28 pm
    Permalink

    I was aware of a bug in an earlier version of Solaris, but understood this to be fixed? zpool upgrade shows I’m running zfs 10, which should support L2ARC.


  26. Written by Malware Removal Bot
    on August 24, 2009 at 3:14 am
    Permalink

    Does the failover head always need an equal amount of "idle" readzillas to the "active" readzilla’s in the the primary head ?


  27. Written by Anonymous
    on September 23, 2009 at 4:31 pm
    Permalink

    L2ARC is supported in OpenSolaris 2009.06 and will be supported in Solaris 10 Update 8 (supposedly shipping in Oct or Nov). L2ARC is not natively supported under Solaris 10 Update 6 or Update 7. I haven’t heard whether a future ZFS patch might enable it there.

