Moore’s Outlaws

My blog post eulogizing SPEC SFS has elicited quite a bit of reaction, much of it from researchers and industry observers who have drawn similar conclusions. While these responses were very positive, my polemic garnered a different reaction from SPEC SFS stalwart NetApp, where, in his response defending SPEC SFS, my former colleague Mike Eisler concocted this Alice-in-Wonderland defense of the lack of a pricing disclosure in the benchmark:

Like many industries, few storage companies have fixed pricing. As much as heads of sales departments would prefer to charge the same highest price to every customer, it isn’t going to happen. Storage is a buyers’ market. And for storage devices that serve NFS and now CIFS, the easily accessible numbers are yet another tool for buyers. I just don’t understand why a storage vendor would advocate removing that tool.

Mike’s argument — and I’m still not sure that I’m parsing it correctly — appears to be that the infamously opaque pricing in the storage business somehow helps customers because they don’t have to pay a single “highest price”! That is, that the lack of transparent pricing somehow reflects the “buyers’ market” in storage. If that is indeed Mike’s argument, someone should let the buyers know how great they have it — those silly buyers don’t seem to realize that the endless haggling over software licensing and support contracts is for them!

And if that argument isn’t contorted enough for you, Mike takes a second tack:

In storage, the cost of the components to build the device falls continuously. Just as our customers have a buyers’ market, we storage vendors are buyers of components from our suppliers and also enjoy a buyers’ market. Re-submitting numbers after a hunk of sheet metal declines in price is silly.

His ludicrous “sheet metal” example aside (what enterprise storage product contains more than a few hundred bucks of sheet metal?), Mike’s argument appears to be that technology curves like Moore’s Law and Kryder’s Law lead to enterprise storage prices that are falling with such alarming speed that they’re wrong by the time they are so much as written down! If it needs to be said, this argument is absurd on many levels. First, the increases in transistor density and areal storage density tend to result in more computing bandwidth and more storage capacity per dollar, not lower absolute prices. (After all, your laptop is three orders of magnitude more powerful than a personal computer circa 1980 — but it’s certainly not a thousandth of the price.)

Second, has anyone ever accused the enterprise storage vendors of dropping their prices in pace with these laws — or even abiding by them in the first place? The last time I checked, the single-core Mobile Celeron that NetApp currently ships in their FAS2020 and FAS2050 — a CPU with a criminally small 256K of L2 cache — is best described as a Moore’s Outlaw: a CPU that, even when it first shipped six (six!) years ago, was off the curve. (A single-core CPU with 256K of L2 cache was abiding by Moore’s Law circa 1997.) It’s no wonder that NetApp sees plummeting component costs when they’re able to source their CPUs by dumpster diving…

Getting back to SPEC SFS: even if the storage vendors were consistently reflecting technology improvements, SPEC SFS is (as I discussed) a drive latency benchmark that doesn’t realize the economics of these curves anyway; drives are not rotating any faster year-over-year, having leveled out at 15K RPM some years ago due to some nasty physical constraints (like, the sound barrier). So there’s no real reason to believe that the 2,016 15K RPM drives used in NetApp’s opulent 1,032,461 op submission are any cheaper today than when this configuration was first submitted three years ago. Yes, those same drives would likely have more capacity (being 146GB or 300GB and not the 72GB in the submission), but recall that these drives are being short-stroked to begin with — so insofar as that additional capacity is used by the benchmark at all, it will only be used to assure even less head movement!
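To put that submission's scale in perspective, here's a back-of-the-envelope sketch; the op and drive counts are from the submission cited above, and the per-drive figure is my own arithmetic:

```python
# Figures from NetApp's SPEC SFS submission discussed above.
total_ops = 1_032_461   # ops/sec in the submission
drives = 2016           # 15K RPM drives in the configuration

ops_per_drive = total_ops / drives
print(f"{ops_per_drive:.0f} ops/sec per drive")  # ~512 ops/sec per drive
```

Roughly 512 ops/sec per spindle is well above what a 15K RPM drive can sustain under truly random I/O — consistent with a configuration engineered (via caching and short-stroking) to minimize head movement rather than to resemble anything a customer would deploy.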

Finally, even if Mike were correct that technology advances result in ever falling absolute prices, it still should not prohibit price disclosures. We all understand that prices reflect a moment in time, and if natural inflation does not dissuade us from price disclosures, nor should any technology-induced deflation.

So to be clear: SPEC SFS needs pricing disclosures. TPC has them, SPC has them — and SFS needs them if the benchmark has any aspiration to enduring relevance. While SPEC SFS’s flaws run deeper than the missing price disclosure, the disclosure would at least keep the more egregious misbehaviors in check — and it would also (I believe) show storage buyers the degree to which the systems measured by SPEC SFS do not in fact correspond to the systems that they purchase and deploy.

One final note: in his blog entry, Mike claims that “SPEC SFS performance is the minimum bar for entry into the NAS business.” If he genuinely believes this, Mike may want to write a letter to the editors of InfoWorld: in their recent review of our Sun Storage 7210, they had the audacity to ignore the lack of SPEC SFS results for the appliance, instead running their own benchmarks. Their rating for the product’s performance? 10 out of 10. What heresy!

Posted on February 19, 2009 at 9:28 pm by bmc · Permalink
In: Fishworks

11 Responses


  1. Written by Andrew
    on February 20, 2009 at 12:28 pm

    Interesting stuff. I’m particularly intrigued by your speed of sound comment. I’m guessing that the fastest part of a hard drive is going to be the outer edge of the platter. Even assuming an outsized platter with a 4 inch diameter, I calculate the speed of the outside of that platter turning at 15K RPM to be 178.6 mph, which is a long way from the 760 mph speed of sound at sea level.
    Either my math is wrong or there’s something else going on inside the harddisk that I’m not understanding (is there something in the motor that is spinning at a multiple of the platter speed?).
    Here’s my math for a 4" diameter platter:-
    Disk circumference = 4 * PI = 4 * 3.14159 = 12.57 inches
    Speed of outer edge = 12.57 * 15000 * 60 = 11,313,000 inches/hour
    There are 63360 inches in a mile so 11,313,000 / 63360 = 178.6 miles per hour
    The platter of a 15k 3.5" disk might be as much as half that diameter, which will halve the speed of the outer edge. 90 mph is a long way from the speed of sound.
    I’ve always assumed that disk rotational speed was governed by noise, heat and the availability of low cost reliable electrical motors rather than a fundamental limit like the speed of sound so I’m intrigued to learn more about this.
    Note, I’m not trying to pick holes in your analysis of SPEC SFS. I love to read about the debunking of myths that are accepted as conventional wisdom. I’m just trying to get a better insight in the design constraints behind harddisks.
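    [Andrew's arithmetic can be checked with a short script — a sketch of the same calculation, using π rather than the rounded 12.57-inch circumference, which is why it lands on 178.5 rather than 178.6:]

```python
import math

def rim_speed_mph(diameter_in, rpm):
    """Linear speed of a platter's outer edge, in miles per hour."""
    circumference_in = math.pi * diameter_in        # inches traveled per revolution
    inches_per_hour = circumference_in * rpm * 60   # rev/min * 60 = rev/hour
    return inches_per_hour / 63360                  # 63,360 inches per mile

print(round(rim_speed_mph(4.0, 15000), 1))  # ~178.5 mph for a 4" platter
print(round(rim_speed_mph(2.0, 15000), 1))  # ~89.3 mph for a 2" platter
```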

  2. Written by Bryan Cantrill
    on February 20, 2009 at 12:51 pm

    Yes, interesting point. That the speed of sound becomes an issue came from a colleague of mine who is a mechanical engineer with a background in disk drives; I have asked him to comment here, as I am embarrassed to confess that I neither asked him for the math nor did the math myself. I also may have misunderstood him: he may have meant simply that the speed of sound serves as an absolute limit on doubling that we don’t have with transistor density — that we’ll never see (sticking with your math) 60K RPM drives, for example. That said, I do believe that the limits beyond 15K RPM are more inarguable than merely noise and power — that heat and aerodynamics pose more insurmountable obstacles. But then, I’m a software guy. ;)

  3. Written by Andrew
    on February 20, 2009 at 1:59 pm

    I’m a software guy too :-) and my first thought was that I’d be able to impress my geek friends with a cool piece of hardware trivia. However I do also have a slightly less shallow reason for my interest. Over my career I’ve found time and time again that my decisions as a software engineer are vastly improved by knowing more about the hardware running my code. It’s a computer system at the end of the day, a fact that sadly too few engineers truly embrace. I’m looking forward to hearing from your colleague!

  4. Written by Nico
    on February 20, 2009 at 4:43 pm

    [NOTE: I'm not the guy that Bryan spoke to about this.] Spinning a disk causes stresses that become very difficult to deal with (i.e., expensive) at high RPMs. We’ve all seen videos online of CDs attached to Roto-Rooters — they shatter long before any speed of sound issues arise. The sound barrier needn’t be an issue if you can put the disk and heads in a vacuum, but then, the heads need to "fly" over the disk; you could put the disks and heads in water to increase the speed of sound, but the drag factor will be so much higher that it couldn’t work. Wikipedia says "[d]rives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (due to air drag) and therefore generally have lower capacity than the highest capacity desktop drives." If we believe Wikipedia then the 15k RPM "barrier" is likely due to drag (but it may also have to do with structural integrity of the disks).

  5. Written by Bryan Cantrill
    on February 20, 2009 at 10:47 pm

    As it turns out, my colleague was speaking about the sound barrier metaphorically, not physically — my understanding was completely incorrect. He writes:
    "When I was talking about the sound barrier, I was speaking more to the problems with the slider design as needing a fundamental breakthrough like what was needed with breaking the sound barrier in the aerospace industry to get HDDs above 20,000 RPM. The drive industry has been trying to solve the problem by changing atmosphere in the drive, pumping the drive with helium to try and stabilize the airfoil."
    Apologies for the misunderstanding — though given this, I just hope SPEC SFS gets a pricing disclosure before we have submissions with helium-filled drives… ;)

  6. Written by Andrew
    on February 24, 2009 at 11:33 am

    Thanks for the follow-up Bryan. Fascinating stuff!

  7. Written by Warren Strange
    on March 2, 2009 at 1:01 pm

    Helium Filled Drives: Perfect for Cloud Computing eh?

  8. Written by Ben Jackson
    on March 10, 2009 at 4:28 pm

    Or inhaling the contents to produce an amusing sound once they become EOL….;)
    Great blog – especially liked the Moore’s Outlaw comment, made my morning!

  9. Written by sprewell
    on March 12, 2009 at 5:49 am

    Hey Bryan, I’ve been following and enjoying your blog for years, you really should write more. I recently wrote up some ideas of mine for a new kind of software license ( ), I thought you might be interested. I had a discussion with Eric Raymond about these ideas ( ) that might elicit interest also.

  10. Written by Dillon Beresford
    on March 16, 2009 at 8:58 pm

    Bryan, I actually just finished watching a video on Google Tech Talk from 2007. The discussion on DTrace was great! I really enjoyed it and I dig your humor. I would like to see more discussions in the future. Thank you so much for DTrace. IMHO you are an inspiration to many developers.
    I agree with you about the waterfall model. ;-)

  11. Written by Robert Milkowski
    on April 17, 2009 at 10:35 am

    NetApp practice regarding prices –
    I wonder how many other clients were over-paying for NetApps.
    On the other hand that’s what you get when you deal with a monopoly – hopefully Sun will break it with Open Storage.
