OpenSolaris and the power to fork

Back when Solaris was initially open sourced, there was a conscious effort to be mindful of the experiences of other projects. In particular — even though it was somewhat of a paradox — it was understood how important it was for the community to have the power to fork the operating system. As I wrote in January, 2005:

If there’s one thing we’ve learned from watching Linux, it’s to not become forkophobic. Paradoxically, in an environment where forks are actively encouraged (e.g. Linux) forking seems to be less of a problem than in environments where forking is viewed as apostasy (e.g. BSD).

Unfortunately — and now in hindsight — we know that OpenSolaris didn’t go far enough: even though the right to fork was understood, there was not enough attention paid to the power to fork. As a result, the operating system never quite got to being 100% open: there remained some annoying (but essential) little bits that could not be opened for one historical (i.e., legal) reason or another. When coupled with the fact that Sun historically had a monopoly or near-monopoly on Solaris engineering talent, the community was entirely deprived of the oxygen that it would have needed to exercise its right to fork.

But change is afoot: over the last six months, the monopoly over Solaris engineering talent has been broken. And now today, we as a community have turned an important corner with the announcement of the Illumos project. Thanks to the hard work of Garrett D’Amore and his band of co-conspirators, we have the beginning of open sourced variants of those final bits that will allow for not just the right but the power to fork. Not that anyone wants to set out to fork the system, of course, but that power is absolutely essential for the vitality of any open source community — and so will be for ours. Kudos to Garrett and crew; on behalf of all of us in the community, thank you!

Posted on August 3, 2010 at 10:34 am by bmc · Permalink · 8 Comments
In: Uncategorized

Hello Joyent!

As I mentioned in my farewell to Sun, I am excited by the future; as you may have seen, that future is joining Joyent as their VP of Engineering.

So, why Joyent? I have known Joyeurs like Jason, Dave, Mark and Ben since back when the “cloud” was still just something that you drew up on a whiteboard as a placeholder for the in-between crap that someone else was going to build and operate. But what Joyent was doing was very much what we now call cloud computing — it was just that in describing Joyent in those pre-cloud days, I found it difficult to convey exactly why what they were doing was exciting (even though to me it clearly was). I found that my conversations with others about Joyent always ended up in the ditch of “virtual hosting”, a label that grossly diminished the level of innovation that I saw in Joyent. Fortunately for my ability to explain the company, “cloud” became infused with much deeper meaning — one that matched Joyent’s vision for itself.

So Joyent was cloud before there was cloud, but so what? When I started to consider what was next for me, one of the problems that I kept coming back to was DTrace for the cloud. What does dynamic instrumentation look like in the cloud? How do you make data aggregation and retention scale across many nodes? How do you support the ad hoc capabilities that make DTrace so powerful? And how do you visualize the data in a way that allows for those ad hoc queries to be visually phrased? To me, these are very interesting questions — but looking around the industry, it didn’t seem that too many of the cloud providers were really interested in tackling these problems. However, in a conversation at my younger son’s third birthday party with Joyeur (and friend) Rod Boothby, it became clear that Joyent very much shared my enthusiasm for this problem — and more importantly, that they had made the right architectural decisions to allow for solving it.

My conversation with Rod kicked off more conversations, and I quickly learned that this was not the Joyent that I had known — that the company was going through a very important inflection point whereby they sought a leadership position in innovating in the cloud. To match this lofty rhetoric, the company has a very important proof point: the hiring of Ryan Dahl, inventor and author of node.js.

Before getting into the details of node.js, one should know that I am a JavaScript lover. (If you didn’t already know this about me, you might be somewhat surprised by this — and indeed, there was a time when such a confession would have to be whispered, if it could be said at all — but times have changed, and I’m loud and proud!) My affection for the language came about over a number of years, and crescendoed at Fishworks when I realized that I needed to rewrite our CLI in JavaScript. And while I’m not sure if I’m the only person or even the first to write JavaScript that was designed to be executed over a 9600 baud TTY, it sure as hell felt like I was a pioneer in some perverse way…

Given my history, I clearly have a natural predisposition towards server-side JavaScript — but node.js is much more than that: its event-driven model coupled with the implicit single-threadedness of JavaScript constrains the programmer into a model that allows for highly scalable control logic written entirely as sequential code. (For more on this, see Ryan’s recent Google tech talk — though I have no idea what was meant when Ryan was introduced as “controversial”.) This idea — that one can (and should!) build a concurrent system out of sequential components — is one that Jeff and I discussed in our ACM Queue article on real-world concurrency:

To make this concrete, in a typical MVC (model-view-controller) application, the view (typically implemented in environments such as JavaScript, PHP, or Flash) and the controller (typically implemented in environments such as J2EE or Ruby on Rails) can consist purely of sequential logic and still achieve high levels of concurrency, provided that the model (typically implemented in terms of a database) allows for parallelism. Given that most don’t write their own databases (and virtually no one writes their own operating systems), it is possible to build (and indeed, many have built) highly concurrent, highly scalable MVC systems without explicitly creating a single thread or acquiring a single lock; it is concurrency by architecture instead of by implementation.

But Ryan says all that much more concisely at 21:40 in the talk: “there’s this great thing in Unix called ‘processes.’” Amen! So node.js to me represents a confluence of many important ideas — and it’s clean, well-implemented, and just plain fun to work with.
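To make “concurrency by architecture” concrete in node.js terms, here is a minimal sketch of my own (not code from node or from Ryan’s talk; the file path and port are arbitrary): the request handler reads as purely sequential logic, yet a single process can service many connections at once because every I/O operation is non-blocking and event-driven.

    var http = require('http');
    var fs = require('fs');

    // The handler below is written as straight-line, sequential logic...
    http.createServer(function (req, res) {
      // ...but this file read is non-blocking: while the I/O is outstanding,
      // the lone JavaScript thread is free to accept and dispatch other
      // requests.  Concurrency comes from the architecture, not from any
      // threads or locks in the application code.
      fs.readFile('/etc/motd', 'utf8', function (err, data) {
        if (err) {
          res.writeHead(500);
          res.end('error: ' + err.message + '\n');
          return;
        }
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(data);
      });
    }).listen(8124);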

While I am excited about node.js, it’s more than just a great idea that’s well executed — it also represents Joyent’s vision for itself as an innovator up and down the stack. One can view node.js as being to Joyent what Java was to Sun: transforming the company from one confined to a certain layer into a true systems company that innovates up and down the stack. Heady enough, but if anything this analogy understates the case: Joyent’s development of node.js is not merely an outgrowth of an innovative culture, but also a reflection of a singular focus to deliver on the economies of scale that are the great promise of cloud computing.

Add it all up — the history in the cloud space, the disposition to tackle the tough cloud problems that I want to solve, like instrumentation and observability, and the exciting development of node.js — and you have a company in Joyent that I believe could be the next great systems company, and I’m deeply honored (and incredibly excited) to be a part of it!

Posted on July 30, 2010 at 1:46 pm by bmc · Permalink · 9 Comments
In: Uncategorized

Good-bye, Sun

In February 1996, I came out to Sun Microsystems to interview for a job knowing only two things: that I wanted to do operating systems kernel development — and that I didn’t particularly want to work for Sun. I was right on the first count, but knew I was wrong on the second just moments into my first conversation with Jeff. He was emphatic that I should join him in forging the future, sharing both my enthusiasm for what was possible and my disdain for the broken, busted and boogered-up. Fourteen years later, I don’t for a moment regret my decision to join Jeff and Sun: we fostered an environment where the OS was viewed not as a regrettable drag on progress, but rather as a nexus of innovation — incubating technologies that today make a real difference in people’s lives.

In 2006, itching to try something new, Mike and I talked the company into taking the risk of allowing several of us to start Fishworks. That Sun supported our endeavor so enthusiastically was the company at its finest: empowering engineers to tackle hard problems, and inspiring them to bring innovative solutions to market. And with the budding success of the 7000 Series, I would like to believe that we made good on the company’s faith in us — and more generally on its belief in innovation as differentiator.

Now the time has come for me to venture again into something new — but this time it is to be beyond the company’s walls. This is obviously with mixed emotion; while I am excited about the future, it is very difficult for me personally to leave a company in which I have had such close relationships with so many. One of Sun’s greatest strengths was that we technologists were never discouraged from interacting directly and candidly with our customers and users, and many of our most important innovations came from these relationships. This symbiosis was critically important at several junctures of my own career, and I owe many of you a profound debt of gratitude — both for your counsel over the years, and for your willingness to bet your own business and livelihood on the technologies that I helped develop. You, like us, are innovators who love nothing more than great technology, and your steadfast faith in us means more to me than I can express; thank you.

As for my virtual address, it too is changing. This post will be my last at blogs.sun.com; in the future, you can find my blog at its new (permanent) home: http://dtrace.org/blogs/bmc (where comments on this entry will be open). As for e-mail, you can find me at the first letter of my first name concatenated with my last name at acm.org.

Thank you again for everything; take care — and stay in touch!

Posted on July 25, 2010 at 5:17 pm by bmc · Permalink · 46 Comments
In: Fishworks, Solaris

Turning the corner

It’s a little hard to believe that it’s been only fifteen months since we shipped our first product. It’s been a hell of a ride; there is nothing as exhilarating nor as exhausting as having a newly developed product that is both intricate and wildly popular. Especially in the domain of enterprise storage — where perfection is not just the standard but (entirely reasonably) the expectation — this makes for some seriously spiked punch.

For my own part, I have had my head down for the last six months as the Technical Lead for our latest software release, 2010.Q1, which is publicly available as of today. In my experience, I have found that in software (if not in life), one may only ever pick two of quality, features and schedule — and for 2010.Q1, we very much picked quality and features. (As for schedule, let it be only said that this release was once known as “2009.Q4”…)

2010.Q1 Quality

You don’t often see enterprise storage vendors touting quality improvements for a very simple reason: if the product was perfect when you sold it to me, why are you talking about how much you’ve improved it? So I’m going to break a little bit with established tradition and acknowledge that the product has not been perfect, though not without good reason. With our initial development of the product, we were pushing many new technologies very aggressively: not only did we seek to build enterprise-grade storage on commodity components (a deceptively daunting challenge in its own right), we were also building on entirely new elements like flash — and then topped it all off with an ambitious, from-scratch management stack. What were we possibly thinking by making so many bets at once? We made these bets not out of recklessness, but rather because they were essential elements of our Big Bet: that customers were sick of paying monopoly rents for enterprise storage, and that we could deliver a quantum leap in price-performance. (And if nothing else, let it be said that we got that one very, very right — seemingly too right, at times.) As for the specific technology bets, some have proven to be unblemished winners, while others have been more of a struggle. Sometimes the struggle was because the problem was hard, sometimes it was because the software was immature, and sometimes it was because a component that was assumed to have known failure modes had several (or many) unanticipated (or byzantine) failure modes. And in the worst cases, of course, it was all three…

I’m pleased to report that in 2010.Q1, we turned the corner on all fronts: in addition to just fixing a boatload of bugs in key areas like clustering and networking, we engaged in fundamental work like Dave’s rearchitecture of remote replication, adapted to new device failure modes as with Greg’s rearchitecture around resilience to HBA logic failure, and — perhaps most importantly — integrated critical firmware upgrades to each of the essential components of the I/O path (HBAs, SIM cards and disks). Also in 2010.Q1, we changed the way that we run the evaluation of the software, opening the door to many in our rapidly growing customer base. As a result, this release is already running on more customer production systems than any of its predecessors were at the time that they shipped — and on many more eval and production machines within our own walls.

2010.Q1 Features

But as important as quality is to this release, it’s not the full story: the release is also packed with major features like deduplication, iSER/SRP support, Kerberized NFS support and Fibre Channel support. Of these, the last is of particular interest to me because, in addition to my role as the Technical Lead for 2010.Q1, I was also responsible for the integration of FC support into the product. There was a lot of hard work here, but much of it was borne by John Forte and his COMSTAR team, who did a terrific job not only on the SCSI Target Management facility (STMF) but also on the base ALUA support necessary to allow proper FC operation in a cluster. As for my role, it was fun to cut the code to make all of this stuff work. Thanks to some great design work by Todd Patrick, along with some helpful feedback from field-facing colleagues like Ryan Matthews, I think we came up with a clean, functional interface. And working closely with both John and our test team, we have developed a rock-solid FC product. But of course (and as one might imagine), for me personally, the really gratifying bit was adding FC support to analytics. With just a pinch of DTrace and a bit of glue code, we now have visibility into FC operations by LUN, by project, by target, by initiator, by operation, by SCSI command, by size, by offset and by latency — and by any combination thereof.

As I was developing FC analytics, I would use as my source of load a silly disk benchmark I wrote back in the day when Adam and I were evaluating SSDs. Here, for example, is that benchmark running against a LUN that I named “thicktail-bench”:

The initiator here is the machine “thicktail”; it’s interesting to break down by initiator and see the paths by which thicktail is accessing the LUN:

(These names are human readable because I have added aliases for each of thicktail’s two HBA ports. Had I not added those aliases, we would see WWNs here.) The above shows us that thicktail is accessing the LUN through both of its paths, which is what we would expect (but good to visually confirm). Let’s see how it’s accessing the LUN in terms of operations:

Nothing too surprising here — this is the write phase of the benchmark and we have no log devices on this system, so we fully expect this. But let’s break down by offset:

The first time I saw this, I was surprised. Not because of what it shows — I wrote this benchmark, and I know what it does — but rather because it was so eye-popping to really see its behavior for the first time. In particular, this captures an odd phase I added to this benchmark: it does random writes across an increasingly large range. I did this because we had discovered that some SSDs did fine when the writes were confined to a small logical region, but broke down — badly — when the writes were over a larger region. And no, I don’t know why this was the case (presumably the firmware was in fragmented/wear-leveling/cache-busting hell); all I know is that we rejected any further exploration once the writes to the SSD were of a higher latency than that of my first hard drive: the IBM PC XT’s 10 MB ST-412, which had roughly 95 ms writes! (We felt that expecting an SSD to have better write latency than a hard drive from the first Reagan Administration was tough but fair…)
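For the curious, the essence of that phase is easy to sketch. What follows is a from-memory illustration in node.js rather than the original benchmark; the device path, I/O size and pass counts are invented. The idea is simply that the random writes are confined to a window that doubles on each pass, which is exactly the pattern that separated the SSDs that held up from the ones that fell apart.

    var fs = require('fs');

    // Sketch of the "growing range" phase: random writes within a window that
    // doubles each pass.  Writing to a raw device keeps the page cache from
    // hiding the device's true write latency.
    var fd = fs.openSync('/dev/rdsk/c0t0d0s0', 'r+');   // hypothetical target
    var bufSize = 8192;
    var buf = Buffer.alloc(bufSize);
    var writesPerPass = 10000;

    for (var range = 1 << 20; range <= 1 << 30; range *= 2) {
      var start = Date.now();
      for (var i = 0; i < writesPerPass; i++) {
        // block-aligned random offset within the current range
        var offset = Math.floor(Math.random() * (range / bufSize)) * bufSize;
        fs.writeSync(fd, buf, 0, bufSize, offset);
      }
      var avg = (Date.now() - start) / writesPerPass;
      console.log('range ' + range + ' bytes: ' + avg.toFixed(2) + ' ms/write');
    }
    fs.closeSync(fd);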

What now?

As part of our ongoing maturity as a product, we have developed a new role here at Fishworks: starting in 2010.Q1, the Technical Lead for the release will, as the release ships, transition to become the full-time Support Lead for that release in the field. This means many things for the way we support the product, but for our customers, it means that if and when you do have an issue on 2010.Q1, you should know that the buck on your support call will ultimately stop with me. We are establishing an unprecedented level of engineering integration with our support teams, and we believe that it will show in the support experience. So welcome to 2010.Q1 — and happy upgrading!

Posted on March 10, 2010 at 2:24 pm by admin · Permalink · 20 Comments
In: Fishworks

John Birrell

It is with a heavy heart that I announce that we in the DTrace community have lost one of our own: the indomitable John Birrell, who ported DTrace to FreeBSD, suffered a stroke and passed away on Friday, November 20, 2009.

We on Team DTrace knew John to be a remarkably talented and determined software engineer. As those who have attempted ports can attest, DTrace passes through rough country, and a port to a foreign system is a significant undertaking that requires mastery of both DTrace and (particularly) the target system. And in being the first to attempt a port, John’s challenge was that much greater — and his success in the endeavor a tribute to both his ability and (especially) his tenacity. For example, in performing the port, John decided that DTrace’s dependency on the cyclic subsystem was such that it, too, needed to be ported. He didn’t need to do this (and indeed, other ports have decided that an arbitrary resolution profile provider is not worth the significant trouble), but that he undertook this additional technical challenge anyway — even when any victory would remain hidden to all but the most expert eye — says a lot about John as both an engineer and a man. Later, when the port ran into some frustrating licensing issues, John once again did not give up. Rather, he backed up, and found a path forward that would satisfy all parties — even though it required significant technical reworking on his part. I have long believed that the mark of a great engineer is not how frequently they get knocked down, but rather how quickly they get back up — and in this regard, John was indisputably a giant.

John, you will be missed — not only by the FreeBSD community upon which you made an indelible mark, but by those of us in the DTrace community who only had the opportunity to work with you more recently. And while your legacy might remain anonymous to the future generations that will benefit from the fruits of your long labor, we will always know that it never would have happened without you. Thank you, and farewell.

(Those who wish to memorialize John may want to do as I did and make a donation in his memory to the FreeBSD Foundation.)

Posted on November 26, 2009 at 12:36 pm by bmc · Permalink · Comments Closed
In: Fishworks

Queue, CACM, and the rebirth of the ACM

As I have mentioned before (if in passing), I sit on the Editorial Advisory Board of ACM Queue, ACM‘s flagship publication for practitioners. In the past year, Queue has undergone a significant transformation, and now finds itself at the vanguard of a much broader shift within the ACM — one that I confess to once thinking impossible.

My story with respect to the ACM is like that of many practitioners, I suspect: I first became aware of the organization as an undergraduate computer science student, when it appeared to me as the embodiment of academic computer science. This perception was cemented by its flagship publication, Communications of the ACM, a magazine which, to a budding software engineer longing for the world beyond academia, seemed to be adrift in dreamy abstraction. So when I decided at the end of my undergraduate career to practice my craft professionally, I didn’t for a moment consider joining the ACM: it clearly had no interest in the practitioner, and I had no interest in it.

Several years into my career, my colleague David Brown mentioned that he was serving on the Editorial Board of a new ACM publication aimed at the practitioner, dubbed ACM Queue. The idea of the ACM focussing on the practitioner brought to mind a piece of Sun engineering lore from the old Mountain View days. Sometime in the early 1990s, the campus engaged itself in a water fight that pitted one building against the next. The researchers from the Sun Labs building built an elaborate catapult to launch water-filled missiles at their adversaries, while the gritty kernel engineers in legendary MTV05 assembled surgical tubing into simple but devastatingly effective three-person water balloon slingshots. As one might guess, the Labs folks never got their catapult to work — and the engineers doused them with volley after volley of water balloons. So when David first mentioned that the ACM was aiming a publication at the practitioner, my mental image was of lab-coated ACM theoreticians, soddenly tinkering with an overcomplicated contraption. I chuckled to myself at this picture, wished David good luck on what I was sure was going to be a fruitless endeavor, and didn’t think any more of it.

Several months after it launched, I happened to come across an issue of the new ACM Queue. With skepticism, I read a few of the articles. I found them to be surprisingly useful — almost embarrassingly so. I sheepishly subscribed, and I found that even the articles that I disagreed with — like this interview with an apparently insane Alan Kay — were more thought-provoking than enraging. And in soliciting articles on sordid topics like fault management from engineers like my long-time co-conspirator Mike Shapiro, the publication proved itself to be interested in both abstract principles and their practical application. So when David asked me to be a guest expert for their issue on system performance, I readily accepted. I put together an issue that I remain proud of today, with articles from Bart Smaalders on performance anti-patterns, Phil Beevers on development methodologies for high-performance software, me on DTrace — and topped off with an interview between Kirk McKusick and Jarod Jenson that, among its many lessons, warns us of the subtle perils of Java’s notifyAll.

Two years later, I was honored to be asked to join Queue’s Editorial Advisory Board, where my eyes were opened to a larger shift within the ACM: the organization — led by both its executive leadership in CEO John White and COO Pat Ryan and its past and present elected ACM leadership like Steve Bourne, Dave Patterson, Stu Feldman and Wendy Hall — was earnestly and deliberately seeking to bring the practitioner into the ACM fold. And I quickly learned that I was not alone in my undergraduate dismissal of Communications of the ACM: CACM was broadly viewed within the ACM as being woefully out of touch with academic and practitioner alike, with one past president confessing that he himself couldn’t stomach reading it — even when his name was on the masthead. There was an active reform movement within the ACM to return the publication to its storied past, and this trajectory intersected with the now-proven success of Queue: it was decided that the in-print vehicle for Queue would shift to become the Practice section of a new, revitalized CACM. I was elated by this change, for it meant that our superlative practitioner-authored content would at last enter the walled garden of the larger academic community. And for practitioners, a newly relevant CACM would also serve to expose us to a much broader swathe of computer science.

After much preparation, the new CACM launched in July 2008. Nearly a year later, I think it can safely be called a success. To wit, I point to two specific (if personal) examples from that first issue alone: thanks to the new CACM, my colleague Adam Leventhal’s work on flash memory and our integration of it in ZFS found a much broader readership than it would have otherwise — and Adam was recently invited to join an otherwise academic symposium on flash. And thanks to the new CACM, I — and thousands of other practitioners — were treated to David Shaw’s incredible Anton, the kind of work that gives engineers an optimistic excitement uniquely induced by such moon shots. By bringing together the academic and the practitioner, the new CACM is effecting a new ACM.

So, to my fellow practitioners: I strongly encourage you to join me as a member of the ACM. While CACM is clearly a compelling and tangible benefit, it is not the only reason to join the ACM. As professionals, I believe that we have a responsibility to our craft: to learn from our peers, to offer whatever we might have to teach, and to generally leave the profession better than we found it. In other professions — in law, in medicine, and in more traditional engineering domains — this professional responsibility is indoctrinated to the point of expectation. But our discipline perhaps shows its youth in our ignorance of this kind of professional service. To be fair, this cannot be laid entirely at the practitioner’s feet: the organizations that have existed for computer scientists have simply not been interested in attracting, cultivating, or retaining the practitioner. But with the shift within the ACM embodied by the new CACM, this is changing. The ACM now aspires to be the organization that represents all computer scientists — not just those who teach students, perform research and write papers, but also those of us who cut code, deliver product and deploy systems for a living. Joining the ACM helps it make good on this aspiration; we practitioners cannot effect this essential change from outside its membership. And we must not stop at membership: if there is an article that you might like to write for the broader ACM audience, or an article that you’d like to see written, or a suggestion you might have for a CTO roundtable or a practitioner you think should be interviewed, or, for that matter, any other change that you might like to see in the ACM to further appeal to the practitioner, do not stay silent; the ACM has given us practitioners a new voice — but it is only good if we use it!

Posted on May 15, 2009 at 12:58 am by bmc · Permalink · 6 Comments
In: Fishworks

Moore’s Outlaws

My blog post eulogizing SPEC SFS has elicited quite a bit of reaction, much of it from researchers and industry observers who have drawn similar conclusions. While these responses were very positive, my polemic garnered a different reaction from SPEC SFS stalwart NetApp, where, in his response defending SPEC SFS, my former colleague Mike Eisler concocted this Alice-in-Wonderland defense of the lack of a pricing disclosure in the benchmark:

Like many industries, few storage companies have fixed pricing. As much as heads of sales departments would prefer to charge the same highest price to every customer, it isn’t going to happen. Storage is a buyers’ market. And for storage devices that serve NFS and now CIFS, the easily accessible numbers on spec.org are yet another tool for buyers. I just don’t understand why a storage vendor would advocate removing that tool.

Mike’s argument — and I’m still not sure that I’m parsing it correctly — appears to be that the infamously opaque pricing in the storage business somehow helps customers because they don’t have to pay a single “highest price”! That is, that the lack of transparent pricing somehow reflects the “buyers’ market” in storage. If that is indeed Mike’s argument, someone should let the buyers know how great they have it — those silly buyers don’t seem to realize that the endless haggling over software licensing and support contracts is for them!

And if that argument isn’t contorted enough for you, Mike takes a second tack:

In storage, the cost of the components to build the device falls continuously. Just as our customers have a buyers’ market, we storage vendors are buyers of components from our suppliers and also enjoy a buyers’ market. Re-submitting numbers after a hunk of sheet metal declines in price is silly.

His ludicrous “sheet metal” example aside (what enterprise storage product contains more than a few hundred bucks of sheet metal?), Mike’s argument appears to be that technology curves like Moore’s Law and Kryder’s Law lead to enterprise storage prices that are falling with such alarming speed that they’re wrong by the time they are so much as written down! If it needs to be said, this argument is absurd on many levels. First, the increases in transistor density and areal storage density tend to result in more computing bandwidth and more storage capacity per dollar, not lower absolute prices. (After all, your laptop is three orders of magnitude more powerful than a personal computer circa 1980 — but it’s certainly not a thousandth of the price.)

Second, has anyone ever accused the enterprise storage vendors of dropping their prices in pace with these laws — or even abiding by them in the first place? The last time I checked, the single-core Mobile Celeron that NetApp currently ships in their FAS2020 and FAS2050 — a CPU with a criminally small 256K of L2 cache — is best described as a Moore’s Outlaw: a CPU that, even when it first shipped six (six!) years ago, was off the curve. (A single-core CPU with 256K of L2 cache was abiding by Moore’s Law circa 1997.) Though it’s no wonder that NetApp sees plummeting component costs when they’re able to source their CPUs by dumpster diving…

Getting back to SPEC SFS: even if the storage vendors were consistently reflecting technology improvements, SPEC SFS is (as I discussed) a drive latency benchmark that doesn’t realize the economics of these curves anyway; drives are not rotating any faster year-over-year, having leveled out at 15K RPM some years ago due to some nasty physical constraints (like, the sound barrier). So there’s no real reason to believe that the 2,016 15K RPM drives used in NetApp’s opulent 1,032,461 op submission are any cheaper today than when this configuration was first submitted three years ago. Yes, those same drives would likely have more capacity (being 146GB or 300GB and not the 72GB in the submission), but recall that these drives are being short-stroked to begin with — so to the extent that the additional capacity is used by the benchmark at all, it will only be used to assure even less head movement!

Finally, even if Mike were correct that technology advances result in ever falling absolute prices, it still should not prohibit price disclosures. We all understand that prices reflect a moment in time, and if natural inflation does not dissuade us from price disclosures, nor should any technology-induced deflation.

So to be clear: SPEC SFS needs pricing disclosures. TPC has them, SPC has them — and SFS needs them if the benchmark has any aspiration to enduring relevance. While SPEC SFS’s flaws run deeper than the missing price disclosure, the disclosure would at least keep the more egregious misbehaviors in check — and it would also (I believe) show storage buyers the degree to which the systems measured by SPEC SFS do not in fact correspond to the systems that they purchase and deploy.

One final note: in his blog entry, Mike claims that “SPEC SFS performance is the minimum bar for entry into the NAS business.” If he genuinely believes this, Mike may want to write a letter to the editors of InfoWorld: in their recent review of our Sun Storage 7210, they had the audacity to ignore the lack of SPEC SFS results for the appliance, instead running their own benchmarks. Their rating for the product’s performance? 10 out of 10. What heresy!

Posted on February 19, 2009 at 9:28 pm by bmc · Permalink · 11 Comments
In: Fishworks

Eulogy for a benchmark

I come to bury SPEC SFS, not to praise it.

When we at Fishworks set out, our goal was to build a product that would disrupt the enterprise NAS market with revolutionary price/performance. Based on the economics of Sun’s server business, it was easy to know that we would deliver on the price half of that promise, but the performance half promised to be more complicated: while price is concrete and absolute, the notion of performance fluctuates with environment, workload and expectations. To cut through these factors, computing systems have long had their performance quantified with benchmarks that hold environment and workload constant, and as we began to learn about NAS benchmarks, one in particular loomed large among the vendors: SPEC’s system file server benchmark, SFS. Curiously, the benchmark didn’t come up much in conversations with customers, who seemed to prefer talking about raw capabilities like maximum delivered read bandwidth, maximum delivered write bandwidth, maximum synchronous write IOPS (I/O operations per second) or maximum random read IOPS. But it was clear that the entrenched NAS vendors took SPEC SFS very seriously (indeed, to the point that they seemed to use no other metric to describe the performance of the system), and upstarts seeking to challenge them seemed to take it even more seriously, so we naturally assumed that we too should use SPEC SFS as the canonical metric of our system…

But as we explored SPEC SFS — as we looked at the workload that it measures, examined its run rules, studied our rivals’ submissions and then contrasted that to what we saw in the market — an ugly truth emerged: whatever connection to reality it might have once had, SPEC SFS has long since become completely divorced from the way systems are actually used. And worse than simply being outdated or irrelevant, SPEC SFS is so thoroughly misguided as to implicitly encourage vendors to build the wrong systems — ones that are increasingly outrageous and uneconomic. Quite the opposite of being beneficial to customers in evaluating systems, SPEC SFS has decayed to the point that it is serving the opposite ends: by rewarding the wrong engineering decisions, punishing the right ones and eliminating price from the discussion, SPEC SFS has actually led to lower performing, more expensive systems! And amazingly, in the update to SPEC SFS — SPEC SFS 2008 — the benchmark’s flaws have not only gone unaddressed, they have metastasized. The result is such a deformed monstrosity that — like the index case of some horrific new pathogen — its only remaining utility lies on the autopsy table: by dissecting SPEC SFS and understanding how it has failed, we can seek to understand deeper truths about benchmarks and their failure modes.

Before taking the scalpel to SPEC SFS, it is worth considering system benchmarks in the abstract. The simplest system benchmarks are microbenchmarks that measure a small, well-defined operation in the system. Their simplicity is their great strength: because they boil the system down to its most fundamental primitives, the results can serve as a truth that transcends the benchmark. That is, if a microbenchmark measures a NAS box to provide 1.04 GB/sec read bandwidth from disk, then that number can be considered and understood outside of the benchmark itself. The simplicity of microbenchmarks conveys other advantages as well: microbenchmarks are often highly portable, easily reproducible, straightforward to execute, etc.
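A read-bandwidth microbenchmark of the sort described above can be as simple as the following sketch (node.js, with an arbitrary path and buffer size; a real measurement would also need to defeat or account for caching):

    var fs = require('fs');

    // Trivial sequential-read microbenchmark: stream the target and report
    // delivered bandwidth.  The number it produces is a primitive that can
    // be understood outside of the benchmark itself.
    var fd = fs.openSync('/path/to/big/file-or-lun', 'r');
    var buf = Buffer.alloc(1024 * 1024);
    var total = 0;
    var start = Date.now();
    var n;

    while ((n = fs.readSync(fd, buf, 0, buf.length, null)) > 0)
      total += n;

    var secs = (Date.now() - start) / 1000;
    console.log((total / secs / 1e9).toFixed(2) + ' GB/sec');
    fs.closeSync(fd);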

Unfortunately, systems themselves are rarely as simple as their atoms, and microbenchmarks are unable to capture the complex interactions of a deployed system. More subtly, microbenchmarks can also lead to the wrong conclusions (or, worse, the wrong engineering decisions) by giving excessive weight to infrequent operations. In his excellent article on performance anti-patterns, my colleague Bart Smaalders discussed this problem with respect to the getpid system call. Because measuring getpid has been the canonical way to measure system call performance, some operating systems have “improved” system call performance by turning getpid into a library call. This effort is misguided, as are any decisions based on the results of measuring it: as Bart pointed out, no real application calls getpid frequently enough for it to matter in terms of delivered performance.

Making benchmarks representative of actual loads is a more complicated undertaking, with any approach stricken by potentially serious failings. The most straightforward approach is taken by application benchmarks, which run an actual (if simplified) application on the system, and measure its performance. This approach has the obvious advantage of measuring actual, useful work — or at least one definition of it. This means, too, that system effects are being taken into consideration, and that one can have confidence that more than a mere back eddy of the system is being measured. But an equally obvious drawback to this approach is that it is only measuring one application — an application which may not be at all representative of a deployed system. Moreover, because the application itself is often simplified, application benchmarks can still exhibit the microbenchmark’s failings of oversimplification. From the perspective of storage systems, application benchmarks have a more serious problem: because application benchmarks require a complete, functional system, they make it difficult to understand and quantify merely the storage component. From the application’s perspective, the system is opaque; who is to know if, say, an impressive TPC result is due to the storage system rather than more mundane factors like database tuning?

Synthetic benchmarks address this failing by taking the hybrid approach of deconstructing application-level behavior into microbenchmark-level operations that they then run in a mix that matches the actual use. Ideally, synthetic benchmarks combine the best of both variants: they offer the simplicity and reproducibility of the microbenchmarks, but the real-world applicability of the application-level benchmarks. But beneath this promise of synthetic benchmarks lurks an opposite peril: if not executed properly, synthetic benchmarks can embody the worst properties of both benchmark variants. That is, if a synthetic benchmark combines microbenchmark-level operations in a way that does not in fact correspond to higher level behavior, it has all of the complexity, specificity and opacity of the worst application-level benchmarks — with the utter inapplicability to actual systems exhibited by the worst microbenchmarks.
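Mechanically, a synthetic benchmark is little more than a weighted draw over a set of primitive operations, which is why everything hinges on the weights. A generic sketch (the mix below is illustrative only, not SPEC’s):

    // Draw operations at random according to a prescribed mix; the fidelity
    // of any result depends entirely on whether the mix reflects real
    // workloads.
    var mix = [
      { op: 'lookup',  pct: 30 },
      { op: 'getattr', pct: 25 },
      { op: 'read',    pct: 20 },
      { op: 'write',   pct: 10 },
      { op: 'other',   pct: 15 }
    ];

    function pickOperation() {
      var r = Math.random() * 100;
      for (var i = 0; i < mix.length; i++) {
        if (r < mix[i].pct)
          return mix[i].op;
        r -= mix[i].pct;
      }
      return mix[mix.length - 1].op;    // guard against rounding
    }

    // Sanity check: tally a million draws and confirm the generated mix.
    var counts = {};
    for (var i = 0; i < 1000000; i++) {
      var op = pickOperation();
      counts[op] = (counts[op] || 0) + 1;
    }
    console.log(counts);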

As one might perhaps imagine from the foreshadowing, SPEC SFS is a synthetic benchmark: it combines NFS operations in an operation mix designed to embody “typical” NFS load. SPEC SFS has evolved over more than a decade, having started life as NFSSTONE and then morphing into NHFSSTONE (ca. 1992) and then LADDIS (a consortium of Legato, Auspex, DEC, Data General, Interphase and Sun) before becoming a part of SPEC. (As an aside, “LADDIS” is clearly BUNCH-like in being a portent of a slow and miserable death — may Sun break the curse!) Here is the NFS operation mix for SPEC SFS over its lifetime:

NFS operation    SFS 1.1 (LADDIS)    SFS 2.0/3.0 (NFSv2)    SFS 2.0/3.0 (NFSv3)    SFS 2008

LOOKUP           34%                 36%                    27%                    24%
READ             22%                 14%                    18%                    18%
WRITE            15%                 7%                     9%                     10%
GETATTR          13%                 26%                    11%                    26%
READLINK         8%                  7%                     7%                     1%
READDIR          3%                  6%                     2%                     1%
CREATE           2%                  1%                     1%                     1%
REMOVE           1%                  1%                     1%                     1%
FSSTAT           1%                  1%                     1%                     1%
SETATTR          -                   -                      1%                     4%
READDIRPLUS      -                   -                      9%                     2%
ACCESS           -                   -                      7%                     11%
COMMIT           -                   -                      5%                     N/A

The first thing to note is that the workload hasn’t changed very much over the years: it started off being 58% metadata read operations (LOOKUP, GETATTR, READLINK, READDIR, READDIRPLUS, ACCESS), 22% read operations and 15% write operations, and it’s now 65% metadata read operations, 18% read operations and 10% write operations. So where did that original workload come from? From an unpublished study at Sun conducted in 1986! (I recently interviewed a prospective engineer who was not yet born when this data was gathered — and I’ve always thought it wise to be wary of data older than oneself.) The updates to the operation mix are nearly as dubious: according to David Robinson’s thorough paper on the motivation for SFS 2.0, the operation mix for SFS 3.0 was updated based on a survey of 750 Auspex servers running NFSv2 — which even at the time of that paper’s publication in 1999 must have elicited some cocked eyebrows about the relevance of workloads on such clunkers. And what of the most recent update? The 2008 reaffirmation of the decades-old workload is, according to SPEC, “based on recent data collected by SFS committee members from thousands of real NFS servers operating at customer sites.” SPEC leaves unspoken the uncanny coincidence that the “recent data” pointed to an identical read/write mix as that survey of those now-extinct Auspex dinosaurs a decade ago — plus ça change, apparently!

Okay, so perhaps the operation mix is paleolithic. Does that make it invalid? Not necessarily, but this particular operation mix does appear to be something of a living fossil: it is biased heavily towards reads, with a mere 15% of operations being writes (and a third of these being metadata writes). While I don’t doubt that this is an accurate snapshot of NAS during the Reagan Administration, the world has changed quite a bit since then. Namely, DRAM sizes have grown by nearly five orders of magnitude (!), and client caching has grown along with it — both in the form of traditional NFS client caching, and in higher-level caching technologies like memcached or (at a larger scale) content distribution networks. This caching serves to satisfy reads before they ever make it to the NAS head, which can leave the NAS head with those operations that cannot be cached, worked around or generally ameliorated — which is to say, writes.

If the workload mix is dated because it does not express the rise of DRAM as cache, one might think that this would also shine through in the results, with systems increasingly using DRAM cache to achieve a high SPEC SFS result. But this has not in fact transpired, and the reason it hasn’t brings us to the first fatal flaw of SPEC SFS: instead of making the working set a parameter of the benchmark — and having a result be not a single number but rather a graph of results given different working set sizes — the working set size is dictated by the desired number of operations per second. In particular, in running SPEC SFS 3.0, one is required to have ten megabytes of underlying filesystem for every operation per second. (Of this, 10% is utilized as the working set of the benchmark.) This “scaling rule” is a grievous error, for it diminishes the importance of cache as load climbs: in order to achieve higher operations per second, one must have larger working sets — even though there is absolutely no reason to believe that such a simple, linear relationship exists in actual workloads. (Indeed, in my experience, if anything the opposite may be true: those who are operation intensive seem to have smaller working sets, not larger ones — and those with larger amounts of data tend to focus on bandwidth more than operations per second.)

Interestingly, when this scaling rule was established, it was done so with some misgivings. According to David Robinson’s paper (emphasis added):

From the large file set created, a smaller working set is chosen for actual operations. In SFS 1.0 the working set size was 20% of file set size or 1 MB per op/sec. With the doubling of the file set size in SFS 2.0, the working set was cut in half to 10% to maintain the same working set size. Although the amount of disk storage grows at a rapid rate, the amount of that storage actually being accessed grows at a much slower rate. [ ... ] A 10% working set size may still be too large. Further research in this area is needed.

David, at least, seems to have been aware that this scaling rule was specious even a decade ago. But if the scaling rule was suspect in the mid-1990s, it has become absurd since. To see why, take, for example, NetApp’s reasonably recent result of 137,306 operations per second. Getting to this number requires 10 MB per op/sec, or about 1.3 TB. Now, 10% of this — or about 130GB — will be accessed over the course of the benchmark. The problem is that from the perspective of caching, the only hope here is to cache metadata, as the data itself exceeds the size of cache and the data access pattern is essentially random. With the cache effectively useless, the engineering problem is no longer designing intelligent caching architectures, but rather designing a system that can quickly serve data from disk. Solving the former requires creativity, trade-offs and balance — but solving the latter just requires brute force: fast drives and more of ’em. And in this NetApp submission, the force is particularly shock-and-awe: not just 15K RPM drives, but a whopping 224 144GB 15K RPM drives — delivering 32TB of raw capacity for a mere 1.3TB filesystem. Why would anyone overprovision storage by a factor of 20? The answer is that with the filesystem presumably designed to allocate from outer tracks before inner ones, allocating only 5% of available capacity guarantees that all data will live on those fastest, outer tracks. This practice — so-called short-stroking — means both faster transfers and minimal head movement, guaranteeing that any I/O operation can be satisfied in just the rotational latency of a 15K RPM drive.

Short-stroking 224 15K RPM drives is the equivalent of fueling a dragster with nitromethane — it is top performance at a price so high as to be useless off the dragstrip. It’s a safe bet that if one actually had this problem — if one wished to build a system to optimize for random reads within a 130GB working set over a total data set of 1.3TB — one would never consider such a costly solution. How, then, would one solve this particular problem? Putting the entire data set on flash would certainly become tempting: an all flash-based solution is both faster and cheaper than the fleet of nitro-belching 15K RPM drives. But if this is so, does it mean that the future of SFS is to be flash-based configurations vying for king of an increasingly insignificant hill? It might have been so were it not for the revisions in SPEC SFS 2008: the scaling rule has gone from absurd to laughably ludicrous, as what used to be 10MB per op/sec is now 120MB per op/sec. And as if this recklessness were not enough, the working set ratio has additionally been increased from 10% to 30% of total storage. One can only guess what inspired this descent into madness, but the result is certainly insane: to achieve this same 137,306 ops will require a 17TB filesystem — of which an eye-watering 5TB will be hot! This is nearly a 40X increase in working set size, without (as far as I can tell) any supporting data. At best, David’s warning that the scaling rule may have been excessive has been roundly ignored; at worst, the vendors have deliberately calculated how to adjust the problem posed by the benchmark such that thousands of 15K RPM drives remain the only possible solution, even in light of new technologies like flash. But it’s hard to know for sure which case SPEC has fallen into: the decision to both increase the scaling rule and increase the working set ratio is so terrible that incompetence becomes indistinguishable from malice.
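The arithmetic behind those numbers is worth spelling out. This is just a back-of-the-envelope check using the figures above, with 1 TB treated as a million megabytes for round numbers:

    // What the SPEC SFS scaling rules demand of a submission targeting the
    // 137,306 op/sec result discussed above.
    var ops = 137306;

    // SFS 3.0: 10 MB of filesystem per op/sec, of which 10% is the working set.
    var fs30 = ops * 10;          // 1,373,060 MB -- the ~1.3 TB above
    var ws30 = fs30 * 0.10;       // ~137,000 MB -- the ~130 GB that is hot

    // SFS 2008: 120 MB per op/sec, of which 30% is the working set.
    var fs08 = ops * 120;         // ~16.5e6 MB -- the ~17 TB above
    var ws08 = fs08 * 0.30;       // ~4.9e6 MB -- the ~5 TB that is hot

    console.log('SFS 3.0:  ~' + (fs30 / 1e6).toFixed(2) + ' TB filesystem, ~' +
        (ws30 / 1e3).toFixed(0) + ' GB working set');
    console.log('SFS 2008: ~' + (fs08 / 1e6).toFixed(2) + ' TB filesystem, ~' +
        (ws08 / 1e6).toFixed(2) + ' TB working set');
    console.log('working set growth: ~' + (ws08 / ws30).toFixed(0) + 'x');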

Be it due to incompetence or malice, SPEC’s descent into a disk benchmark while masquerading as a system benchmark does worse than simply mislead the customer, it actively encourages the wrong engineering decisions. In particular, as long as SPEC SFS is thought to be the canonical metric of NFS performance, there is little incentive to add cache to NAS heads. (If SPEC SFS isn’t going to use it, why bother?) The engineering decisions made by the NAS market leaders reflect this thinking, as they continue to peddle grossly undersized DRAM configurations — like NetApp’s top-of-the-line FAS6080 and its meager maximum of 32GB of DRAM per head! (By contrast, our Sun Storage 7410 has up to 128GB of DRAM — and for a fraction of the price, I hasten to add.) And it is of no surprise that none of the entrenched players conceived of the hybrid storage pool; SPEC SFS does little to reward cache, so why focus on it? (Aside from the fact that it delivers much faster systems, of course!)

While SPEC SFS is hampered by its ancient workload and made ridiculous by its scaling rule, there is a deeper and more pernicious flaw in SPEC SFS: there is no pricing disclosure. This flaw is egregious, unconscionable and inexcusable: as the late, great Jim Gray made clear in his classic 1985 Datamation paper, one cannot consider performance in a vacuum — when purchasing a system, performance must be considered relative to price. Gray tells us how the database community came to understand this: in 1973, a bank received two bids for a new transaction system. One was for $5M from a mini-computer vendor (e.g. DEC with its PDP-11), the other for $25M from a traditional mainframe vendor (presumably IBM). The solutions offered identical performance; the fact that there was a 5X difference in price (and therefore price/performance) “crystallized” (in Gray’s words) the importance of price in benchmarking — and Gray’s paper in turn enshrined price as an essential metric of a database system. (Those interested in the details of the origins of Gray’s iconoclastic Datamation paper and the long shadow that it has cast are encouraged to read David DeWitt and Charles Levine’s excellent retrospective on Gray’s work in database performance.) Today, the TPC benchmarks that Gray inspired have pricing at their heart: each submission is required to have a full disclosure report (FDR) that must include the price of the system and everything that that price includes, including part numbers and per-part pricing. Moreover, the system must be orderable: customers must be able to call up the vendor and demand the specified config at the specified price. This is a beautiful thing, because TPC allows for competition not just on performance (“TpmC” in TPC parlance) but also price/performance ($/TpmC). And indeed, in the 1990s, this is exactly what happened as low $/TpmC submissions from the likes of SQLServer running on Dell put competitive pressure on vendors like Sun to focus on price/performance — with customers being the clear winners in the contest.

By contrast, SPEC SFS’s absence of a pricing disclosure forbids competitors from competing on price/performance, instead encouraging absolute performance at any cost. This was taken to the logical extreme with NetApp and their preposterous 1,032,461 result — which took but 2,016 short-stroked 15K RPM drives! Steven Schwartz took NetApp to task for the exorbitance of this configuration, pointing out that NetApp’s configuration was two to four times more expensive on a per-op basis than competitive results in his blog entry aptly titled “Benchmarks – Lies and the Lying Liars Who Tell Them.”

But are lower results any less outrageous? Take again that NetApp config. We don’t know how much that 3170 and its 224 15K RPM drives will cost you because NetApp isn’t forced to disclose it, but suffice it to say that it’s quite a bit — almost certainly seven figures undiscounted. But for the sake of argument, let’s assume that you get a steep discount and you somehow get this clustered, racked-out config to price out at $500K. Even then, given the meager 1.3TB delivered for purposes of the benchmark, this system costs an eye-watering $384/GB — which is about 8X more expensive than DRAM! So even in the unlikely event that your workload and working set match SPEC SFS, you would still be better off blowing your wallet on a big honkin’ RAM disk than buying the benchmarked configuration. And this embodies the essence of the failings of SPEC SFS: the (mis)design of the benchmark demands economic insanity — but the lack of pricing disclosure conceals that insanity from the casual observer. The lesson of SPEC SFS is therefore manifold: be skeptical of a system benchmark that is synthetic, be suspicious of a system benchmark that lacks a price disclosure — and be damning when they are one and the same.
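And the dollars-per-gigabyte figure above follows directly from the charitable $500K assumption and the roughly 1.3 TB that the configuration actually delivers for the benchmark:

    // Cost per delivered gigabyte under the (generous) assumptions above.
    var assumedPrice = 500000;      // USD -- a steeply discounted guess
    var deliveredGB  = 1300;        // the ~1.3 TB usable for the benchmark
    var costPerGB    = assumedPrice / deliveredGB;

    console.log('$' + costPerGB.toFixed(0) + '/GB');   // ~$385/GB, i.e. the
                                                       // ~$384/GB figure above
    // The "8X more expensive than DRAM" comparison implies DRAM at roughly
    // costPerGB / 8, i.e. somewhere around $48/GB at the time.
    console.log('implied DRAM price: ~$' + (costPerGB / 8).toFixed(0) + '/GB');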

With the SPEC SFS carcass dismembered and dispensed with, where does this leave Fishworks and our promise to deliver revolutionary price/performance? After considering SPEC SFS (and rejecting it, obviously), we came to believe that the storage benchmark well was so poisoned that the best way to demonstrate the performance of the product would be simple microbenchmarks that customers could run themselves — which had the added advantage of being closer to the raw capabilities that customers wanted to talk about anyway. In this spirit, see, for example, Brendan’s blog entry on the 7410’s performance limits. Or, if you’re more interested in latency than bandwidth, check out his screenshots of the L2ARC in action. Most importantly: don’t take our word for it — get one yourself, run it with your workload, and then use our built-in analytics to understand not just how the system runs, but also why. We have, after all, designed our systems to be run, not just to be sold…

Posted on February 2, 2009 at 2:22 am by bmc · Permalink · 10 Comments
In: Fishworks

The Hunter becomes the Hunted

I recently came into a copy of Dave Hitz’s new book How to Castrate a Bull. A full review is to come, but I couldn’t wait to serve up one delicious bit of irony. Among the book’s many unintentionally fascinating artifacts is NetApp’s original business plan, dated January 16, 1992. In that plan, NetApp’s proposed differentiators are high availability, easy administration, high performance and low price — differentiators that are eerily mirrored by Fishworks’ proposed differentiators nearly fourteen years later. But the irony goes non-linear when Hitz discusses the “Competition” in that original business plan:

Sun Microsystems is the main supplier of NFS file servers. Sun sells over 2/3 of all NFS file servers. Our initial product will be positioned to cost significantly less than Sun’s lower-end server, with performance comparable to their high-end servers.

It is unlikely that Sun will be able to produce a server that performs as well, or costs as little, for several reasons:

  • Sun’s server hardware is inherently more expensive because it has lower production volumes than our components [...]
  • The culture among software engineers at Sun places little value on performance.
  • The structure of Sun — with SunSoft doing NFS and UNIX, and SMCC [Sun Microsystems Computer Corporation] doing hardware — makes it difficult for Sun to produce products that provide creative software-hardware system solutions.
  • Sun’s distribution costs will likely remain high due to the level of technical support required to install and manage a Sun server.

While I had known that NetApp targeted Sun in its early days, I had no idea how explicit that attack had been. Now, it must be said that Hitz was right about Sun on all counts — and that NetApp thoroughly disrupted Sun with its products, ultimately coming to dominate the NAS market itself. But it is stunning the degree to which NetApp’s own business plan — nearly verbatim — is being used against it, not least by the very company that NetApp originally disrupted. (See Slide 5 of the Fishworks elevator pitch — and use your imagination.) Indeed, like NetApp in the 1990s, the Sun Storage 7000 Series is not disruptive by accident, and as I elaborate on in this presentation, we are very deliberately positioning the product to best harness the economic winds blowing so strongly in its favor.

NetApp’s success with their original business plan and our nascent success with Fishworks point to the most important lesson that the history of technology has to teach: economics always wins — a product or a technology or a company ultimately cannot prop up unsustainable economics. Perhaps unlike Hitz, however, I had to learn that lesson the hard way: in the post-bubble meltdown that brought Sun within an inch of its life. But then again, perhaps Hitz has yet to have his final lesson on the subject…

Posted on January 22, 2009 at 12:50 am by bmc · Permalink · 11 Comments
In: Fishworks

Catching disk latency in the act

Today, Brendan made a very interesting discovery about the potential sources of disk latency in the datacenter. Here’s a video we made of Brendan explaining (and demonstrating) his discovery:



This may seem silly, but it’s not farfetched: Brendan actually made this discovery while exploring drive latency that he had seen in a lab machine due to a missing screw on a drive bracket. (!) Brendan has more details on the discovery, demonstrating how he used the Fishworks analytics to understand and visualize it.

If this has piqued your curiosity about the nature of disk mechanics, I encourage you to read Jon Elerath’s excellent ACM Queue article, Hard disk drives: the good, the bad and the ugly! As Jon notes, noise is a known cause of what is called a non-repeatable runout (NRRO) — though it’s unclear if Brendan’s shouting is exactly the kind of noise-induced NRRO that Jon had in mind…

Posted on December 31, 2008 at 4:57 pm by admin · Permalink · 19 Comments
In: Fishworks