Lessons from a decade of blogging

I started my blog June 17, 2004, tempted by the opportunity of Sun’s blogging policy, and cajoled by Bryan Cantrill’s presentation to the Solaris Kernel Team “Guerrilla Marketing” (net: Sun has forgotten about Solaris so let’s get the word out). I was a skeptical blogger. I even resisted the contraction “blog”, insisting on calling it “Adam Leventhal’s Weblog” as if linguistic purity would somehow elevate me above the vulgar blogspotter opining over toothpaste brands. (That linguistic purity did not, however, carry over into my early writing — my goodness it was painful to open that unearthed time capsule.)

A little about my blog. When I started blogging I was worried that I’d need to post frequently to build a readership. That was never going to happen. Fortunately aggregators (RSS feeds then; Twitter now) and web searches are far more relevant. My blog is narrow. There’s a lot about DTrace (a technology I helped develop), plenty in the last four years about Delphix (my employer), and samplings of flash memory, Galois fields, RAID, and musings on software and startups. The cumulative intersection consists of a single person. But — and this is hard to fathom — I’ve hosted a few hundred thousand unique visitors over the years. Aggregators pick up posts soon after posting; web searches drive traffic for years even on esoteric topics.

Ten years and 172 posts later, I wanted to see what lessons I could discern. So I turned to Google Analytics.

Most popular

3. I was surprised to see that my posts on double- and triple-parity RAID for ZFS have been among the most consistently read over the years since posting in 2006 and 2009 respectively. The former is almost exclusively an explanation of abstract algebra that I was taught in 2000, applied in 2006, and didn’t understand properly until 2009 — when I wrote the post. The latter is catharsis from discovering errors in the published basis for our RAID implementation. I apparently considered it a personal affront.

2. When Oracle announced their DTrace port to Linux in 2011, a pair of posts broke the news and then deflated expectations — another personal affront — as the Oracle Linux effort fell short (and continues to fall short today). I had learned the lesson earlier that DTrace + a more popular operating system always garnered more interest.

1. In 2008 I posted about a defect in Apple’s DTrace implementation that was the result of its paranoid DRM protection. This was my perfect storm of blogging popularity: DTrace, a more popular OS (Mac OS X!), Apple-bashing, and DRM! The story was snapped up by Slashdot (Reddit of the mid-2000s) as “Apple Crippled Its DTrace Port” and by The Register’s Ashlee Vance (The Register’s Chris Mellor of the mid-2000s) as “Apple cripples Sun’s open source jewel: Hollywood love inspires DTrace bomb.” It’s safe to say that I’m not going to see another week with 49,312 unique visitors any time soon. And to be clear I’m deeply grateful to that original DTrace team at Apple — the subject of a different post.

And many more…

Some favorites of mine and of readers (views, time on site, and tweets) over the years:

2004 Solaris 10 11-20. Here was a fun one. Solaris 10 was a great release. Any of the top ten features would have been the headliner in a previous release so I did a series on some of the lesser features that deserved to make the marquee. (If anyone would like to fill in number 14, dynamic System V IPC, I’d welcome the submission.)

2004 Inside nohup -p. The nohup command had remained virtually untouched since being developed at Bell Labs by the late Joseph Ossanna (described as “a peach and a ramrod”). I enjoyed adding some 21st century magic, and suffocating the reader with the details.

2005 DTrace is open. It truly was an honor to have DTrace be the first open source component of Solaris. That I took the opportunity to descend to crush depth was a testament to the pride I took in that code. (tsj and Kamen, I’m seeing your comments now for the first time and will respond shortly.)

2005 Sanity and FUD. This one is honestly adorable. Only a naive believer could have been such a passionate defender of what would become Oracle Solaris.

2005 DTrace in the JavaOne Keynote. It was a trip to present to over 10,000 people at Moscone. I still haven’t brought myself to watch the video. Presentation tip: to get comfortable speaking to an audience of size N simply speak to an audience of size 10N.

2005 The mysteries of _init. I geeked out about some of the voodoo within the linker. And I’m glad I did because a few weeks ago that very post solved a problem for one of my colleagues. I found myself reading the post with fascination (of course having forgotten it completely).

2008 Hybrid Storage Pools in CACM. In one of my first published articles, I discussed how we were using flash memory — a niche product at the time — as a component in enterprise storage. Now, of course, flash has always been the obvious future of storage; no one had yet realized that at the time.

2012 Hardware Engineer. At Fishworks (building the ZFS Storage Appliance at Sun) I got the nickname “Adam Leventhal, Hardware Engineer” for my preternatural ability to fit round pegs in square holes; this post catalogued some of those experiments.

2013 The Holistic Engineer. My thoughts on what constitutes a great engineer; this has become a frequently referenced guidepost within Delphix engineering.

2013 Delphix plus three years. Obviously I enjoy anniversaries. This was both a fun one to plan and write, and the type of advice I wish I had taken to heart years ago.

You said something about lessons?

The popularity of those posts about DTrace for Mac OS X and Linux suggested to me that controversy is more interesting than data. While that may be true, I think the real driver was news. With most tech publications regurgitating press releases, people appreciate real investigation and real analysis. (Though Google Analytics does show that popularity is inversely proportional to time on site, i.e. thorough reading.)

If you want people to read (and understand) your posts, run a draft through one of those online grade-level calculators. Don’t be proud of writing at a 12th grade level; rewrite until 6th graders can understand. For complex subjects that may be difficult, but edit for clarity. Simpler is better.

Everyone needs an editor. I find accepting feedback to be incredibly difficult — painful — but it yields a better result. Find someone you trust to provide the right kind of feedback.

Early on blogging seemed hokey. Today it still can feel hokey — dispatches that feel directed at no one in particular. But I’d encourage just about any engineer to start a blog. It forces you to organize your ideas in a different and useful way, and it connects you with the broader community of users, developers, employees, and customers. Over the past ten years I’ve walked into many customer meetings where people start the conversation already aware of the topics and technology I care about.

Finally, reading those old blog posts was painful. I got (slightly) better the only way I knew how: repetition. Get the first 100 posts out of the way so that you can move on to the next 100. Don’t worry about readership. Don’t worry about popularity. Interesting content will find an audience, but think about your reader. Just start writing.

Posted on June 17, 2014 by ahl · In: Other

Enterprise support and the term abroad

Delphix customers include top companies across a wide range of industries, most of them operating around the clock. Should a problem arise, they require support from Delphix around the clock as well. To serve those needs we’ve drawn from industry best practices while recently mixing in an unconventional approach, with the goal of providing the best possible customer service regardless of when a customer encounters a problem.

There are three common approaches to support: outsourcing, shifts, and “follow the sun”. Outsourcing is economical but quality and consistency suffer especially for difficult cases. Asking outstanding engineers to cover undesirable shifts is unappealing. An on-call rotation (shifts “lite”) may be more tolerable but can be inadequate — and stressful — in a crisis. Hiring a geographically dispersed team — whose natural work day “follows the sun” — provides a more durable solution but has its own challenges. Interviewing is tough. Training is tougher. And maintaining education and consistency across the globe is nearly impossible.

Live communication simplifies training. New support engineers learn faster with live — ideally local — mentors, experts on a wide range of relevant technologies. The team is more able to stay current on the product and tools by working collaboratively. In a traditional “follow the sun” model, the first support engineer in a new locale is doubly disadvantaged — the bulk of the team is unavailable during the work day, and there’s no local experienced team for collaboration.

At Delphix, we don’t outsource our support engineering. We do hire around the globe, and we do have an on-call schedule. We’ve also drawn inspiration from an innovative approach employed by Moneypenny, a UK-based call center. Moneypenny had resisted extending their service to off-hours because they didn’t want to incur the detrimental effects of shift work on employees’ health and attitude. They didn’t want to outsource the work because they were afraid customer satisfaction would suffer. Instead they took the novel step of opening an Auckland office — 12 hours offset — and sending employees there for 4-6 months on a voluntary basis.

I was idly listening to NPR in the car when I heard the BBC report on Moneypenny. Their customers and employees raved about the approach. It was such a simple and elegant solution to the problem of around the clock support; I pulled over to consider the implications for Delphix Support. The cost of sending a support engineer to a remote destination would be paltry compared with the negative consequences associated with other approaches to support: weak hires, inconsistent methodologies, insufficient mentorship, not to mention underserved, angry, or lost customers. And the benefits to customers and the rest of the team would again far exceed the expense.

We call it the Delphix Support “term abroad.” As with a term abroad in school, it’s an opportunity for one of our experienced support engineers to work in a foreign locale. Delphix provides lodging in a sufficiently remote timezone with the expectation of a fairly normal work schedule. As with Moneypenny, that means that Delphix is able to provide the same high level of technical support at all times of day. In addition, that temporarily remote engineer can help to build a local team by recruiting, interviewing, and mentoring.

David — the longest tenured member of the Delphix support team — recently returned from a term abroad to the UK where he joined Scott, a recent hire and UK native. Scott spent a month working with David and others at our Menlo Park headquarters. Then David joined Scott in the UK to continue his mentorship and training. Both worked cases that would have normally paged the on-call engineer. A day after arriving in the UK, in fact, David and Scott handled two cases that would have otherwise woken up an engineer based in the US.

Early results give us confidence that the term abroad is going to be a powerful and complementary tool. Delphix provides the same high quality support at all hours, while expanding globally and increasing the satisfaction of the team. And it makes Delphix Support an even more attractive place to work for those who want to opt in to a little global adventure.

Posted on June 13, 2014 by ahl · In: Delphix

The Total Cost of Unmasked Data

Data breaches make headlines at a regular cadence. Each is a surprise, but they are not, as a whole, surprising. While the extensive and sophisticated Target breach stuck in the headlines, a significant breach at three South Korean credit card companies happened around the same time. The theft of personal information for 20m subscribers didn’t involve anywhere near the same level of sophistication. Developers and contractors were simply given copies of production databases filled with personal information that they shouldn’t have been able to access.

When talking to Delphix customers and prospects, those that handle personal or sensitive information (typically financial services or health care) inevitably ask how Delphix can help with masking. Turning the question around and asking how they mask data today sucks the air out of the room. Some deflect, talking about relevant requirements and regulations; others pontificate obliquely about solutions they’ve bought; no one unabashedly claims to be fully implemented and fully compliant.

Data masking is hard to deploy consistently. I hear it from (honest) customers, and from data masking vendors. The striking attribute of the South Korean breach was that the Economist and other non-technical news sources called out unmasked data as the root cause:

“In 2012 a law was passed requiring the encryption of most companies’ databases, yet the filched data were not encoded. The contractor should never have been given access to customer records, he says; dummy data would have sufficed.”

These were non-production database copies, used for development and testing. There was no need for employees or contractors to interact with sensitive data. Indeed, those companies have a legal obligation not to keep production data in their development environments. All three credit card companies, and the credit bureau, are customers of vendors that provide masking solutions. The contractor who loaded data for 20m individuals onto a USB stick didn’t need the real data, and should never have been granted access. As with the customers I talk to, data masking surely proved too difficult to roll out in a manner that was secure and didn’t slow development, so it was relegated to shelfware.

Delphix fully automates the creation of non-production environments. It integrates with masking tools from Axis, Informatica, IBM, and others to ensure that every one of those environments is masked as a matter of mechanism rather than a manual process. What is the cost of unimplemented data masking? Obviously there are the fines and negative press, the lawsuits, and the endless mea culpas. At these credit card companies, though, literally dozens of executives resigned for failing to secure data, from all three CEOs on down. And in all likelihood, they had data masking solutions on the shelf, cast aside as too hard to implement.

Posted on February 12, 2014 by ahl · In: Delphix

The OpenZFS write throttle

In my last blog post, I wrote about the ZFS write throttle, and how we saw it lead to pathological latency variability on customer systems. Matt Ahrens, a co-creator of ZFS, and I set out to fix it in OpenZFS. While the solution we came to may seem obvious, we arrived at it only through a bit of wandering in a wide open solution space.

The ZFS write throttle was fundamentally flawed — the data indelibly supported this diagnosis. The cure was far less clear. The core problem was the throttling mechanism itself: it let many writes through quickly while stalling others nearly without bound, artificially delaying some writes ostensibly to soften the landing. Further, the mechanism relied on an accurate calculation of the backend throughput — a problem in itself, but one we’ll set aside for the moment.

On a frictionless surface in a vacuum…

Even in the most rigorously contrived, predictable cases, the old write throttle would yield high variance in the latency of writes. Consider a backend that can handle an unwavering 100MB/s (or 1GB/s or 10GB/s — pick your number). For a client with 10 threads executing 8KB async writes (again to keep it simple) to hit 100MB/s, the average latency would be around 780µs — not unreasonable.

Here’s how that scenario would play out with the old write throttle assuming full quiesced and syncing transaction groups (you may want to refer to my last blog post for a refresher on the mechanism and some of the numbers). With a target of 5 seconds to write out its contents, the currently open transaction group would be limited to 500MB. Recall that after 7/8ths of the limit is consumed, the old write throttle starts inserting a 10ms delay, so the first 437.5MB would come sailing in, say, with an average latency of 780µs, but then the remaining writes would average at least 10ms (scheduling delay could drive this even higher). With this artificially steady rate, the delay would occur 7/8ths of the way into our 5 second window, with 1/8th of the total remaining. So with 5/8ths of a second left, and an average latency of 10ms, the client would be able to write only an additional 500KB worth of data. More simply: data would flow at 100MB/s most of the time, and at less than 1MB/s the rest.

In this example the system inserted far too much delay — indeed, no delay was needed. In another scenario it could just as easily have inserted too little.

Consider a case where we would require writers to be throttled. This time, let’s say the client has 1000 threads, and — since it’s now relevant — let’s say we’re limited to the optimistic 10GbE speed of 1GB/s. In this case the client would hit the 7/8ths mark in less than a second. 1000 threads writing 8KB every 10ms still push data at 800MB/s so we’d hit the hard limit just a fraction of a second later. With the quota exhausted, all 1000 threads would then block for about 4 seconds. A backend that can do 100MB/s x 5 seconds = 500MB = 64,000 x 8KB; the latency of those 64,000 writes breaks down like this: 55,000 super fast, 8,000 at 10ms, and 1,000 at 4 seconds. Note that the throughput would be only slightly higher than in the previous example; the average latency would be roughly 100 times higher, which is about what you’d expect with 100 times as many threads.

In this example we delayed way too little, and paid the price with enormous 4 second outliers.
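
If it helps to see the arithmetic laid out, here is a small, self-contained sketch that plugs the numbers from the two scenarios above into the old throttle’s behavior as described. These are the illustrative numbers from the text; it is not actual ZFS code.

```c
/*
 * Back-of-the-envelope arithmetic for the two scenarios above,
 * using the old write throttle's behavior as described in the text.
 */
#include <stdio.h>

#define	KB	1024.0
#define	MB	(1024.0 * KB)

int
main(void)
{
	double backend = 100 * MB;	/* steady backend throughput */
	double wrsz = 8 * KB;		/* 8KB async writes */
	double txg_time = 5.0;		/* 5 second txg target */
	double limit = backend * txg_time;	/* 500MB txg size limit */
	double soft = limit * 7 / 8;		/* 437.5MB soft limit */

	/* Scenario 1: 10 threads saturating the backend. */
	double threads = 10;
	double iops = backend / wrsz;		/* 12,800 writes/s */
	double avg1 = threads / iops;		/* Little's law: ~780us */
	double fast = soft / backend;		/* ~4.375s of fast writes */
	double slow = txg_time - fast;		/* ~0.625s of 10ms delays */
	double trickle = slow / 0.010 * wrsz;	/* ~500KB dribbles in */

	printf("scenario 1: avg %.0fus; %.1fMB fast, then %.0fKB over "
	    "the last %.3fs\n", avg1 * 1e6, soft / MB, trickle / KB, slow);

	/* Scenario 2: 1000 threads; latency breakdown from the text. */
	double nwrites = limit / wrsz;		/* 64,000 writes per txg */
	double avg2 = (55000 * avg1 + 8000 * 0.010 + 1000 * 4.0) / nwrites;

	printf("scenario 2: avg %.1fms, roughly %.0fx scenario 1\n",
	    avg2 * 1e3, avg2 / avg1);
	return (0);
}
```

Running it reproduces the ~780µs average and ~500KB trickle of the first scenario, and an average around 65ms for the second, driven almost entirely by the 1,000 four-second outliers.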

How to throttle

Consistency is more important than the average. The VP of Systems at a major retailer recently told me that he’d almost always take a higher average for lower variance. Our goal for OpenZFS was to have consistent latency without raising the average (if we could improve the average, hey, so much the better). Given the total amount of work, there is a certain amount of delay we’d need to insert. The ZFS write throttle spread that delay unequally. Our job was to delay all writes a little bit rather than some a lot.

One of our first ideas was to delay according to measured throughput. As with the example above, let’s say that the measured throughput of the backend was 100MB/s. If the transaction group had been open for 500ms, and we had accumulated 55MB so far, the next write would be delayed for 50ms, enough time to reduce the average to 100MB/s.

Think of it like a diagonal line on a graph from 0MB at time zero to the maximum size (say, 500MB) at the end of the transaction group (say, 5s). As the accumulated data poked above that line, subsequent writes would be delayed accordingly. If we hit the data limit for the transaction group then writes would be delayed as before, but that should be less likely as long as we had measured the backend throughput accurately.
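
As a sketch, the delay calculation for that discarded design would have looked something like the following. The function and variable names are mine, purely for illustration.

```c
#include <stdint.h>

/*
 * Sketch of the idea we discarded: delay the next write just long
 * enough that the data accepted so far doesn't outpace the estimated
 * backend throughput (the "diagonal line" above). Illustrative only.
 */
static uint64_t
throughput_delay_ns(uint64_t accepted_bytes, uint64_t open_ns,
    uint64_t est_bytes_per_sec)
{
	/* How long the backend would need to absorb what we've accepted. */
	uint64_t budget_ns = accepted_bytes * 1000000000ULL /
	    est_bytes_per_sec;

	/* Ahead of the line: pause until we're back underneath it. */
	if (budget_ns > open_ns)
		return (budget_ns - open_ns);
	return (0);
}
```

Plugging in the example above (55MB accepted 500ms into the transaction group against a 100MB/s estimate) yields the 50ms delay.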

There were two problems with this solution. First, calculating the backend throughput accurately isn’t possible. Performance can fluctuate significantly due to the location of writes, intervening synchronous activity (e.g. reads), or even other workloads on a multitenant storage array. Second, even if we could calculate it correctly, ZFS can’t spend all its time writing user data; some time must be devoted to writing metadata and doing other housekeeping.

Size doesn’t matter

Erasing the whiteboard, we added one constraint and loosened another: don’t rely on an estimate of backend throughput, and don’t worry too much about transaction group duration.

Rather than capping transaction groups to a particular size, we would limit the amount of system memory that could be dirty (modified) at any given time. As memory filled past a certain point we would start to delay writes proportionally.
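
Here’s a minimal sketch of that shape of throttle: no delay while there’s ample headroom, then a delay that grows as dirty data approaches its cap. The threshold, cap, and scale below are placeholder values of mine; the shipping OpenZFS code has its own tunables and a more carefully chosen curve.

```c
#include <stdint.h>

#define	DIRTY_MAX	(4096ULL << 20)	/* illustrative cap on dirty data */
#define	DELAY_MIN_PCT	60		/* start delaying past 60% dirty */
#define	DELAY_SCALE_NS	500000		/* steepness of the delay curve */

static uint64_t
write_delay_ns(uint64_t dirty)
{
	uint64_t min_dirty = DIRTY_MAX * DELAY_MIN_PCT / 100;

	if (dirty <= min_dirty)
		return (0);		/* plenty of headroom: no delay */
	if (dirty >= DIRTY_MAX)
		return (UINT64_MAX);	/* at the cap: block outright */

	/*
	 * The delay grows smoothly from zero at the threshold toward
	 * infinity at the cap, so writers are slowed gradually rather
	 * than slamming into a hard limit all at once.
	 */
	return (DELAY_SCALE_NS * (dirty - min_dirty) / (DIRTY_MAX - dirty));
}
```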

OpenZFS didn’t have a mechanism to track the outstanding dirty data. Adding it was non-trivial as it required communication across the boundary between the logical (DMU) and physical (SPA) layers to smoothly retire dirty data as physical IOs completed. Given data redundancy (mirrors, RAID-Z, and ditto blocks), a logical operation can have multiple associated physical IOs. Waiting for all of them to complete would lead to lurches in the measure of outstanding dirty data. Instead, we retire a fraction of the logical size each time a physical IO completes.

By using this same metric of outstanding dirty data, we saw that we could address a seemingly unrelated but chronic problem in ZFS — so-called “picket-fencing”, the extreme burstiness of writes that ZFS issues to its disks. ZFS has a fixed number of concurrent outstanding IOs it issues to a device. Instead, the new IO scheduler issues a variable number of writes proportional to the amount of dirty data. With data coming in at a trickle, OpenZFS would trickle data to the backend, issuing one IO at a time. As the incoming data rate increased, the IO scheduler would work harder, scheduling more concurrent writes in order to keep up (up to a fixed limit). As noted above, if OpenZFS couldn’t keep up with the rate of incoming data, it would insert delays also proportional to the amount of outstanding dirty data.
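
The write-issue side can be sketched the same way: a floor of one outstanding write, a ceiling, and linear scaling in between based on how much dirty data is outstanding. Again, the limits and percentages here are placeholders of mine, not OpenZFS’s actual tunables.

```c
#include <stdint.h>

#define	WRITES_MIN	1	/* data trickles in, writes trickle out */
#define	WRITES_MAX	10	/* cap on concurrent writes per device */
#define	SCALE_LO_PCT	30	/* below this, stay at WRITES_MIN */
#define	SCALE_HI_PCT	60	/* above this, run flat out at WRITES_MAX */

static int
async_writes_active(uint64_t dirty, uint64_t dirty_max)
{
	uint64_t lo = dirty_max * SCALE_LO_PCT / 100;
	uint64_t hi = dirty_max * SCALE_HI_PCT / 100;

	if (dirty <= lo)
		return (WRITES_MIN);
	if (dirty >= hi)
		return (WRITES_MAX);

	/* Interpolate linearly between the floor and the ceiling. */
	return (WRITES_MIN +
	    (int)((dirty - lo) * (WRITES_MAX - WRITES_MIN) / (hi - lo)));
}
```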

Results

The goal was improved consistency with no increase in the average latency. The results of our tests speak for themselves (log-log scale).

Note the single-mode distribution for OpenZFS compared with the highly varied results from ZFS. You can see by the dashed lines that we managed to slightly improve the average latency (1.04ms v. 1.27ms).

OpenZFS now represents a significant improvement over ZFS with regard to consistency both of client write latency and of backend write operations. In addition, the new IO scheduler improves upon ZFS when it comes to tuning. The mysterious magic numbers and inscrutable tuneables of the old write throttle have been replaced with knobs that are comprehensible, and can be connected more directly with observed behavior. In the final post in this series, I’ll look at how to tune the OpenZFS write throttle.

Posted on February 10, 2014 by ahl · In: ZFS

ZFS fundamentals: the write throttle

It’s no small feat to build a stable, modern filesystem. The more I work with ZFS, the more impressed I am with how much it got right, and how malleable it’s proved. It has evolved to fix shortcomings and accommodate underlying technological shifts. It’s not surprising though that even while its underpinnings have withstood the test of production use, ZFS occasionally still shows the immaturity of the tween that it is.

Even before the ZFS storage appliance launched in 2008, ZFS was heavily used and discussed in the Solaris and OpenSolaris communities, the frequent subject of praise and criticism. A common grievance was that write-heavy workloads would consume massive amounts of system memory… and then render the system unusable as ZFS dutifully deposited the new data onto often anemic storage (a single spindle for many OpenSolaris users).

For workloads whose ability to generate new data far outstripped the throughput of persistent storage, it became clear that ZFS needed to impose some limits. ZFS should have effective limits on the amount of system memory devoted to “dirty” (modified) data. Transaction groups should be bounded to prevent high latency IO and administrative operations. At a high level, ZFS transaction groups are just collections of writes (transactions), and there can be three transaction groups active at any given time; for a more thorough treatment, check out last year’s installment of ZFS knowledge.
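
As a reminder of that pipeline, a transaction group moves through three states, with at most one group occupying each state at a time. The enum below is just a reminder sketch, not the actual ZFS type definitions.

```c
/*
 * Rough sketch of the transaction group pipeline described above;
 * not the actual ZFS data structures.
 */
typedef enum txg_state {
	TXG_OPEN,	/* accepting new writes (transactions) */
	TXG_QUIESCING,	/* closed to new writes; in-flight transactions
			 * are finishing up */
	TXG_SYNCING	/* being written out to persistent storage */
} txg_state_t;
```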

Write Throttle 1.0 (2008)

The proposed solution appealed to an intuitive understanding of the system. At the highest level, don’t let transaction groups grow indefinitely. When a transaction group reached a prescribed size, ZFS would create a new transaction group; if three already existed, it would block waiting for the syncing transaction group to complete. Limiting the size of each transaction group yielded a number of benefits. ZFS would no longer consume vast amounts of system memory (quelling outcry from the user community). Administrative actions that execute at transaction group boundaries would be more responsive. And synchronous, latency-sensitive operations wouldn’t have to contend with a deluge of writes from the syncing transaction group.

So how big should transaction groups be? The solution included a target duration for writing out a transaction group (5 seconds). The size of each transaction group would be based on that time target and an inferred write bandwidth. Duration times bandwidth equals target size. The inferred bandwidth would be recomputed after each transaction group.

When the size limit for a transaction group was reached, new writes would wait for the next transaction group to open. This could be nearly instantaneous if there weren’t already three transaction groups active, or it could incur a significant delay. To ameliorate this, the write throttle would insert a 10ms delay for all new writes once 7/8ths of the size had been consumed.
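
Boiled down to a sketch, the 2008 logic looked roughly like this. It’s a paraphrase of the behavior described above, not the original code; see the commit linked below for that.

```c
#include <stdint.h>
#include <stdbool.h>

#define	TXG_TARGET_SEC	5			/* target time to sync a txg */
#define	SOFT_DELAY_NS	(10 * 1000 * 1000)	/* the 10ms soft landing */

static uint64_t inferred_bw;	/* bytes/s, recomputed after each txg sync */

/*
 * Returns the delay to charge an incoming write; sets *block if the
 * write must instead wait for the next transaction group to open.
 */
static uint64_t
write_throttle_ns(uint64_t txg_bytes, bool *block)
{
	/* Duration times bandwidth equals target size. */
	uint64_t limit = inferred_bw * TXG_TARGET_SEC;

	*block = false;
	if (txg_bytes >= limit) {
		*block = true;		/* hard limit: wait for a new txg */
		return (0);
	}
	if (txg_bytes >= limit / 8 * 7)
		return (SOFT_DELAY_NS);	/* past 7/8ths: delay every write */
	return (0);
}
```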

See the gory details in the git commit.

Commentary

That initial write throttle made a comprehensible, earnest effort to address some critical problems in ZFS. And, to a degree, it succeeded. The lack of rigorous ZFS performance testing at that time, though, is reflected in the glaring deficiencies of that initial write throttle. A simple logic bug lingered for over two months, causing all writes to be delayed by 10ms, not just those executed after the transaction group had reached 7/8ths of its target capacity — trivial, yes, but debilitating and telling. The computation of the write throttle resulted in values that varied rapidly; eventually a slapdash effort at hysteresis was added.

Stepping back, the magic constants arouse concern. Why should transaction groups last 5 seconds? Yes, they should be large enough to amortize metadata updates within a transaction group, and they should not be so large that they cause administrative unresponsiveness. For the ZFS storage appliance we experimented with lower values in an effort to smooth out the periodic bursts of writes — an effect we refer to as “picket-fencing” for its appearance in our IO visualization interface. Even more glaring, where did the 7/8ths cutoff come from or the 10ms worth of delay? Even if the computed throughput was dead accurate, the algorithm would lead to ZFS unnecessarily delaying writes. At first blush, this scheme was not fatally flawed, but surely arbitrary, disconnected from real results, and nearly impossible to reason about on a complex system.

Problems

The write throttle demonstrated problems more severe than the widely observed picket-fencing. While ZFS attempted to build a stable estimate of write throughput capacity, the computed number would, in practice, swing wildly. As a result, ZFS would variously over-throttle and under-throttle. It would often insert the 10ms delay, but that delay was intended merely as a softer landing than the hard limit. Once reached, the hard limit — still the primary throttling mechanism — could impose delays well in excess of a second.

The graph below shows the frequency (count) and total contribution (time) for power-of-two IO latencies from a production system.

The latency frequencies clearly show a tri-modal distribution: writes that happen at the speed of software (much less than 1ms), writes that are delayed by the write throttle (tens of milliseconds), and writes that bump up against the transaction group size (hundreds of milliseconds up to multiple seconds).

The total accumulated time for each latency bucket highlights the dramatic impact of outliers. The 110 operations taking a second or longer contribute more to the overall elapsed time than the time of the remaining 16,000+ operations.

A new focus

The first attempt at the write throttle addressed a critical need, but was guided by the need to patch a hole rather than an understanding of the fundamental problem. The rate at which ZFS can move data to persistent storage will vary for a number of reasons: synchronous operations will consume bandwidth; not all writes impact storage in the same way — scattered writes to areas of high fragmentation may be slower than sequential writes. Regardless of the real, instantaneous throughput capacity, ZFS needs to pass on the effective cost — as measured in write latency — to the client. Write throttle 1.0 carved this cost into three tranches: writes early in a transaction group that pay nothing, those late in a transaction group that pay 10ms each, and those at the end that pick up the remainder of the bill.

If the rate of incoming data was less than the throughput capacity of persistent storage the client should be charged nothing — no delay should be inserted. The write throttle failed by that standard as well, delaying 10ms in situations that warranted no surcharge.

Ideally ZFS should throttle writes in a way that optimizes for minimized and consistent latency. As we developed a new write throttle, our objectives were low variance for write latency, and steady and consistent (rather than bursty) writes to persistent storage. In my next post I’ll describe the solution that Matt Ahrens and I designed for OpenZFS.

Posted on December 27, 2013 by ahl · In: ZFS

OpenZFS: the next phase of ZFS development

I’ve been watching ZFS from moments after its inception at the hands of Matt Ahrens and Jeff Bonwick, so I’m excited to see it enter its newest phase of development in OpenZFS. While ZFS has long been regarded as the hottest filesystem on 128 bits, and has shipped in many different products, what’s been most impressive to me about ZFS development has been the constant iteration and reinvention.

Before shipping in Solaris 10 update 2, major components of ZFS had already advanced to “2.0” and “3.0”. I’ve been involved with several ZFS-related products: Solaris 10, the ZFS Storage Appliance (nee Sun Storage 7000), and the Delphix Engine. Each new product and each new use has stressed ZFS in new ways, but also brought renewed focus to development. I’ve come to realize that ZFS will never be completed. I thought I’d use this post to cover the many ways that ZFS had failed in the products I’ve worked on over the years — and it has failed spectacularly at times — but that would have distracted from the most important aspect of ZFS. For each new failure, in each new product, with each new use and each new workload, ZFS has adapted and improved.

OpenZFS doesn’t need a caretaker community for a finished project; if that were the case, porting OpenZFS to Linux, FreeBSD, and Mac OS X would have been the end. Instead, it was the beginning. The need for the OpenZFS community grew out of those porting efforts — people who wanted the world’s most advanced filesystem on their platforms and in their products. I wouldn’t trust my customers’ data to a filesystem that hadn’t been through those trials and triumphs over more than a decade. I can’t wait to see the next phase of evolution that OpenZFS brings.

 

If you’re at LinuxCon today, stop by the talk by Matt Ahrens and Brian Behlendorf for more on OpenZFS; follow @OpenZFS for all OpenZFS news.

Posted on September 17, 2013 by ahl · In: ZFS

Delphix plus three years

Today marks my third anniversary of joining Delphix. Joining a startup, I knew there would be lots to learn — indeed there’s been a lesson nearly every day. Here are my top three lessons from my first three years at a startup. Even if the points themselves should have been obvious to me, the degree of their impact certainly wasn’t.

3. Tar-Babies are everywhere

Generalists thrive at a startup — there are many more tasks and problems than there are people. The things you touch inevitably stick to you, for better or for worse. Early on I was the DTrace guy, and the OS guy, and the debugging guy. Later on I became the performance go-to, and upgrade guy, and (proudly) the git neck beard. But I was also lab manager, and the cabler (running cat 6 when we moved offices, and frenetically stripping wires with my teeth as we discovered a second wiring closet), and the real estate guy. When I got approval to open a San Francisco office I asked about the next steps — “figure it out”. And so it goes for pretty much everything that happens at a startup. At big companies roles are subdivided and specialists own their small domains. During my time at Sun I didn’t think about many of the things those people did: they seemingly just happened. Being at a startup makes you intimately aware of all the excruciating minutiae that make a company go.

The more you do the more you own. The answer is not to avoid doing work that needs to be done, but to actively and aggressively recruit people to take over tasks. The stickiness was surprising, and the need to offload can be uncomfortable. But you’re asking people to take on tasks — important, trivial, or unglamorous — so you can take on some additional load.

Ownership is important, but you need to share the load or else these tar babies will drag you down.

2. Hiring über alles

It’s not surprising that it’s all about the people. The right people and the right culture are the hardest problems to solve. It was surprising how much work it is to recruit and hire the best folks. We have a great team at Delphix and I love working with them. The big lesson for me was that hiring is far and away the highest leverage thing you can do for your startup. Hiring great people, great engineers, is hard and time consuming. I can’t count the number of coffees and beers I’ve had for Delphix — “first dates” with prospective hires.

College recruiting had been an important focus for me during my years at Sun. I had only been at Delphix a few weeks when I convinced our CEO that we should recruit from colleges, and left to interview at my alma mater, Brown University. Delphix had been more conservative on hiring; some people regarded college recruiting as a bit of a flier, but it has paid off in a huge way. Our first college hire, two years in, is now a clear engineering leader. We’ve expanded the program from just me going to just one school to a dozen engineers going to ten schools this fall. About a quarter of our development team joined Delphix straight out of college. The initial effort is high, and you then have to wait 6-9 months for them to show up. But done right it can be the most efficient way to hire great engineers.

Work your ass off to hire great people; they’ll repay you for the time you’ve spent. If you don’t feel like hiring is a major time suck you’re probably doing it wrong.

1. Everything is your fault

The greatest blessing for a startup is also the greatest curse: having customers. Once people are paying for and using your product they’ll find the problems with it and you’ll work all hours to fix them. The surprise was how many problems became Delphix problems. Our product has its tendrils throughout the datacenter. We touch databases, operating systems, systems, hypervisors, networks, SANs and storage. Any butterfly flapping its wings in the datacenter can result in a hurricane on Delphix. As both the new component and the new vendor, we now expect and encourage our customers to talk to us early when diagnosing a problem. Our excellent support team (see hiring above) has diagnosed problems as diverse as poor network performance from a damaged cat 6 cable, over-provisioned ESX servers, and misconfigured init.ora files. Obviously they’ve also addressed problems in our own software. But we always take an expansive view of the Delphix solution and work with the customer to chase the problem wherever it leads us.

This realization has also informed the way we build our software. We not only build our software to be resilient to abnormalities and to detect and report problems, but we also use tools to find problems early. A couple of years ago customers might connect Delphix to poor-performing storage — but that would just look like Delphix performing poorly. Now we run a series of storage benchmarks during every installation and report a grade. We build paranoia into our software and into our sales and support processes.

As a startup it’s even more crucial to insulate ourselves against environmental problems, build facilities to detect problems everywhere, and own the total solution with customers.

Starting year four

I hope others can benefit from those lessons; it took me a while to fully realize them. I’m sure there will be many more as I start year four at Delphix. Leave a comment and tell me about your own startup successes, failures, and lessons learned.

Posted on September 13, 2013 by ahl · In: Delphix

Topics in post-mortem debugging

A couple of weeks ago, Joyent hosted A Midsummer Night’s Systems meetup, a fun event with talks ranging from Node.js fatwas to big data for Mario Kart 64. My colleague Jeremy Jones had recently done some amazing work, perfect for the meetup, but with his first child less than a day old, Jeremy allowed me to present in his stead. In this short video (16 minutes) I talk about Jeremy’s investigation of a nasty bug that necessitated the creation of two awesome post-mortem tools. The first is what I call jdump, a Volatility plugin that takes a VMware snapshot and produces an illumos kernel crash dump. The second is ::gcore, an mdb command that can extract a fully functioning core file from a kernel crash dump. Together, they let us at Delphix scoop up all the state we’d need for a diagnosis with minimal interruption even when there’s no hard failure. Jeremy’s tools are close to magic, and without them the problem was close to undebuggable.

Thanks, Jeremy, for letting me present your great work. And thanks to Deirdre Straughan and Joyent for the great event!

Posted on August 22, 2013 by ahl · In: illumos

Delphix and Flash

I started working with flash in 2006 — fortunate timing as flash was just starting to take hold in the enterprise. I began asking the customers I’d visit about flash. I’ll always remember the response from an early adopter when I asked how he planned on using the new, expensive storage: “We just bought it, and we have no idea.” It was a solution in search of a problem — the garbage can model at play.

Flash has evolved significantly since then from a raw material used on its own to a component in systems of increasing complexity. I wrote recently about the various techniques being employed to get the most out of flash; all share the basic idea of trading compute and IOPS (abundant commodities) for capacity (still more expensive for flash than hard drives). The ideal use cases are the ones that benefit most from that trade-off, ones where compression and dedup consume cheap compute cycles rather than expensive space on the NAND flash. Flash storage is best with data that contains high degrees of redundancy that clever software can squeeze out. With those loose criteria, it’s been amazing to me how flash storage vendors have flocked to the VDI use case. It’s certainly well-suited — big on IOPS with nearly identical data from different Windows installs that’s easily compressed and deduped — but seemingly every flash vendor has decided that it’s one part — if not the part — of the market they want to address. Take a look at the information on VDI from various flash storage vendors: Fusion, Nimble, Pure Storage, Tegile, Tintri, Violin, Virident, Whiptail — the list goes on and on.

I worked extensively with flash until 2010, when I left Oracle for a startup. I ended up not sticking with flash precisely because it was — and is — such a crowded space. I’d happily bet on the space, but it was harder to pick one winner. One of the things that drew me to Delphix, though, was precisely its compatibility with flash. At Delphix we create virtual database copies by sharing blocks; think of it as dedup before the fact, or dedup without the runtime tax. Creating a virtual copy happens almost instantaneously, saving tremendous amounts of administration time, unblocking developers, and accelerating projects — hence our credo of agile data. Unlike storage-based snapshots, Delphix virtual copies are database aware, and provisioning is fully integrated and automated. Those virtual copies also take up much less physical space, but with as many or more IOPS hitting that smaller aggregate. Sound familiar yet? One tenth the capacity with the same workload — let’s call it 10x greater IOPS intensity — is ideally suited for flash storage.

Flash storage is best when clever software can squeeze out redundancies; Delphix is that clever software for databases. Delphix customers are starting to combine our product with their flash storage purchases. An all-flash array that’s 5x the $/TB as disk storage suddenly becomes half the price of disk storage when combined with Delphix — with substantially better performance. We as an industry still haven’t realized the full potential of flash storage. Agile data through Delphix fills in another big piece of the flash picture.

Posted on May 6, 2013 by ahl · In: Delphix, Flash

On Systems Software

A prospective new college hire recently related an odd comment from his professor: systems programming is dead. I was nonplussed; what could the professor have meant? Systems is clearly very much alive. Interesting and important projects march under the banner of systems. But as I tried to construct a less emotional rebuttal, I realized I lacked a crisp definition of what systems programming is.

Wikipedia defines systems software in the narrowest terms: the stuff that interacts with hardware. But that covers a tiny fraction of modern systems. So what is systems software? It depends on when you’re asking the question. At one time, the web server was the application; now it’s the systems software on which many web-facing applications are built. At one time a database was the application; now it’s systems software that supports a variety of custom and off-the-shelf applications. Before my time, shells were probably considered a bleeding edge application; now they’re systems software on which some of the lowest-level plumbing of modern operating systems is built.

Any layer on which people build applications of increasing complexity is systems software. Most software that endures the transition to systems software does so whether its authors intended it or not. People in the software industry often talk about standing on the shoulders of giants; the systems software accumulated and refined over decades makes up those giants.

Stable interfaces define systems software. The programs that consume those interfaces expect the underlying systems software to be perfect every time. Initially innovation might happen in the interfaces themselves — the concurrency model of Node.js is a great example. As software matures, the interfaces become commodified; innovation happens behind those stable interfaces. Systems is only “dead” at its edges. Interfaces might be flexible and well-designed, or sclerotic and poorly designed. Regardless, new or improved systems software can increase performance, enhance observability, or simply fit a different economic niche.

There are a few different types of systems software. First there’s supporting systems software, systems software written as necessary foundation for some new application. This is systems software written with a purpose and designed to solve an unsolved — or poorly solved — problem. Chronologically, examples include UNIX, compilers, and libraries like jQuery. You write it because you need it, but it’s solving a problem that’s likely not unique to your particular application.

Then there’s accidental systems software. Stick everything from Apache to Excel to the Bourne shell in that category. These didn’t necessarily set out to be the foundation on which increasingly complex software would be written, but they definitely are. I’m sure there were times when indoctrination into systems-hood was painful, where the authors wanted to change interfaces, but good systems software respects its consumers and carries them forward. Somewhat famously, make preserved its arcane syntax because two consumers already existed. JavaScript started as a glorified web design tool; now it sits several layers beneath complex client-side applications. Even more recently, developers of Node.js (itself JavaScript-based) changed a commonly used interface that broke many applications. Historical mistakes can be annoying to live with, but — as the Node.js team determined — compatibility trumps cleanliness.

The largest bucket is replacement systems software. Linux, Java, ZFS, and DTrace fall into this category. At the time of their development, each was a notionally compatible replacement for something that already existed. Linux, of course, reimplemented the UNIX design to provide a free, compatible alternative. Java set about building a better runtime (the stable interface being a binary provided to customers to execute) designed to abstract away the operating system entirely. ZFS represented a completely new way of thinking about filesystems, but it did so within the tight constraints of POSIX interfaces and storage hardware. DTrace added new and unique observability to most of the stable interfaces that applications build on.

Finally, there’s intentional systems software. This is systems software by design, but unlike supporting systems software, there’s no consumer. Intentional systems software takes an “if you build it, they will come” approach. This is highly prone to failure — without an existence proof that your software solves a problem and exposes the right interfaces, it’s very difficult to know if you’re building the right thing.

Why define these categories? Knowing which you’re working with can inform your decisions. If you’ve written accidental systems software that has had systems-ness thrust upon it, realize that future versions need to respect the consumers — or willfully cast them aside. When writing replacement systems software, recognize the constraints on the system, and know exactly where you’re constrained and where you can innovate (or consider whether you’d be better off just using the existing solution). If you’ve written supporting systems software, know that others will inevitably need solutions to the same problems. Either invest in maintaining it and keeping it best of breed; resign yourself to the fact that it will need to be replaced as others invest in a different solution; or open source it and hope (or advocate) that it becomes that ubiquitous solution.

TL;DR?

What’s systems software? It is the increasingly complex, increasingly capable, increasingly diverse foundation on which applications are built. It’s that long and growing tail of the corpus of software at large. The interfaces might be static, but it’s a rich venue for innovation. As more and more critical applications build on an interface, the more value there is in improving the systems software beneath it. Systems software is defined by the constraints; it’s a mission and a mindset with unique challenges, and unique potential.

Posted on February 25, 2013 by ahl · In: Software