The sudden death and eternal life of Solaris

As had been rumored for a while, Oracle effectively killed Solaris on Friday. When I first saw this, I had assumed that this was merely a deep cut, but in talking to Solaris engineers still at Oracle, it is clearly much more than that. It is a cut so deep as to be fatal: the core Solaris engineering organization lost on the order of 90% of its people, including essentially all management.

Of note, among the engineers I have spoken with, I heard two things repeatedly: “this is the end” and (from those who managed to survive Friday) “I wish I had been laid off.” Gone is any of the optimism (however tepid) that I have heard over the years — and embarrassed apologies for Oracle’s behavior have been replaced with dismay about the clumsiness, ineptitude and callousness with which this final cut was handled. In particular, that employees who had given their careers to the company were told of their termination via a pre-recorded call — “robo-RIF’d” in the words of one employee — is both despicable and cowardly. To their credit, the engineers affected saw themselves as Sun to the end: they stayed to solve hard, interesting problems and out of allegiance to one another — not out of any loyalty to the broader Oracle. Oracle didn’t deserve them and now it doesn’t have them — they have been liberated, if in a depraved act of corporate violence.

Assuming that this is indeed the end of Solaris (and it certainly looks that way), it offers a time for reflection. Certainly, the demise of Solaris is at one level not surprising, but on the other hand, its very suddenness highlights the degree to which proprietary software can suffer by the vicissitudes of corporate capriciousness. Vulnerable to executive whims, shareholder demands, and a fickle public, organizations can simply change direction by fiat. And because — in the words of the late, great Roger Faulkner — “it is easier to destroy than to create,” these changes in direction can have lasting effect when they mean stopping (or even suspending!) work on a project. Indeed, any engineer in any domain with sufficient longevity will have one (or many!) stories of exciting projects being cancelled by foolhardy and myopic management. For software, though, these cancellations can be particularly gutting because (in the proprietary world, anyway) so many of the details of software are carefully hidden from the users of the product — and much of the innovation of a cancelled software project will likely die with the project, living only in the oral tradition of the engineers who knew it. Worse, in the long run — to paraphrase Keynes — proprietary software projects are all dead. However ubiquitous at their height, this lonely fate awaits all proprietary software.

There is, of course, another way — and befitting its idiosyncratic life and death, Solaris shows us this path too: software can be open source. In stark contrast to proprietary software, open source does not — cannot, even — die. Yes, it can be disused or rusty or fusty, but as long as anyone is interested in it at all, it lives and breathes. Even should the interest wane to nothing, open source software survives still: its life as machine may be suspended, but it becomes as literature, waiting to be discovered by a future generation. That is, while proprietary software can die in an instant, open source software perpetually endures by its nature — and thrives by the strength of its communities. Just as the existence of proprietary software can be surprisingly brittle, open source communities can be crazily robust: they can survive neglect, derision, dissent — even sabotage.

In this regard, I speak from experience: from when Solaris was open sourced in 2005, the OpenSolaris community survived all of these things. By the time Oracle bought Sun five years later in 2010, the community had decided that it needed true independence — illumos was born. And, it turns out, illumos was born at exactly the right moment: shortly after illumos was announced, Oracle — in what remains to me a singularly loathsome and cowardly act — silently re-proprietarized Solaris on August 13, 2010. We in illumos were indisputably on our own, and while many outsiders gave us no chance of survival, we ourselves had reason for confidence: after all, open source communities are robust because they are often united not only by circumstance, but by values, and in our case, we as a community never lost our belief in ZFS, Zones, DTrace and myriad other technologies like MDB, FMA and Crossbow.

Indeed, since 2010, illumos has thrived; illumos is not only the repository of record for technologies that have become cross-platform like OpenZFS, but we have also advanced our core technologies considerably, while still maintaining the highest standards of quality. Learning from some of the mistakes of OpenSolaris, we have a model that allows for downstream innovation, experimentation and differentiation. For example, Joyent’s SmartOS has always been focused on our need for a cloud hypervisor (causing us to develop big features like hardware virtualization and Linux binary compatibility), and it is now at the heart of a massive buildout for Samsung (who acquired Joyent a little over a year ago). For us at Joyent, the Solaris/illumos/SmartOS saga has been formative in that we have seen both the ill effects of proprietary software and the amazing resilience of open source software — and it very much informed our decision to open source our entire stack in 2014.

Judging merely by its tombstone, the life of Solaris can be viewed as tragic: born out of wedlock between Sun and AT&T and dying at the hands of a remorseless corporate sociopath a quarter century later. And even that may be overstating its longevity: Solaris may not have been truly born until it was made open source, and — certainly to me, anyway — it died the moment it was again made proprietary. But in that shorter life, Solaris achieved the singular: immortality for its revolutionary technologies. So while we can mourn the loss of the proprietary embodiment of Solaris (and we can certainly lament the coarse way in which its technologists were treated!), we can rejoice in the eternal life of its technologies — in illumos and beyond!

Posted on September 4, 2017 at 12:30 pm by bmc

Reflections on Systems We Love

Last Tuesday, several months of preparation came to fruition in the inaugural Systems We Love. You never know what’s going to happen the first time you get a new kind of conference together (especially one as broad as this one!) but it was, in a word, amazing. The content was absolutely outstanding, with attendee after attendee praising the uniformly high quality. (For guided tours, check out both Ozan Onay’s excellent exegesis and David Cassel’s thorough New Stack story — and don’t miss Sarah Huffman’s incredible illustrations!) It was such a great conference that many were asking about when we would do it again — and there is already interest in replicating it elsewhere. As an engineer, this makes me slightly nervous as I believe that success often teaches you nothing: luck becomes difficult to differentiate from design. But at the risk of taunting the conference gods with the arrogance of a puny mortal, here’s some stuff I do think we did right:

Okay, so that’s a pretty long list of things that worked; what didn’t work so well? I would say that there was basically only a single issue: the packed schedule. We had 19 (!!) 20-minute talks, and there simply wasn’t time for the length or quantity of breaks that one might like. I think it worked out better than it sounds like it would (thanks to our excellent and varied presenters!), but it was nonetheless exhausting and I think everyone would have appreciated at least one more break. Still, there were essentially no complaints about the number of presentations, so we wouldn’t want to overshoot by slimming down too much; perhaps the optimal number is 16 talks spread over four sessions of four talks apiece?

So where to go from here? We know now that there is a ton of demand and a bunch of great content to match (I’m still bummed about the terrific submissions we turned away!), so we know that we can (and will) easily have this be an annual event. But it seems like we can do more: maybe an event on the east coast? Perhaps one in Europe? Maybe as a series of meetups in the style of Papers We Love? There are a lot of possibilities, so please let us know what you’d like to see!

Finally, I would like to reflect on the most personally satisfying bit of Systems We Love: simply by bringing so many like-minded people together in the same room and having them get to know one another, we know that lives have been changed; new connections have been made, new opportunities have been found, and new journeys have begun. We knew that this would happen in the abstract, but in recent days, we have seen it ourselves: in the new year, you will see new faces on the Joyent engineering team that we met at Systems We Love. (If it needs to be said, the love of systems is a unifying force across Joyent; if you find yourself captivated by the content and you’re contemplating a career change, we’re hiring!) Like most (if not all) of us, the direction of my life has been significantly changed by meeting or hearing the right person at the right moment; that we have helped facilitate similar changes in our own small way is intensely gratifying — and is very much at the heart of what Systems We Love is about!

Posted on December 21, 2016 at 12:16 pm by bmc

Submitting to Systems We Love

We’ve been overwhelmed by the positive response to Systems We Love! As simple as this concept is, Systems We Love — like Papers We Love, !!Con and others that inspired it — has tapped into a current of enthusiasm. Adam Leventhal captured this zeitgeist in a Hacker News comment:

What catches our collective attention are systems we hate, systems that suck, systems that fail — or systems too new to know. It’s refreshing to consider systems established and clever enough to love. There are wheels we don’t need to reinvent, systems that can teach us.

Are you tantalized by Systems We Love but you don’t know what proposal to submit? For those looking for proposal guidance, my advice is simple: find the love. Just as every presentation title at !!Con must assert its enthusiasm by ending with two bangs, you can think of every talk at Systems We Love as beginning with an implicit “Why I love…” So instead of a lecture on, say, the innards of ZFS (and well you may love ZFS!), pick an angle on ZFS that you particularly love. Why do you love it or what do you love about it? Keep it personal: this isn’t about asserting the dominance of one system — this is about you and a system (or an aspect of a system) that you love.

Now, what if you don’t think you love anything at all? Especially if you write software for a living and you’ve been at it for a while, it can be easy to lose the love in the sluice of quotidian sewage that is a deployed system. But I would assert that beneath any sedimented cynicism there must be a core of love: think back to when you were first discovering software systems as your calling and to your initial awe at learning how much more complicated these systems are than you realized (what a colleague of mine once called “the miracle of boot”) — surely there is something in that awe from which you draw (or at least, drew) inspiration! I acknowledge that this is the exception rather than the rule — that it feels like we are more often disappointed than pleasantly surprised — but this is the nature of the job: our work as software engineers takes us to the boundaries of systems that are emerging or otherwise don’t work properly rather than into the beautiful caverns deep below the surface. To phrase this in terms of an old essay of mine, we spend our time in systems that are grimy or fetid rather than immaculate — but Systems We Love is about the inspiration that we derive from those immaculate systems (or at least their immaculate aspects).

Finally, don’t set the bar too high for yourself: we are bound to have a complicated relationship with any system with which we spend significant time, and just because you love one aspect of a system doesn’t mean that other parts don’t enrage, troll or depress you! So just remember it’s not Systems We Know, Systems We Invented or Systems We Worship — it’s Systems We Love and we hope to see you there!

Posted on September 30, 2016 at 2:57 pm by bmc

Systems We Love

One of the exciting trends of the past few years is the emergence of Papers We Love. I have long been an advocate of journal clubs, but I have also found that discussion groups can stagnate when confined to a fixed group or a single domain; by broadening the participants and encouraging presenters to select papers that appeal to the heart as well as the head, Papers We Love has developed a singular verve. Speaking personally, I have enjoyed the meetups that I have attended — and I was honored to be given the opportunity to present on Jails and Zones at Papers We Love NYC (for which, it must be said, I was flattered by the best introduction of all time). I found the crowd that gathered to be engaged and invigorating — and thought-provoking conversation went well into the night.

The energy felt at Papers We Love is in stark contrast to the academic venues in which computer science papers are traditionally presented — a contrast that I accentuated in a candid keynote at the USENIX Annual Technical Conference, pointing to PWL as a model that is much more amenable to cross-pollination of ideas between academics and practitioners. My keynote was fiery, and it may have landed on dry tinder: if Rik Farrow’s excellent summary of my talk is any indicator, the time is right for a broader conversation about how we publish rigorous work.

But for us practitioners, however well they are discussed, academic work remains somewhat ancillary: while papers are invaluable as a mechanism for the rigorous presentation of thinking, it is ultimately the artifacts that we develop — the systems themselves — that represent the tangible embodiment of our ideas. And for the systems that I am personally engaged in, I have found that getting together to discuss them is inspiring and fruitful, e.g. the quadrennial dtrace.conf or the more regular OpenZFS developer summit. My experiences with Papers We Love and with these system-specific meetings caused me to ask on a whim if there would be interest in a one-day, one-track conference that tried to capture the PWL zeitgeist but for systems — a “Systems We Love.”

While I had thrown this idea out somewhat casually, the response was too clear to ignore: there was most definitely interest — to the point of expectation that it would happen! And here at Joyent, a company for which love of systems is practically an organizing principle, the interest quickly boiled into a frothy fervor; we couldn’t not do this!

It took a little while to get the logistics down, but I’m very happy to report that Systems We Love is on: December 13th in San Francisco! To determine the program, I am honored to be joined by an extraordinary program committee: hailing from a wide range of backgrounds, experience levels, and interests — and united by a shared love of systems. So: the call for proposals is open — and if you have a love of systems, we hope that you will consider submitting a proposal and/or joining us on December 13th!

Posted on September 26, 2016 at 2:55 pm by bmc

Hacked by a bug?

Early this afternoon, I had just recorded a wide-ranging episode of Arrested DevOps with the incomparable Bridget Kromhout and noticed that I had a flurry of Twitter mentions, all in reaction to this tweet of mine. There was just one problem: I didn’t tweet it. With my account obviously hacked, I went into fight-or-flight mode and (thanks in no small part to Bridget’s calm presence) did the obvious things: I changed my Twitter password, revoked the privileges of all applications, and tried to assess the damage…

Other than the tweet, I (thankfully!) didn’t see any obvious additional damage: no crazy DMs or random follows or unfollows. In terms of figuring out where the malicious tweet had come from, the source of the tweet was “Twitter for Android” — but according to my login history, the last Twitter for Android login was from me during my morning commute about two-and-a-half hours before the tweet. (And according to Twitter, I have only used the one device to access my account.) The only intervening logins were two from Quora about an hour prior to the tweet. (Aside: WTF, Quora?! Revoked!)

Then there was the oddity of the tweet itself. There was no caption — just the two images from what I gathered to be Germany. Looking at the raw tweet, however, cleared up its source:

{
  "created_at": "Mon Sep 12 17:56:31 +0000 2016",
  "id": 775392664602554400,
  "id_str": "775392664602554369",
  "text": "https://t.co/pYKRhaAdvC",
  "truncated": false,
  "entities": {
    "hashtags": [],
    "symbols": [],
    "user_mentions": [],
    "urls": [],
    "media": [
      {
        "id": 775378240244449300,
        "id_str": "775378240244449280",
        "indices": [
          0,
          23
        ],
        "media_url": "http://pbs.twimg.com/media/CsKyZsBWgAAHgVq.jpg",
        "media_url_https": "https://pbs.twimg.com/media/CsKyZsBWgAAHgVq.jpg",
        "url": "https://t.co/pYKRhaAdvC",
        "display_url": "pic.twitter.com/pYKRhaAdvC",
        "expanded_url": "https://twitter.com/MattAndersonBBC/status/775378264772775936/photo/1",
        "type": "photo",
        "sizes": {
          "medium": {
            "w": 1200,
            "h": 1200,
            "resize": "fit"
          },
          "large": {
            "w": 2048,
            "h": 2048,
            "resize": "fit"
          },
          "thumb": {
            "w": 150,
            "h": 150,
            "resize": "crop"
          },
          "small": {
            "w": 680,
            "h": 680,
            "resize": "fit"
          }
        },
        "source_status_id": 775378264772776000,
        "source_status_id_str": "775378264772775936",
        "source_user_id": 1193503572,
        "source_user_id_str": "1193503572"
      }
    ]
  },
  "extended_entities": {
    "media": [
      {
        "id": 775378240244449300,
        "id_str": "775378240244449280",
        "indices": [
          0,
          23
        ],
        "media_url": "http://pbs.twimg.com/media/CsKyZsBWgAAHgVq.jpg",
        "media_url_https": "https://pbs.twimg.com/media/CsKyZsBWgAAHgVq.jpg",
        "url": "https://t.co/pYKRhaAdvC",
        "display_url": "pic.twitter.com/pYKRhaAdvC",
        "expanded_url": "https://twitter.com/MattAndersonBBC/status/775378264772775936/photo/1",
        "type": "photo",
        "sizes": {
          "medium": {
            "w": 1200,
            "h": 1200,
            "resize": "fit"
          },
          "large": {
            "w": 2048,
            "h": 2048,
            "resize": "fit"
          },
          "thumb": {
            "w": 150,
            "h": 150,
            "resize": "crop"
          },
          "small": {
            "w": 680,
            "h": 680,
            "resize": "fit"
          }
        },
        "source_status_id": 775378264772776000,
        "source_status_id_str": "775378264772775936",
        "source_user_id": 1193503572,
        "source_user_id_str": "1193503572"
      },
      {
        "id": 775378240248614900,
        "id_str": "775378240248614912",
        "indices": [
          0,
          23
        ],
        "media_url": "http://pbs.twimg.com/media/CsKyZsCWEAA4oOp.jpg",
        "media_url_https": "https://pbs.twimg.com/media/CsKyZsCWEAA4oOp.jpg",
        "url": "https://t.co/pYKRhaAdvC",
        "display_url": "pic.twitter.com/pYKRhaAdvC",
        "expanded_url": "https://twitter.com/MattAndersonBBC/status/775378264772775936/photo/1",
        "type": "photo",
        "sizes": {
          "small": {
            "w": 680,
            "h": 680,
            "resize": "fit"
          },
          "thumb": {
            "w": 150,
            "h": 150,
            "resize": "crop"
          },
          "medium": {
            "w": 1200,
            "h": 1200,
            "resize": "fit"
          },
          "large": {
            "w": 2048,
            "h": 2048,
            "resize": "fit"
          }
        },
        "source_status_id": 775378264772776000,
        "source_status_id_str": "775378264772775936",
        "source_user_id": 1193503572,
        "source_user_id_str": "1193503572"
      }
    ]
  },
  "source": "Twitter for Android",
  "in_reply_to_status_id": null,
  "in_reply_to_status_id_str": null,
  "in_reply_to_user_id": null,
  "in_reply_to_user_id_str": null,
  "in_reply_to_screen_name": null,
  "user": {
    "id": 173630577,
    "id_str": "173630577",
    "name": "Bryan Cantrill",
    "screen_name": "bcantrill",
    "location": "",
    "description": "Nom de guerre: Colonel Data Corruption",
    "url": "http://t.co/VyAyIJP8vR",
    "entities": {
      "url": {
        "urls": [
          {
            "url": "http://t.co/VyAyIJP8vR",
            "expanded_url": "http://dtrace.org/blogs/bmc",
            "display_url": "dtrace.org/blogs/bmc",
            "indices": [
              0,
              22
            ]
          }
        ]
      },
      "description": {
        "urls": []
      }
    },
    "protected": false,
    "followers_count": 10407,
    "friends_count": 1557,
    "listed_count": 434,
    "created_at": "Sun Aug 01 23:51:44 +0000 2010",
    "favourites_count": 2431,
    "utc_offset": -25200,
    "time_zone": "Pacific Time (US & Canada)",
    "geo_enabled": true,
    "verified": false,
    "statuses_count": 4808,
    "lang": "en",
    "contributors_enabled": false,
    "is_translator": false,
    "is_translation_enabled": false,
    "profile_background_color": "C0DEED",
    "profile_background_image_url": "http://abs.twimg.com/images/themes/theme1/bg.png",
    "profile_background_image_url_https": "https://abs.twimg.com/images/themes/theme1/bg.png",
    "profile_background_tile": false,
    "profile_image_url": "http://pbs.twimg.com/profile_images/618537697670397952/gW9iQsvF_normal.jpg",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/618537697670397952/gW9iQsvF_normal.jpg",
    "profile_link_color": "0084B4",
    "profile_sidebar_border_color": "C0DEED",
    "profile_sidebar_fill_color": "DDEEF6",
    "profile_text_color": "333333",
    "profile_use_background_image": true,
    "has_extended_profile": false,
    "default_profile": true,
    "default_profile_image": false,
    "following": false,
    "follow_request_sent": false,
    "notifications": false
  },
  "geo": null,
  "coordinates": null,
  "place": {
    "id": "5a110d312052166f",
    "url": "https://api.twitter.com/1.1/geo/id/5a110d312052166f.json",
    "place_type": "city",
    "name": "San Francisco",
    "full_name": "San Francisco, CA",
    "country_code": "US",
    "country": "United States",
    "contained_within": [],
    "bounding_box": {
      "type": "Polygon",
      "coordinates": [
        [
          [
            -122.514926,
            37.708075
          ],
          [
            -122.357031,
            37.708075
          ],
          [
            -122.357031,
            37.833238
          ],
          [
            -122.514926,
            37.833238
          ]
        ]
      ]
    },
    "attributes": {}
  },
  "contributors": null,
  "is_quote_status": false,
  "retweet_count": 2,
  "favorite_count": 9,
  "favorited": false,
  "retweeted": false,
  "possibly_sensitive": false,
  "possibly_sensitive_appealable": false,
  "lang": "und"
}

Note in particular that the media has a source_status_id_str of 775378264772775936; it’s from this tweet roughly an hour before mine from Matt Anderson, the BBC Culture editor who (I gather) is Berlin-based.

Why would someone who had just hacked my account burn it by tweeting an innocuous (if idiosyncratic) photo of campaign posters on the streets of Berlin?! Suddenly this is feeling less like I’ve been hacked, and more like I’m the victim of data corruption.

Some questions I have, that I don’t know enough about the Twitter API to answer: first, how are tweets created that refer to media entities from other tweets? i.e., is there something about that tweet that can give a better clue as to how it was generated? Does the fact that it’s geolocated to San Francisco (albeit with the broadest possible coordinates) indicate that it might have come from the Twitter client misbehaving on my phone? (I didn’t follow Matthew Anderson and my phone was on my desk when this was tweeted — so this would be the app going seriously loco.) And what I’m most dying to know: what other tweets refer to the photos from Matthew’s tweet? (I gather that DataSift can answer this question, but I’m not a DataSift customer and they don’t appear to have a free tier.) If there’s a server-side bug afoot here, it wouldn’t be surprising if I’m not the only one affected.
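While I can’t answer the API questions, spotting this kind of borrowed media doesn’t require API access at all: the cross-reference sits right in the tweet’s entities. A minimal sketch of the check, using a copy of the raw tweet above trimmed to just the relevant fields (all values are taken verbatim from it):

```python
import json

# A trimmed-down copy of the raw tweet above -- only the fields needed
# to trace where the media came from; everything else is elided.
raw = """
{
  "id_str": "775392664602554369",
  "entities": {
    "media": [
      {
        "id_str": "775378240244449280",
        "expanded_url": "https://twitter.com/MattAndersonBBC/status/775378264772775936/photo/1",
        "source_status_id_str": "775378264772775936",
        "source_user_id_str": "1193503572"
      }
    ]
  }
}
"""

tweet = json.loads(raw)

# A media entity carrying a source_status_id_str different from the tweet's
# own id was originally attached to someone else's tweet -- a borrowed
# photo, not a fresh upload from this account.
for media in tweet["entities"].get("media", []):
    source = media.get("source_status_id_str")
    if source and source != tweet["id_str"]:
        print(f"media {media['id_str']} originated in tweet {source}")
```

Run against the full tweet, this flags both photos as originating in Matt Anderson’s tweet — which is exactly the smoking gun: the “hacked” tweet didn’t upload anything, it merely referenced someone else’s media.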

I’m not sure I’m ever going to know the answers to these questions, but I’m leaving the tweet up there in hopes that it will provide some clues — and with the belief that the villain in the story, if ever brought to justice, will be a member of the shadowy cabal that I have fought my entire career: busted software.

Posted on September 13, 2016 at 12:11 am by bmc

dtrace.conf(16) wrap-up

Something that got a little lost in the excitement of Samsung’s recent acquisition of Joyent was dtrace.conf(16), our quadrennial (!) unconference on DTrace. The videos are up, and in the spirit of Adam Leventhal‘s excellent wrap-ups from dtrace.conf(08) and dtrace.conf(12), I wanted to provide a survey of the one-day conference and its content.

Once again, it was an eclectic mix of technologists — and once again, the day got kicked off with me providing an introduction to dtrace.conf and its history. (Just to save you the time filling out your Cantrill Presentation Bingo Card: you can find me punching myself at 16:19, me offering unsolicited personal medical history at 20:11, and me getting trolled by unikernels at 38:25.)

Some at the conference (okay, one) had never seen or used DTrace before, so to get brains warmed up, Max Bruning gave a quick introduction to DTrace — featuring a real problem. (The problem that Max examined involves a process running in an LX-branded zone on SmartOS.)

Next up was Graeme Jenkinson from the University of Cambridge on distributed tracing featuring CADETS, a system with a DTrace-inspired query language called Event Query.

We then started a troika of presentations on core DTrace improvements, starting with George Neville-Neil on core D language improvements, some of which have been prototyped and others of which represent a wish list. This led into Matt Ahrens (of ZFS fame) on D syntactic sugar: language extensions that Matt implemented for D to make it easier to write more complicated (and more maintainable) scripts. A singular point of pride for me personally is how much DTrace was used to implement and debug ZFS: Matt has used DTrace as much as anyone, and anything that he feels he needs to make his life easier is something that will almost certainly benefit all DTrace users. First among these: support for “if”/“else”, a change that has, since dtrace.conf, gone through code review and is poised to integrate into DTrace! The final presentation in this segment on core improvements was Joyent engineer Robert Mustacchi on CTF everywhere, outlining work that Robert and Adam did in 2013 to bring user-level CTF understanding to DTrace.

The next group of presentations focused more on how DTrace is used, kicking off with Eric Saxby on DTracing applications, with a particular focus on instrumenting Ruby programs using Chris Andrews‘ excellent libusdt. When instrumenting upstack, we found that it’s useful for DTrace to pull apart JSON — and Joyent engineer Joshua Clulow presented next on the DTrace JSON subroutine that he implemented a few years ago. (And because I know you’re dying to know: Josh’s presentation is on vtmc, terminal-based presentation software unsurprisingly of his own creation.) Wrapping up this section was Dan McDonald talking about the challenges of DTrace-based dynamic asserts: because of the ubiquity of asserts, we really need to add is-enabled probes to the kernel to support dynamic asserts — an improvement that is long overdue, and that we will hopefully implement before 2020!

In the penultimate group of presentations, we got into some system-specific instrumentation and challenges, starting with James Nugent on DTrace and Go. The problem he refers to as preventing “Go and DTrace from working very well together” is the fact that Go doesn’t preserve frame pointers on x86 — but the good news is that this has changed and frame pointers will be preserved starting in 1.7, making DTrace on Go much more useful! After James, Joyent engineer Dave Pacheco described his experiences of using DTrace and Postgres. For our Manta object storage system, Postgres is a critical component and understanding it dynamically and in production has proved essential for us. George Neville-Neil then took the stage again to discuss performance improvements with always-on instrumentation. (Notable quote: “this is being recorded, but I’ll say it anyway.”) Gordon Marler from Bloomberg discussed the challenges of instrumenting massive binaries with thousands of shared objects, consisting of multiple languages (C, C++ and — pause for effect — Fortran 77) and millions of symbols (!!) — which necessitated DTrace ustack performance improvements via some custom (and optimized) postprocessing.

The final group of presentations kicked off with Joyent engineer Alex Wilson and me presenting DTrace in the zone and the DTrace privilege model, which is an important (if ominous) precursor for a very interesting presentation: security researcher Ben Murphy describing his diabolically clever work on DTrace exploitation.

As the last presentation of the day (as we felt it would be a good segue to drinks), George Neville-Neil led a brief discussion on what he calls OpenDTrace — but is really about sharing the DTrace code more effectively across systems. (DTrace itself is entirely open source, so “OpenDTrace” is something of a redundancy.) George kicked off an OpenDTrace organization on GitHub and it currently holds scripts and the DTrace toolkit, with the aspirations of potentially mirroring the OpenZFS effort to encourage cross-platform development and collaboration.

After George wrapped up, we celebrated the passing of another quadrennial in traditional DTrace fashion: with cans of Tecate and exhilarating rounds of Fishpong. And we have a bonus for you, dear reader, for managing to read this far: if you weren’t able to make it to the conference, we have a few extra dtrace.conf(16) t-shirts. To get one of these, e-mail us your size, address, and maybe a sentence or two on how you use or have used DTrace. Supplies are (obviously) limited; if you miss out, you’ll have to wait until the next dtrace.conf in 2020!

Posted on July 29, 2016 at 12:00 pm by bmc

Unikernels are unfit for production

Recently, I made the mistake of rhetorically asking if I needed to spell out why unikernels are unfit for production. The response was overwhelming: whether people feel that unikernels are wrong-headed and are looking for supporting detail or are unikernel proponents and want to know what the counter-arguments could possibly be, there is clearly a desire to hear the arguments against running unikernels in production.

So, what’s the problem with unikernels? Let’s get a definition first: a unikernel is an application that runs entirely in the microprocessor’s privileged mode. (The exact nomenclature varies; on x86 this would be running at Ring 0.) That is, in a unikernel there is no application at all in a traditional sense; instead, application functionality has been pulled into the operating system kernel. (The idea that there is “no OS” serves to mislead; it is not that there isn’t an operating system but rather that the application has taken on the hardware-interfacing responsibilities of the operating system — it is “all OS”, if a crude and anemic one.) Before we discuss the challenges with this, it’s worth first exploring the motivations for unikernels — if only because they are so thin…

The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster. In the case of unikernels, these arguments are specious on their face: between the complexity of modern platform runtimes and the performance of modern microprocessors, one does not typically find that applications are limited by user-kernel context switches. And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever. As I have expanded on in the past, virtualizing at the hardware layer carries with it an inexorable performance tax: by having the system that can actually see the hardware (namely, the hypervisor) isolated from the system that can actually see the app (the guest operating system) efficiencies are lost with respect to hardware utilization (e.g., of DRAM, NICs, CPUs, I/O) that no amount of willpower and brute force can make up. But it’s not worth dwelling on performance too much; let’s just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.
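To put a rough number on the claim, here is a minimal (and admittedly crude) sketch in Python that times a trivial system call against a pure user-mode call. The figures will vary by hardware, and both measurements are inflated by interpreter overhead, so treat this as an illustration rather than a benchmark; the point is that the user-kernel crossing is typically well under a microsecond on modern hardware.

```python
import os
import time

def measure(fn, iterations=200_000):
    """Return the average cost of calling fn, in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn()
    return (time.perf_counter_ns() - start) / iterations

def user_only():
    pass  # never leaves user mode

# os.getpid() crosses the user-kernel boundary on each call
# (modern libc no longer caches the pid in user space).
syscall_ns = measure(os.getpid)
user_ns = measure(user_only)

print(f"syscall: ~{syscall_ns:.0f} ns/call; user-mode call: ~{user_ns:.0f} ns/call")
```

If an application were truly bound by this delta, eliminating the boundary might matter; in practice the runtime and the workload dwarf it.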

The other reason given by unikernel proponents is that unikernels are “more secure”, but it’s unclear what the intellectual foundation for this argument actually is. Yes, unikernels often run less software (and thus may have less attack surface) — but there is nothing about unikernels in principle that leads to less software. And yes, unikernels often run new or different software (and are therefore not vulnerable to the OpenSSL vuln-of-the-week) but this security-through-obscurity argument could be made for running any new, abstruse system. The security arguments also seem to whistle past the protection boundary that unikernels very much depend on: the protection boundary between guest OS’s afforded by the underlying hypervisor. Hypervisor vulnerabilities emphatically exist; one cannot play up Linux kernel vulnerabilities as a silent menace while simultaneously dismissing hypervisor vulnerabilities as imaginary. To the contrary, by depriving application developers of the tools of a user protection boundary, the principle of least privilege is violated: any vulnerability in an application tautologically roots the unikernel. In the world of container-based deployment, this takes a thorny problem — secret management — and makes it much nastier (and with much higher stakes). At best, unikernels amount to security theater, and at worst, a security nightmare.

The final reason often given by proponents of unikernels is that they are small — but again, there is nothing tautologically small about unikernels! Speaking personally, I have done kernel implementation on small kernels and big ones; you can certainly have lean systems without resorting to the equivalent of a gastric bypass with a unikernel! (I am personally a huge fan of Alpine Linux as a very lean user-land substrate for Linux apps and/or Docker containers.) And to the degree that unikernels don’t contain much code, it seems more by infancy (and, for the moment, irrelevancy) than by design. But it would be a mistake to measure the size of a unikernel only in terms of its code, and here too unikernel proponents ignore the details of the larger system: because a unikernel runs as a guest operating system, the DRAM allocated by the hypervisor for that guest is consumed in its entirety — even if the app itself isn’t making use of it. Because running out of memory remains one of the most pernicious of application failure modes (especially in dynamic environments), memory sizing tends to be overengineered in that requirements are often blindly doubled or otherwise slopped up. In the unikernel model, any such slop is lost — nothing else can use it because the hypervisor doesn’t know that it isn’t, in fact, in use. (This is in stark contrast to containers in which memory that isn’t used by applications is available to be used by other containers, or by the system itself.) So here again, the argument for unikernels becomes much more nuanced (if not rejected entirely) when the entire system is considered.
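The DRAM accounting here is easy to sketch with back-of-the-envelope arithmetic. The numbers below are entirely hypothetical (a fleet of forty services with the customary doubled memory sizing), but the shape of the result holds in general:

```python
# Hypothetical fleet: 40 services, each actually using ~512 MB but
# provisioned at 1024 MB (the "blindly doubled" sizing slop).
services = 40
used_mb = 512
provisioned_mb = 1024

# Unikernel-as-guest: the hypervisor reserves the full provisioned
# DRAM per guest, whether or not the app ever touches it.
unikernel_dram = services * provisioned_mb

# Containers: pages the app doesn't use remain available to other
# containers or to the system, so the substrate needs roughly only
# what is actually used.
container_dram = services * used_mb

print(f"unikernel guests: {unikernel_dram} MB reserved; "
      f"containers: ~{container_dram} MB; "
      f"slop lost to the hypervisor: {unikernel_dram - container_dram} MB")
```

Half the fleet's memory, gone to sizing slop that the hypervisor cannot see past.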

So those are the reasons for unikernels: perhaps performance, a little security theater, and a software crash diet. As tepid as they are, these reasons constitute the end of the good news from unikernels. Everything else from here on out is bad news: costs that must be borne to get to those advantages, however flimsy.

The disadvantages of unikernels start with the mechanics of an application itself. When the operating system boundary is obliterated, one may have eliminated the interface for an application to interact with the real world of the network or persistent storage — but one certainly hasn’t forsaken the need for such an interface! Some unikernels (like OSv and Rumprun) take the approach of implementing a “POSIX-like” interface to minimize disruption to applications. Good news: apps kinda work! Bad news: did we mention that they need to be ported? And here’s hoping that your app’s “POSIX-likeness” doesn’t extend to fusty old notions like creating a process: there are no processes in unikernels, so if your app depends on this (ubiquitous, four-decades-old) construct, you’re basically hosed. (Or worse than hosed.)

If this approach seems fringe, things get much further afield with language-specific unikernels like MirageOS that deeply embed a particular language runtime. On the one hand, allowing implementation only in a type-safe language allows for some of the acute reliability problems of unikernels to be circumvented. On the other hand, hope everything you need is in OCaml!

So there are some issues getting your app to work, but let’s say you’re past all this: either the POSIX surface exposed by your unikernel of choice is sufficient for your app (or platform), or it’s already written in OCaml or Erlang or Haskell or whatever. Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production — and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable. There are no processes, so of course there is no ps, no htop, no strace — but there is also no netstat, no tcpdump, no ping! And these are just the crude, decades-old tools. There is certainly nothing modern like DTrace or MDB. From a debugging perspective, to say this is primitive understates it: this isn’t paleolithic — it is precambrian. As one who has spent my career developing production systems and the tooling to debug them, I find the implicit denial of debugging production systems to be galling, and symptomatic of a deeper malaise among unikernel proponents: total lack of operational empathy. Production problems are simply hand-waved away — services are just to be restarted when they misbehave. This attitude — even when merely implied — is infuriating to anyone who has ever been responsible for operating a system. (And lest you think I’m an outlier on this issue, listen to the applause in my DockerCon 2015 talk after I emphasized the need to debug systems rather than restart them.) And if it needs to be said, this attitude is angering because it is wrong: if a production app starts to misbehave because of a non-fatal condition like (say) listen drops, restarting the app is inducing disruption at the worst possible time (namely, when under high load) and doesn’t drive at all towards the root cause of the problem (an insufficient backlog).
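To make the listen-drop example concrete: the knob in question is the backlog passed to listen(), which bounds the queue of completed connections awaiting accept(). Raising it, not restarting the app, is what drives at the root cause. A minimal sketch in Python (the backlog value is a hypothetical sizing):

```python
import socket

# The backlog bounds the queue of completed connections awaiting
# accept(); when that queue is full, further connection attempts are
# dropped ("listen drops"). The value below is a hypothetical sizing,
# and the kernel caps it anyway (net.core.somaxconn on Linux,
# kern.somaxconn on the BSDs).
BACKLOG = 1024

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))  # ephemeral port
server.listen(BACKLOG)

# Sanity check: a client connects, is accepted, and sends a message.
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()
client.sendall(b"ping")
received = conn.recv(4)
for s in (conn, client, server):
    s.close()
print("received:", received)
```

On a system with real observability, netstat -s (or the equivalent kernel counters) tells you the drops are happening in the first place; in a unikernel, good luck.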

Now, could one implement production debugging tooling in unikernels? In a word, no: debugging tooling very often crosses the user-kernel boundary, and is most effective when leveraging the ad hoc queries that the command line provides. The organs that provide this kind of functionality have been deliberately removed from unikernels in the name of weight loss; any unikernel that provides sufficiently sophisticated debugging tooling to be used in production would be violating its own dogma. Unikernels are unfit for production not merely as implemented but as conceived: they cannot be understood when they misbehave in production — and by their own assertions, they never will be able to be.

All of this said, I do find some common ground with proponents of unikernels: I agree that the container revolution demands a much leaner, more secure and more efficient run-time than a shared Linux guest OS running on virtual hardware — and at Joyent, our focus over the past few years has been delivering exactly that with SmartOS and Triton. While we see a similar problem as unikernel proponents, our approach is fundamentally different: instead of giving up on the notion of secure containers running on a multi-tenant substrate, we took the already-secure substrate of zones and added to it the ability to natively execute Linux binaries. That is, we chose to leverage advances in operating systems rather than deny their existence, bringing to Linux and Docker not only secure on-the-metal containers, but also critical advances like ZFS, Crossbow and (yes) DTrace. This merits a final reemphasis: our focus on production systems is reflected in everything we do, but most especially in our extensive tooling for debugging production systems — and by bringing this tooling to the larger world of Linux containers, Triton has already allowed for production debugging that we never before would have thought possible!

In the fullness of time, I think that unikernels will be most productive as a negative result: they will primarily serve to demonstrate the impracticality of their approach for production systems. As such, they will join transactional memory and the M-to-N scheduling model as breathless systems software fads that fell victim to the merciless details of reality. But you needn’t take my word for it: as I intimated in my tweet, undebuggable production systems are their own punishment — just kindly inflict them upon yourself and not the rest of us!

Posted on January 22, 2016 at 8:48 am by bmc · Permalink · Comments Closed
In: Uncategorized

Bringing clarity to containers

At the beginning of the year, I laid down a few predictions. While I refuse on principle to engage in Stephen O’Grady-style self-flagellation, I do think it’s worth revisiting the headliner prediction, namely that 2015 is the year of the container. I said at the time that it wasn’t particularly controversial, and I don’t think it’s controversial now: 2015 was the year of the container, and one need look no further than the explosion of container conferences with container camps and container summits and container cons.

My second prediction was marginally more subtle: that the impedance mismatch between containers in development and containers in production would be a wellspring of innovation. If anything, this understated the case: the wellspring turned out to be more like an open sluice, and 2015 saw the world flooded with multiple ways of doing seemingly everything when it comes to containers. That all of these technologies and frameworks are open source has served to accelerate them all, and mutations abound (Hypernetes, anyone?).

On the one hand this is great, as we all benefit by so many people exploring so many different ideas. But on the other hand, the flurry of choice can become a blizzard of confusion — especially when and where there is seemingly overlap between two technologies. (Or worse, when two overlapping and opinionated technologies disagree ardently on those opinions!) This slide from Karl Isenberg of Mesosphere at KubeCon last month captured it; the point is neither about the specific technologies (as Karl noted, plenty are missing) nor about the specific layers (many would likely quibble with some of the details of Karl’s taxonomy) but rather about the explosion of abstraction (and concomitant confusion) in this domain.

One of the biggest challenges that we have in containers heading into 2016 is that this confusion now presents significant headwinds for early adopters and second-movers alike. This has become so acute that I posed a question to KubeCon attendees: are we at or near Peak Confusion in the container space? The conclusion among everyone I spoke with (vendors, developers, operators and others) was that we’re nowhere near Peak Confusion — with many even saying that confusion is still accelerating. (!) Even for those of us who have been in containers for years, this has been a little terrifying — and I can imagine for those entirely new to containers, it’s downright paralyzing.

So, what’s to be done? I think much of the responsibility lies with the industry: instead of viewing containers as new territory for conquest, we must take it upon ourselves to assure for users an interoperable and composable future — one in which technologies can differentiate themselves based on the qualities of their implementation rather than the voraciousness of their appetite. Lest this sound utopian, it is this same ethos that underlies our modern internet, as facilitated by the essential work of the Internet Engineering Task Force (IETF). Thanks to the IETF and its ethos of “rough consensus and running code” we ended up with the interoperable internet. (Indeed, this text itself was brought to you by RFC 791, RFC 793, RFC 1034, and RFC 2616 — among many, many others.)

As for an entity that can potentially serve an IETF-like role for container-based computing, I look with guarded optimism to today’s launch of the Cloud Native Computing Foundation. Joyent has been involved with the CNCF since its inception, and based on what we’ve seen so far, we see great promise for it in 2016 and beyond. We believe that by elucidating component boundaries and by fostering open source projects that share the values of interoperability and composability, the CNCF can combine the best attributes of both the IETF and the Apache Foundation: rough consensus and running, open source software that allows elastic, container-deployed, service-oriented infrastructure. If the CNCF can do this it will (we believe) serve a vital mission for practitioners: displace confusion with clarity — and therefore accelerate our collective cloud-native future!

Posted on December 17, 2015 at 3:21 pm by bmc · Permalink · Comments Closed
In: Uncategorized

Requests for discussion

One of the exciting challenges of being an all open source company is figuring out how to get design conversations out of the lunch time discussion and the private IRC/Jabber/Slack channels and into the broader community. There are many different approaches to this, and the most obvious one is to simply use whatever is used for issue tracking. Issue trackers don’t really fit the job, however: they don’t allow for threading; they don’t really allow for holistic discussion; they’re not easily connected with a single artifact in the repository, etc. In short, even on projects with modest activity, using issue tracking for design discussions causes the design discussions to be drowned out by the defects of the day — and on projects with more intense activity, it’s total mayhem.

So if issue tracking doesn’t fit, what’s the right way to have an open source design discussion? Back in the day at Sun, we had the Software Development Framework (SDF), which was a decidedly mixed bag. While it was putatively shrink-to-fit, in practice it felt too much like a bureaucratic hurdle with concomitant committees and votes and so on — and it rarely yielded productive design discussion. That said, we did like the artifacts that it produced, and even today in the illumos community we find that we go back to the Platform Software Architecture Review Committee (PSARC) archives to understand why things were done a particular way. (If you’re looking for some PSARC greatest hits, check out PSARC 2002/174 on zones, PSARC 2002/188 on least privilege or PSARC 2005/471 on branded zones.)

In my experience, the best part of the SDF was also the most elemental: it forced things to be written down in a forum reserved for architectural discussions, which alone forced some basic clarity on what was being built and why. At Joyent, we have wanted to capture this best element of the SDF without crippling ourselves with process — and in particular, we have wanted to allow engineers to write down their thinking while it is still nascent, such that it can be discussed when there is still time to meaningfully change it! This thinking, as it turns out, is remarkably close to the original design intent of the IETF’s Request for Comments, as expressed in RFC 3:

The content of a note may be any thought, suggestion, etc. related to the software or other aspect of the network. Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable. The minimum length for a note is one sentence.

These standards (or lack of them) are stated explicitly for two reasons. First, there is a tendency to view a written statement as ipso facto authoritative, and we hope to promote the exchange and discussion of considerably less than authoritative ideas. Second, there is a natural hesitancy to publish something unpolished, and we hope to ease this inhibition.

We aren’t the only ones to be inspired by the IETF’s venerable RFCs, and the language communities in particular seem to be good at this: Java has Java Specification Requests, Python has Python Enhancement Proposals, Perl has the (oddly named) Perl 6 apocalypses, and Rust has Rust RFCs. But the other systems software communities have been nowhere near as structured about their design discussions, and you are hard-pressed to find similar constructs for operating systems, databases, container management systems, etc.

Encouraged by what we’ve seen by the language communities, we wanted to introduce RFCs for the open source system software that we lead — but because we deal so frequently with RFCs in the IETF context, we wanted to avoid the term “RFC” itself: IETF RFCs tend to be much more formalized than the original spirit, and tend to describe an agreed-upon protocol rather than nascent ideas. So to avoid confusion with RFCs while still capturing some of what they were trying to solve, we have started a Requests for Discussion (RFD) repository for the open source projects that we lead. We will announce an RFD on the mailing list that serves the community (e.g., sdc-discuss) to host the actual discussion, with a link to the corresponding directory in the repo that will host artifacts from the discussion. We intend to kick off RFDs for the obvious things like adding new endpoints, adding new commands, adding new services, changing the behavior of endpoints and commands, etc. — but also for the less well-defined stuff that captures earlier thinking.

Finally, for the RFD that finally got us off the mark on doing this, see RFD 1: Triton Container Naming Service. Discussion very much welcome!

Posted on September 16, 2015 at 12:07 pm by bmc · Permalink · One Comment
In: Uncategorized

Software: Immaculate, fetid and grimy

Once, long ago, there was an engineer who broke the operating system particularly badly. Now, if you’ve implemented important software for any serious length of time, you’ve seriously screwed up at least once — but this was notable for a few reasons. First, the change that the engineer committed was egregiously broken: the machine that served as our building’s central NFS server wasn’t even up for 24 hours running the change before the operating system crashed — an outcome so bad that the commit was unceremoniously reverted (which we called a “backout”). Second, this wasn’t the first time that the engineer had been backed out; being backed out was serious, and that this had happened before was disconcerting. But most notable of all: instead of taking personal responsibility for it, the engineer had the audacity to blame the subsystem that had been the subject of the change. Now on the one hand, this wasn’t entirely wrong: the change had been complicated and the subsystem that was being modified was a bit of a mess — and it was arguably a preexisting issue that had merely been exposed by the change. But on the other hand, it was the change that exposed it: the subsystem might have been brittle with respect to such changes, but it had at least worked correctly prior to it. My conclusion was that the problem wasn’t the change per se, but rather the engineer’s decided lack of caution when modifying such a fragile subsystem. While the recklessness had become a troubling pattern for this particular engineer, it seemed that there was a more abstract issue: how does one safely make changes to a large, complicated, mature software system?

Hoping to channel my frustration into something positive, I wrote up an essay on the challenges of developing Solaris, and sent it out to everyone doing work on the operating system. The taxonomy it proposed turned out to be useful and embedded itself in our engineering culture — but the essay itself remained private (it pre-dated blogs.sun.com by several years). When we opened the operating system some years later, the essay was featured on opensolaris.org. But as that’s obviously been ripped down, and because the taxonomy seems to hold as much as ever, I think it’s worth reiterating; what follows is a polished (and lightly updated) version of the original essay.

In my experience, large software systems — be they proprietary or open source — have a complete range of software quality within their many subsystems.

Immaculate

Some subsystems you find are beautiful works of engineering — they are squeaky clean, well-designed and well-crafted. These subsystems are a joy to work in but (and here’s the catch) by virtue of being well-designed and well-implemented, they generally don’t need a whole lot of work. So you’ll get to use them, appreciate them, and be inspired by them — but you probably won’t spend much time modifying them. (And because these subsystems are such a pleasure to work in, you may find that the engineer who originally did the work is still active in some capacity — or that there is otherwise a long line of engineers eager to do any necessary work in such a rewarding environment.)

Fetid

Other subsystems are cobbled-together piles of junk — reeking garbage barges that have been around longer than anyone remembers, floating from one release to the next. These subsystems have little-to-no comments (or what comments they have are clearly wrong), are poorly designed, needlessly complex, badly implemented and virtually undebuggable. There are often parts that work by accident, and unused or little-used parts that simply never worked at all. They manage to survive for one or more of the following reasons:

If you find yourself having to do work in one of these subsystems, you must exercise extreme caution: you will need to write as many test cases as you can think of to beat the snot out of your modification, and you will need to perform extensive self-review. You can try asking around for assistance, but you’ll quickly discover that no one is around who understands the subsystem. Your code reviewers probably won’t be able to help much either — maybe you’ll find one or two people that have had the same misfortune that you find yourself experiencing, but it’s more likely that you will have to explain most aspects of the subsystem to your reviewers. You may discover as you work in the subsystem that maintaining it is simply untenable — and it may be time to consider rewriting the subsystem from scratch. (After all, most of the subsystems that are in the first category replaced subsystems that were in the second.) One should not come to this decision too quickly — rewriting a subsystem from scratch is enormously difficult and time-consuming. Still, don’t rule it out a priori.

Even if you decide not to rewrite such a subsystem, you should improve it while you’re there in manners that don’t introduce excessive risk. For example, if something took you a while to figure out, don’t hesitate to add a block comment to explain your discoveries. And if it was a pain in the ass to debug, you should add the debugging support that you found lacking. This will make it slightly easier on the next engineer — and it will make it easier on you when you need to debug your own modifications.

Grimy

Most subsystems, however, don’t actually fall neatly into either of these categories — they are somewhere in the middle. That is, they have parts that are well thought-out, or design elements that are sound, but they are also littered with implicit intradependencies within the subsystem or implicit interdependencies with other subsystems. They may have debugging support, but perhaps it is incomplete or out of date. Perhaps the subsystem effectively met its original design goals, but it has been extended to solve a new problem in a way that has left it brittle or overly complex. Many of these subsystems have been fixed to the point that they work reliably — but they are delicate and they must be modified with care.

The majority of work that you will do on existing code will be to subsystems in this last category. You must be very cautious when making changes to these subsystems. Sometimes these subsystems have local experts, but many changes will go beyond their expertise. (After all, part of the problem with these subsystems is that they often weren’t designed to accommodate the kind of change you might want to make.) You must extensively test your change to the subsystem. Run your change in every environment you can get your hands on, and don’t be content that the software seems to basically work — you must beat the hell out of it. Obviously, you should run any tests that might apply to the subsystem, but you must go further. Sometimes there is a stress test available that you may run, but this is not a substitute for writing your own tests. You should review your own changes extensively. If it’s multithreaded, are you obeying all of the locking rules? (What are the locking rules, anyway?) Are you building implicit new dependencies into the subsystem? Are you using interfaces in a new way that may present some new risk? Are the interfaces that the subsystem exports being changed in a way that violates an implicit assumption that one of the consumers was making? These are not questions with easy answers, and you’ll find that it will often be grueling work just to gain confidence that you are not breaking or being broken by anything else.
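To make the locking-rules question concrete, the kind of rule a reviewer should be checking for is a documented lock ordering that every code path obeys. A minimal sketch in Python (the subsystem, its locks, and the names are hypothetical; the discipline is the point):

```python
import threading

# Hypothetical subsystem with two locks. The documented rule: always
# acquire account_lock before audit_lock. Any path that takes them in
# the other order can deadlock against this one, which is exactly the
# implicit intradependency that self-review must catch.
account_lock = threading.Lock()
audit_lock = threading.Lock()

balance = 0
audit_log = []

def deposit(amount):
    with account_lock:      # first, per the documented lock order
        with audit_lock:    # second, and only while holding the first
            global balance
            balance += amount
            audit_log.append(("deposit", amount))

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance, len(audit_log))
```

A new code path that grabbed audit_log under audit_lock alone, or took the locks in reverse, would pass a casual test run and still be wrong; that is why the review question matters.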

If you think you’re done, review your changes again. Then, print your changes out, take them to a place where you can concentrate, and review them yet again. And when you review your own code, review it not as someone who believes that the code is right, but as someone who is certain that the code is wrong: review the code as if written by an archrival who has dared you to find anything wrong with it. As you perform your self-review, look for novel angles from which to test your code. Then test and test and test.

It can all be summed up by asking yourself one question: have you reviewed and tested your change every way that you know how? You should not even contemplate pushing until your answer to this is an unequivocal YES. Remember: you are (or should be!) always empowered as an engineer to take more time to test your work. This is true of every engineering team that I have ever or would ever work on, and it’s what makes companies worth working for: engineers that are empowered to do the Right Thing.

Production quality all the time

You should assume that once you push, the rest of the world will be running your code in production. If the software that you’re developing matters, downtime induced by it will be painful and expensive. But if the software matters so much, who would be so far out of their mind as to run your changes so shortly after they integrate? Because software isn’t (or shouldn’t be) fruit that needs to ripen as it makes its way to market — it should be correct when it’s integrated. And if we don’t demand production quality all the time, we are concerned that we will be gripped by the Quality Death Spiral. The Quality Death Spiral is much more expensive than a handful of outages, so it’s worth the risk — but you must do your part by delivering production quality all the time.

Does this mean that you should contemplate ritual suicide if you introduce a serious bug? Of course not — everyone who has made enough modifications to delicate, critical subsystems has introduced a change that has induced expensive downtime somewhere. We know that this will be so because writing system software is just so damned tricky and hard. Indeed, it is because of this truism that you must demand of yourself that you not integrate a change until you are out of ideas of how to test it. Because you will one day introduce a bug of such subtlety that it will seem that no one could have caught it.

And what do you do when that awful, black day arrives? Here’s a quick coping manual from those of us who have been there:

But most importantly, you must ask yourself: what could I have done differently? If you honestly don’t know, ask a fellow engineer to help you. We’ve all been there, and we want to make sure that you are able to learn from it. Once you have an answer, take solace in it; no matter how bad you feel for having introduced a problem, you can know that the experience has improved you as an engineer — and that’s the most anyone can ask for.

Posted on September 3, 2015 at 4:42 pm by bmc · Permalink · One Comment
In: Uncategorized