The Observation Deck

Two years ago, I had a blog entry describing falling in love with Rust. Of course, a relationship with a technology is like any other relationship: as novelty and infatuation wear off, it can get on a longer-term (and often more realistic and subdued) footing — or it can begin to fray. So one might well ask: how is Rust after the honeymoon?

By way of answering that, I should note that about a year ago (and a year into my relationship with Rust) we started Oxide. On the one hand, the name was no accident — we saw Rust playing a large role in our future. But on the other, we hadn’t yet started to build in earnest, so it was really more a pointed question than an assertion: where might Rust fit in a stack that stretches from the bowels of firmware through a hypervisor and control plane and into the lofty heights of REST APIs?

The short answer from an Oxide perspective is that Rust has proven to be a really good fit — remarkably good, honestly — at more or less all layers of the stack. You can expect much, much more to come from Oxide on this (we intend to open source more or less everything we’re building), but for a teaser of the scope, you can see it in the work of Oxide engineers: see Cliff’s blog, Adam and Dave’s talk on Dropshot, Jess on using Dropshot within Oxide, Laura on Rust macros, and Steve Klabnik on why he joined Oxide. (Requisite aside: we’re hiring!)

So Rust is going really well for us at Oxide, but for the moment I want to focus on more personal things — reasons that I personally have enjoyed implementing in Rust. These run the gamut: some are tiny but beautiful details that allow me to indulge in the pleasure of the craft; some are much more profound features that represent important advances in the state of the art; and some are bodies of software developed by the Rust community, notable as much for their reflection of who is attracted to Rust (and why) as for the artifacts themselves. It should also be said that I stand by absolutely everything I said two years ago; this is not a replacement for that list, but rather a supplement to it. Finally, this list is highly incomplete; there’s a lot to love about Rust, and this shouldn’t be thought of as in any way exhaustive!

1. no_std

When developing for embedded systems — and especially for the flotilla of microcontrollers that surround a host CPU on the kinds of servers we’re building at Oxide — memory use is critical. Historically, C has been the best fit for these applications simply because it is so lean: by providing essentially nothing other than the portable assembler that is the language itself, it avoids the implicit assumptions (and girth) of a complicated runtime. But the nothing that C provides reflects history more than minimalism; it is not an elegant nothing, but rather an ill-considered nothing that leaves those who build embedded systems building effectively everything themselves — and in a language that does little to help them write correct software.

Meanwhile, having been generally designed around modern machines with seemingly limitless resources, higher-level languages and environments are simply too full-featured to fit into (say) tens of kilobytes or into the (highly) constrained environment of a microcontroller. And even where one could cajole these other languages into the embedded use case, it has generally been as a reimplementation, leaving developers on a fork that isn’t necessarily benefiting from development in the underlying language.

Rust has taken a different approach: a rich, default standard library but also a first-class mechanism for programs to opt out of that standard library. By marking themselves as no_std, programs confine themselves to the functionality found in libcore. This functionality, in turn, makes no system assumptions — and in particular, performs no heap allocations. This is not easy for a system to do; it requires extraordinary discipline by those developing it (who must constantly differentiate between core functionality and standard functionality) and a broad empathy with the constraints of embedded software. Rust is blessed with both, and the upshot is remarkable: a safe, powerful language that can operate in the highly constrained environment of a microcontroller — with binaries every bit as small as those generated by C. This makes no_std — as Cliff has called it — the killer feature of embedded Rust, without real precedent or analogue.
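
As a concrete (if hedged) illustration, here is roughly what the top of a no_std program looks like: a bare-metal skeleton that won’t build for a hosted target as-is, and whose entry point name and linkage are target-dependent and purely illustrative.

```rust
// Opt out of the standard library: only libcore is available, and
// nothing here can (or will) allocate from a heap.
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With libstd gone, the program must supply its own panic handler.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// The entry point is likewise up to the program (and its linker
// script); the name and signature here are illustrative.
#[no_mangle]
pub extern "C" fn entry() -> ! {
    loop {}
}
```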

2. {:#x?}

Two years ago, I mentioned that I love format!, and in particular the {:?} format specifier. What took me longer to discover was {:#?}, which formats a structure but also pretty-prints it (i.e., with newlines and indentation). This can be coupled with {:#x} to yield {:#x?} which pretty-prints a structure in hex. So this:

    println!("dumping {:#x?}", region);

Becomes this:

dumping Region {
    daddr: Some(
        0x4db8,
    ),
    base: 0x10000,
    size: 0x8000,
    attr: RegionAttr {
        read: true,
        write: false,
        execute: true,
        device: false,
        dma: false,
    },
    task: Task(
        0x0,
    ),
}

My fingers now type {:#x?} by default, and hot damn is it ever nice!

3. Integer literal syntax

Okay, another small one: I love the Rust integer literal syntax! In hardware-facing systems, we are often expressing things in terms of masks that ultimately map to binary. It is beyond me why C thought to introduce octal and hexadecimal but not binary in their literal syntax; Rust addresses this gap with the same “0b” prefix as found in some non-standard C compiler extensions. Additionally, Rust allows for integer literals to be arbitrarily intra-delimited with an underscore character. Taken together, this allows for a mask consisting of bits 8 through 10 and bit 12 (say) to be expressed as 0b0000_1011_1000_0000 — which to me is clearer as to its intent and less error prone than (say) 0xb80 or 0b101110000000.

And as long as we’re on the subject of integer literals: I also love that the types (and the suffix that denotes a literal’s type) explicitly encode bit width and signedness. Instead of dealing with the implicit signedness and width of char, short, long and long long, we have u8, u16, u32, u64, etc. Much clearer!

4. DWARF support

Debugging software — and more generally, the debuggability of software systems — is in my marrow; it may come as no surprise that one of the things that I personally have been working on is the debugger for a de novo Rust operating system that we’re developing. To be useful, debuggers need help from the compiler in the way of type information — but this information has been historically excruciating to extract, especially in production systems. (Or as Robert phrased it concisely years ago: “the compiler is the enemy.”) And while DWARF is the de facto standard, it is only as good as the compiler’s willingness to supply it.

Given how much debuggability can (sadly) lag development, I wasn’t really sure what I would find with respect to Rust, but I have been delighted to discover thorough DWARF support. This is especially important for Rust because it (rightfully) makes extensive use of inlining; without DWARF support to make sense of this inlining, it can be hard to make any sense of the generated assembly. I have been able to use the DWARF information to build some pretty powerful Rust-based tooling — with much promise on the horizon. (You can see an early study for this work in Tockilator.)

5. Gimli and Goblin

Lest I sound like I am heaping too much praise on DWARF, let me be clear that DWARF is historically acutely painful to deal with. The specification (to the degree that one can call it that) is an elaborate mess, and the format itself seems to go out of its way to inflict pain on those who would consume it. Fortunately, the Gimli crate that consumes DWARF is really good, having made it easy to build DWARF-based tooling. (I have found that whenever I am frustrated with Gimli, I am, in fact, frustrated with some strange pedantry of DWARF — which Gimli rightfully refuses to paper over.)

In addition to Gimli, I have also enjoyed using Goblin to consume ELF. ELF — in stark contrast to DWARF — is tight and crisp (and the traditional C-based tooling for ELF is quite good), but it was nice nonetheless that Goblin makes it so easy to zing through an ELF binary.

6. Data-bearing enums

Enums — that is, the “sum” class of algebraic types — are core to Rust, and give it the beautiful error handling that I described falling in love with two years ago. Algebraic types allow much more than just beautiful error handling, e.g. Rust’s ubiquitous Option type, which allows for sentinel values to be eliminated from one’s code — and with it some significant fraction of defects. But it’s one thing to use these constructs, and another to begin to develop algebraic types for one’s own code, and I have found the ability for enums to optionally bear data to be incredibly useful. In particular, when parsing a protocol, one is often taking a stream of bytes and turning it into one of several different kinds of things; it is really, really nice to have the type system guide how software should consume the protocol. For example, here’s an enum that I defined when parsing data from ARM’s Embedded Trace Macrocell signal protocol:

#[derive(Copy, Clone, Debug)]
pub enum ETM3Header {
    BranchAddress { addr: u8, c: bool },
    ASync,
    CycleCount,
    ISync,
    Trigger,
    OutOfOrder { tag: u8, size: u8 },
    StoreFailed,
    ISyncCycleCount,
    OutOfOrderPlaceholder { a: bool, tag: u8 },
    VMID,
    NormalData { a: bool, size: u8 },
    Timestamp { r: bool },
    DataSuppressed,
    Ignore,
    ValueNotTraced { a: bool },
    ContextID,
    ExceptionExit,
    ExceptionEntry,
    PHeaderFormat1 { e: u8, n: u8 },
    PHeaderFormat2 { e0: bool, e1: bool },
}

That variants can have wildly different types (and that some can bear data while others don’t — and some can be structured, while others are tuples) allows for the type definition to closely match the specification, and helps higher-level software consume the protocol correctly.

7. Ternary operations

In C, the ternary operator allows for a terse conditional expression that can be used as an rvalue, e.g.:

	x = is_foo ? foo : bar;

This is equivalent to:

	if (is_foo) {
		x = foo;
	} else {
		x = bar;
	}

This construct is particularly valuable when not actually assigning to an lvalue, but when (for example) returning a value or passing a parameter. And indeed, I would estimate that a plurality — if not a majority — of my lifetime-use of the ternary operator has been in arguments to printf.

While Rust has no ternary operator per se, it is expression-oriented: statements have values. So the above example becomes:

	x = if is_foo { foo } else { bar };

That’s a bit more verbose than its C equivalent (though I personally like its explicitness), but it really starts to shine when things get marginally more complicated: nested ternary operators get gnarly in C, but they are easy to follow as simple nested if-then-else statements in Rust. And (of course) match is an expression as well — and I find that I often use match where I would have used a ternary operator in C, with the added benefit that I am forced to deal with every case. As a concrete example, take this code that is printing a slice of little-endian bytes as an 8-bit, 16-bit, or 32-bit quantity depending on a size parameter:

    print!("{:0width$x} ",
        match size {
            1 => line[i - offs] as u32,
            2 => u16::from_le_bytes(slice.try_into().unwrap()) as u32,
            4 => u32::from_le_bytes(slice.try_into().unwrap()) as u32,
            _ => {
                panic!("invalid size");
            }
        },
        width = size * 2
    );

For me, this is all of the power of the ternary operator, but without its pitfalls!

An interesting footnote on this: Rust once had the C-like ternary operator, but removed it, as the additional syntax didn’t carry its weight. This pruning in Rust’s early days — the idea that syntax should carry its weight by bringing unique expressive power — has kept Rust from the fate of languages that suffered from debilitating addictions to new syntax and concomitant complexity overdose; when there is more than one way to do it for absolutely everything, a language becomes so baroque as to become write-only!

8. paste!

This is a small detail, but one that took me a little while to find. As I described in my blog entry two years ago, I have historically made heavy use of the C preprocessor. One (arcane) example of this is the ## token concatenation operator, which I have needed only rarely — but found essential in those moments. (Here’s a concrete example.) As part of a macro that I was developing, I found that I needed the equivalent for Rust, and was delighted to find David Tolnay’s paste crate. paste! was exactly what I needed — and further testament to both the singular power of Rust’s macro system and David’s knack for building singularly useful things with it!

9. unsafe

A great strength of Rust is its safety — but something I also appreciate about it is the escape hatch offered via unsafe, whereby certain actions are permitted that are otherwise disallowed. It should go without saying that one should not use unsafe without good reason — but such good reasons can and do exist, and I appreciate that Rust trusts the programmer enough to allow them to take their safety into their own hands. Speaking personally, most of my own uses of unsafe have boiled down to accesses to register blocks on a microcontroller: on the one hand, unsafe because they dereference arbitrary memory — but on the other, safe by inspection. That said, the one time I had to write unsafe code that actually felt dangerous (namely, in dealing with an outrageously unsafe C library), I was definitely in a heightened state of alert! Indeed, my extreme caution around unsafe code reflects how much Rust has changed my disposition: after nearly three decades working in C, I thought I appreciated its level of unsafety, but the truth is I had just become numb to it; to implement in Rust is to eat the fruit from the tree of knowledge of unsafe programs — and to go back to unsafe code is to realize that you were naked all along!

10. Multi-platform support

When Steve Klabnik joined Oxide, we got not only an important new addition to the team, but a new platform as well: Steve is using Windows as his daily driver, in part because of his own personal dedication to keeping Rust multi-platform. While I’m not sure that anything could drive me personally to use Windows (aside: MS-DOS robbed me of my childhood), I do strongly believe in platform heterogeneity. I love that Rust forces the programmer to really think about implicitly platform-specific issues: Rust refuses to paper over the cracks in computing’s foundation for the sake of expediency. If this can feel unnecessarily pedantic (can’t I just have a timestamp?!), it is in multi-platform support where this shines: software that I wrote just… worked on Windows. (And where it didn’t, it was despite Rust’s best efforts: when a standard library gives you first-class support to abstract the path separator, you have no one to blame but yourself if you hard-code your own!)

Making and keeping Rust multi-platform is hard work for everyone involved; but as someone who is currently writing Rust for multiple operating systems (Linux, illumos and — thanks to Steve — Windows) and multiple ISAs (e.g., x86-64, ARM Thumb-2), I very much appreciate that this is valued by the Rust community!

11. anyhow! + RUST_BACKTRACE

In my original piece, I praised the error handling of Rust, and that is certainly truer than ever: I simply cannot imagine going back to a world without algebraic types for error handling. The challenge that remained was that there were several conflicting crates building different error types and supporting routines, resulting in some confusion as to best practice. All of this left me — like many — simply rolling my own via Box&lt;dyn Error&gt;, which works well enough, but it doesn’t really help with a thorny question: when an error emerges deep within a stack of composed software, where did it actually come from?

Enter David Tolnay (again!) and his handy anyhow! crate, which pulls together best practices and ties that into the improvements in the std::error::Error trait to yield a crate that is powerful without being imposing. Now, when an error emerges from within a stack of software, we can get a crisp chain of causality, e.g.:

readmem failed: A core architecture specific error occurred

Caused by:
    0: Failed to read register CSW at address 0x00000000
    1: Didn't receive any answer during batch processing: [Read(AccessPort(0), 0)]

And we can set RUST_BACKTRACE to get a full backtrace where an error actually originates — which is especially useful when a failure emerges from a surprising place, like this one from a Drop implementation in probe-rs:

Stack backtrace:
   0: probe_rs::probe::daplink::DAPLink::process_batch
   1: probe_rs::probe::daplink::DAPLink::batch_add
   2: ::read_register
   3: probe_rs::architecture::arm::communication_interface::ArmCommunicationInterface::read_ap_register
   4: probe_rs::architecture::arm::memory::adi_v5_memory_interface::ADIMemoryInterface::read_word_32
   5: <probe_rs::architecture::arm::memory::adi_v5_memory_interface::ADIMemoryInterface as probe_rs::memory::MemoryInterface>::read_word_32
   6: ::get_available_breakpoint_units
   7: <core::iter::adapters::ResultShunt<I> as core::iter::traits::iterator::Iterator>::next
   8: <alloc::vec::Vec as alloc::vec::SpecFromIter>::from_iter
   9: ::drop
  10: core::ptr::drop_in_place
  11: main
  12: std::sys_common::backtrace::__rust_begin_short_backtrace
  13: std::rt::lang_start::{{closure}}
  14: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
  15: main
  16: __libc_start_main
   17: _start

12. asm!

When writing software at the hardware/software interface, there is inevitably some degree of direct machine interaction that must be done via assembly. Historically, I have done this via dedicated .s files — which are inconvenient, but explicit.

Over the years, compilers added the capacity to drop assembly into C, but the verb here is apt: the resulting assembly was often dropped on its surrounding C like a Looney Tunes anvil, with the interface between the two often being ill-defined, compiler-dependent or both. Rust took this approach at first too, but it suffered from all of the historical problems of inline assembly — which in Rust’s case meant being highly dependent on LLVM implementation details. This in turn meant that it was unlikely to ever become stabilized, which would relegate those who need inline assembly to forever be on nightly Rust.

Fortunately, Amanieu d’Antras took on this gritty problem, and landed a new asm! syntax. The new syntax is a pleasure to work with, and frankly Rust has now leapfrogged C in terms of ease and robustness of integrating inline assembly!

13. String continuations

Okay, this is another tiny one, but meaningful for me and one that took me too long to discover. So first, something to know about me: I am an eighty column purist. For me, this has nothing to do with punchcards or whatnot, but rather with type readability, which tends to result in 50-100 characters per line — and generally about 70 or so. (I would redirect rebuttals to your bookshelf, where most any line of most any page of most any book will show this to be more or less correct.) So I personally embrace the “hard 80”, and have found that the rework that that can sometimes require results in more readable, better factored code. There is, however, one annoying exception to this: when programmatically printing a string that is itself long, one is left with much less horizontal real estate to work with! In C, this is a snap: string literals without intervening tokens are automatically concatenated, so the single literal can be made by multiple literals across multiple lines. But in Rust, string literals can span multiple lines (generally a feature!), so splitting the line will also embed the newline and any leading whitespace. e.g.:

    println!(
        "...government of the {p}, by the {p}, for the {p},
        shall not perish from the earth.",
        p = "people"
    );

Results in a newline and some leading whitespace that represent the structure of the program, not the desired structure of the string:

...government of the people, by the people, for the people,
        shall not perish from the earth.

I have historically worked around this by using the concat! macro to concatenate two (or more) static strings, which works well enough, but looks pretty clunky, e.g.:

    println!(
        concat!(
            "...government of the {p}, by the {p}, for the {p}, ",
            "shall not perish from the earth."
        ),
        p = "people"
    );

As it turns out, I was really overthinking it, though it took an embarrassingly long time to discover: Rust has support for continuation of string literals! If a line containing a string literal ends in a backslash, the literal continues on the next line, with the newline and any leading whitespace elided. This is one of those really nice things that Rust lets us have; the above example becomes:

    println!(
        "...government of the {p}, by the {p}, for the {p}, \
        shall not perish from the earth.",
        p = "people"
    );

So much cleaner!

14. --pretty=expanded and cargo expand

In C — especially C that makes heavy use of the preprocessor — the -E option can be invaluable: it stops the compilation after the preprocessing phase and dumps the result to standard output. Rust, as it turns out, has an equivalent in the --pretty=expanded unstable compiler option. The output of this can be a little hard on the eyes, so you want to send it through rustfmt — but the result can be really enlightening as to how things actually work. Take, for example, the following program:

fn main() {
    println!("{} has been quite a year!", 2020);
}

Here is the --pretty=expanded output:

$ rustc -Z unstable-options --pretty=expanded year.rs | rustfmt --emit stdout
#![feature(prelude_import)]
#![no_std]
#[prelude_import]
use std::prelude::v1::*;
#[macro_use]
extern crate std;
fn main() {
    {
        ::std::io::_print(::core::fmt::Arguments::new_v1(
            &["", " has been quite a year!\n"],
            &match (&2020,) {
                (arg0,) => [::core::fmt::ArgumentV1::new(
                    arg0,
                    ::core::fmt::Display::fmt,
                )],
            },
        ));
    };
}

As an aside, format_args! is really magical — and a subject that really merits its own blog post from someone with more expertise on the subject. (Yes, this is the Rust blogging equivalent of Chekhov’s gun!)

With so many great David Tolnay crates, it’s fitting we end on one final piece of software from him: cargo expand is a pleasant wrapper around --pretty=expanded that (among other things) allows you to only dump a particular function.

The perfect marriage?

All of this is not to say that Rust is perfect; there are certainly some minor annoyances (rustfmt: looking at you!), and some forthcoming features that I eagerly await (e.g., safe transmutes, const generics). And in case it needs to be said: just because Rust makes it easier to write robust software doesn’t mean that it makes it impossible to write shoddy software!

Dwelling on the imperfections, though, would be a mistake. When getting into a long-term relationship with anything — be it a person, or a company, or a technology — it can be tempting to look at its surface characteristics: does this person, or company or technology have attributes that I do or don’t like? And those are important, but they can be overemphasized: because things change over time, we sometimes look too much at what things are rather than what guides them. And in this regard, my relationship with Rust feels particularly sound: it feels like my values and Rust’s values are a good fit for one another — and that my growing relationship with Rust will be one of the most important of my career!

On Sunday afternoon, I was on the phone with one of my Oxide co-founders, Steve Tuck. He and I were both trying to grapple with the brazen state-sponsored violence that we were witnessing: the murder of George Floyd and the widespread, brutal, and shameless suppression of those who were demonstrating against it.

Specifically, we were struggling with a problem that (bluntly) a lot of white people struggle with: how to say what when. An earlier conversation with Steve had inspired me to publicly say something that I have long believed: that on social media, you should amplify your listening by following voices that you don’t otherwise hear. I had tweeted that out, and Steve and I were talking about it. In particular, Steve was wondering about telling people to see 13th, a 2016 documentary by Ava DuVernay that he had found very moving when he had seen it last year. I had taken my kids to see DuVernay’s superlative Selma years prior, and I had heard about 13th (and I had recalled my wife saying she wanted to watch it), but it was — like many things — “in the queue.” Steve was emphatic though: “you need to watch it.”

If I may, an aside: we — we all, but especially white Americans — run away from difficult subjects. With our viewing habits in particular, there are things that we know we absolutely should watch that we just… don’t. The reasons are often mundane: because the time isn’t right; because we’re tired; because we want to be uplifted; because we’re ashamed. For my wife and me, this was Hotel Rwanda, a Netflix DVD (!) that we had sitting on top of the DVD player for literally years, not having the heart to send it back — and yet never in the mood to watch it.

Fortunately, 13th was not to share Hotel Rwanda’s fate: not only is Steve persuasive, it felt especially apropos as my wife and I were looking for ways to talk about the George Floyd murder and subsequent uprising with our kids — so I got off the phone with Steve, my wife and I got the family together (pretty easy when living in self-isolation!), and we watched it.

I don’t know that I’ve ever seen a documentary as important. More than anything, it is revealing: it connects dots that I didn’t realize were connected — and it shows, with a stunning diversity of voices (Angela Davis and Newt Gingrich?!), just how deeply racist our criminal justice system is. Not that it isn’t also horrifying in its revelations: my kids gasped several times, including learning of the stunning growth in incarceration — and of the jaw-dropping fraction of inmates that never stood trial for their crime. And there was plenty of horrifying education beyond the figures; I knew a prison/industrial complex was out there, but I never imagined that it could be such a deliberate, pernicious octopus.

I’m not going to spoil any more of it for you; if you haven’t already seen it, you need to watch 13th — and you need to watch it now.

Actually, I’m going to phrase it even more bluntly: it is just 100 minutes (very deliberately not longer, as it turns out; watch the excellent conversation between Ava DuVernay and Oprah Winfrey for details); you more or less can’t leave your home; and America is boiling over on this very issue. So let’s put it another way: if you are unwilling to watch this now, be honest with yourself and realize that you are never going to watch it. And if you are never going to watch it, please spare us all the black squares and the trending hashtags: if you do not have 100 minutes to give to this most important topic at this most critical moment, you are not actually willing to do anything at all.

So you need to watch it, but is watching a documentary really the answer? Well, no, of course not — but that isn’t to say film can’t change the collective mind: The Day After famously informed Ronald Reagan’s views on the dangers of nuclear war, at a(nother) time when humanity felt perilously close to the brink. And I do believe that 13th could have that kind of power — so I would ask you to not only watch it, but use your voice (and, for many of you, your privilege) to get others to do the same.

Once you’ve watched it — and once you’ve gotten your friends and family to watch it — the tough work begins: we need to not merely reform this system, we need to rethink it entirely. And for this, you want to get your resources to the non-profits taking this on (there are a bunch out there; one in particular that I would recommend is The Equal Justice Initiative). Thanks to Ava DuVernay for making such a singularly important film — and thank you in advance for doing her (and us all!) the service of watching it!

Over the summer, I described preparing for my next expedition. I’m thrilled to announce that the expedition is now plotted, the funds are raised, and the bags are packed: together with Steve Tuck and Jess Frazelle, we have started Oxide Computer Company.

Starting a computer company may sound crazy (and you would certainly be forgiven a double-take!), but it stems from a belief that I hold in my marrow: that hardware and software should each be built with the other in mind. For me, this belief dates back a quarter century: when I first came to Sun Microsystems in the mid-1990s, it was explicitly to work on operating system kernel development at a computer company — at a time when that very idea was iconoclastic. And when we started Fishworks a decade later, the belief in fully integrated software and hardware was so deeply rooted into our endeavor as to be eponymous: it was the “FISH” in “Fishworks.” In working at a cloud computing company over the past decade, economic realities forced me to suppress this belief to a degree — but it now burns hotter than ever after having endured the consequences of a world divided: in running a cloud, our most vexing problems emanated from the deepest bowels of the stack, when hardware and (especially) firmware operated at cross purposes with our systems software.

As I began to think about what was next, I was haunted by the pain and futility of trying to build a cloud with PC-era systems. At the same time, seeing the kinds of solutions that the hyperscalers had developed for themselves had always left me with equal parts admiration and frustration: their rack-level designs are a clear win — why are these designs cloistered among so few? And even in as much as the hardware could be found through admirable efforts like the Open Compute Project, the software necessary to realize its full potential has remained cruelly unavailable.

Alongside my inescapable technical beliefs has been a commercial one: even as the world is moving (or has moved) to elastic, API-driven computing, there remain good reasons to run on one’s own equipment! Further, as cloud-borne SaaS companies mature from being strictly growth focused to being more margin focused, it seems likely that more will consider buying machines instead of always renting them.

It was in the confluence of these sentiments that an idea began to take shape: the world needed a company to develop and deliver integrated, hyperscaler-class infrastructure to the broader market — that we needed to start a computer company. The “we” here is paramount: in Steve and Jess, I feel blessed to not only share a vision of our future, but to have diverse perspectives on how infrastructure is designed, built, sold, operated and run. And most important of all (with the emphasis itself being a reflection of hard-won wisdom), we three share deeply-held values: we have the same principled approach, with shared aspirations for building the kind of company that customers will love to buy from — and employees will be proud to work for.

Together, as we looked harder at the problem, we saw the opportunity more and more clearly: the rise of open firmware and the broadening of the Open Compute Project made this more technically feasible than ever; the sharpening desire among customers for a true cloud-like on-prem experience (and the neglect those customers felt in the market) made it more in demand than ever. With accelerating conviction that we would build a company to do this, we needed a name — and once we hit on Oxide, we knew it was us: oxides form much of the earth’s crust, giving a connotation of foundation; silicon, the element that is the foundation of all of computing, is found in nature in its oxide; and (yes!) iron oxide is also known as Rust, a programming language we see playing a substantial role for us. Were there any doubt, that Oxide can also be pseudo-written in hexadecimal — as 0x1de — pretty much sealed the deal!

There was just one question left, and it was an existential one: could we find an investor who saw what we saw in Oxide? Fortunately, the answer to this question had been emphatic and unequivocal: in the incredible team at Eclipse Ventures, we found investors that not only understood the space and the market, but also the challenges of solving hard technical problems. And we are deeply honored to have Eclipse’s singular Pierre Lamond joining us on our board; we can imagine no better start for a new computer company!

So while there is a long and rocky path ahead, we are at last underway on our improbable journey! If you haven’t yet, read Jess’s blog on Oxide being born in a garage. If you find yourself battling the problems we’re aiming to fix, please join our mailing list. If you are a technologist who feels this problem in your bones as we do, consider joining us. And if nothing else, if you would like to hear some terrific stories of life at the hardware/software interface, check out our incredible podcast On the Metal!

When I was first interviewing with Joyent in July 2010, I recall telling then-CTO Mark Mayo that I was trying to make a decision for the next seven years of my career. Mark nodded sagely at this, assuring me that Joyent was the right move. Shortly after coming to Joyent, I became amazed that Mark had managed to keep a straight face during our conversation: as a venture-funded startup, Joyent lived on a wildly accelerated time scale; seven years into the future for a startup is like seventy for a more established company. (Or, to put it more bluntly, startups with burn and runway are generally default dead and very unlikely to live to see their seventh birthday.)

But Mark also wasn’t wrong: again and again, Joyent beat the odds, in part because of the incredible team that our idiosyncratic company attracted. We saw trends like node.js, containers and data-centric computing long before our peers, attracting customers that were themselves forward-looking technologists. This made for an interesting trip, but not a smooth one: being ahead of the market is as much of a curse as a blessing, and Joyent lived many different lives over the past nine years. Indeed, the company went through so much that somewhere along the way one of our colleagues observed that the story of Joyent could only be told as a musical — an observation so profoundly true that any Joyeur will chuckle to themselves as they are reminded of it.

During one period of particularly intense change, we who endured developed a tradition befitting a company whose story needs musical theater to tell it: at company-wide gatherings, we took to playing a game that we called “ex-Joyeur.” We would stand in a circle, iterating around it; upon one’s turn, one must name an ex-Joyeur who has not already been named — or sit down. Lest this sound like bad attitude, it in fact wasn’t: it was an acknowledgement of the long strange trip that any venture-funded startup takes — a celebration of the musical’s exhaustive and curious dramatis personæ, and a way to remember those who had been with us at darker hours. (And in fact, some of the best players at ex-Joyeur were newer employees who managed to sleuth out long-forgotten colleagues!) Not surprisingly, games of ex-Joyeur were routinely interrupted with stories of the just-named individual — often fond, but always funny.

I bring all of this up because today is my last day at Joyent. After nine years, I will go from a Joyeur to an ex-Joyeur — from a player to a piece on the board. I have had too many good times to mention, and enough stories to last several lifetimes. I am deeply grateful for the colleagues and communities with whom I shared acts in the musical. So many have gone on to do terrific things; I know our paths will cross again, and I already look forward to the reunion. And to those customers who took a chance on us — and especially to Samsung, who acquired Joyent in 2016 — thank you, from the bottom of my heart for believing in us; I hope we have done right by you.

As for me personally, I’m going to take slightly more of a break than the three days I took in 2010; as a technologist, I am finding myself as excited as ever by snow-capped mountains on a distant horizon, and I look forward to taking my time to plot my next seven-year expedition!

Long ago as an undergraduate, I found myself back home on a break from school, bored and with eyes wandering idly across a family bookshelf. At school, I had started to find a calling in computing systems, and now in the den, an old book suddenly caught my eye: Tracy Kidder’s The Soul of a New Machine. Taking it off the shelf, the book grabbed me from its first descriptions of Tom West, captivating me with the epic tale of the development of the Eagle at Data General. I — like so many before and after me — found the book to be life changing: by telling the stories of the people behind the machine, the book showed the creative passion among engineers that might otherwise appear anodyne, inspiring me to chart a course that might one day allow me to make a similar mark.

Since reading it over two decades ago, I have recommended The Soul of a New Machine at essentially every opportunity, believing that it is a part of computing’s literary foundation — that it should be considered our Odyssey. Recently, I suggested it as beach reading to Jess Frazelle, and apparently with perfect timing: when I saw the book at the top of her vacation pile, I knew a fuse had been lit. I was delighted (though not at all surprised) to see Jess livetweet her admiration of the book, starting with the compelling prose, the lucid technical explanations and the visceral anecdotes — but then moving on to the deeper technical inspiration she found in the book. And as she reached the book’s crescendo, Jess felt its full power, causing her to reflect on the nature of engineering motivation.

Excited to see the effect of the book on Jess, I experienced a kind of reflected recommendation: I was inspired to (re-)read my own recommendation! Shortly after I started reading, I began to realize that (contrary to what I had been telling myself over the years!) I had not re-read the book in full since that first reading so many years ago. Rather, over the years I had merely revisited those sections that I remembered fondly. On the one hand, these sections are singular: the saga of engineers debugging a nasty I-cache data corruption issue; the young engineer who implements the simulator in an impossibly short amount of time because no one wanted to tell him that he was being impossibly ambitious; the engineer who, frustrated with a nanosecond-scale timing problem in the ALU that he designed, moved to a commune in Vermont, claiming a desire to deal with “no unit of time shorter than a season”. But by limiting myself to these passages, I was succumbing to the selection bias of my much younger self; re-reading the book now from start to finish has given new parts depth and meaning. Aspects that were more abstract to me as an undergraduate — from the organizational rivalries and absurdities of the industry to the complexities of West’s character and the tribulations of the team down the stretch — are now deeply evocative of concrete episodes of my own career.

As a particularly visceral example: early in the book, a feud between two rival projects boils over in an argument at a Howard Johnson’s that becomes known among the Data General engineers as “the big shoot-out at HoJo’s.” Kidder has little more to say about it (the organizational civil war serves merely as a backdrop for Eagle), and I don’t recall this having any effect on me when I first read it — but reading it now, it resonates with a grim familiarity. In any engineering career of sufficient length, a beloved project will at some point be shelved or killed — and the moment will be sufficiently traumatic to be seared into collective memory and memorialized in local lore. So if you haven’t yet had your own shoot-out at HoJo’s, it is regrettably coming; may your career be blessed with few such firefights!

Another example of a passage with new relevance pertains to Tom West developing his leadership style on Eagle:

That fall West had put a new term in his vocabulary. It was trust. “Trust is risk, and risk avoidance is the name of the game in business,” West said once, in praise of trust. He would bind his team with mutual trust, he had decided. When a person signed up to do a job for him, he would in turn trust that person to accomplish it; he wouldn’t break it down into little pieces and make the task small, easy and dull.

I can’t imagine that this paragraph really affected me much as an undergraduate, but reading it now I view it as a one-paragraph crash-course in engineering leadership; those who deeply internalize it will find themselves blessed with high-functioning, innovative engineering teams. And lest it seem obvious, it is in fact more subtle than it looks: West is right that trust is risk — and risk-averse organizations can really struggle to foster mutual trust, despite its obvious advantages.

This passage also serves as a concrete example of the deeper currents that the book captures: it is not merely about the specific people or the machine they built, but about why we build things — especially things that so ardently resist being built. Kidder doesn’t offer a pat answer, though the engineers in the book repeatedly emphasize that their motivations are not so simple as ego or money. In my experience, engineers take on problems for lots of reasons: the opportunity to advance the state of the art; the potential to innovate; the promise of making a difference in everyday lives; the opportunity to learn about new technologies. Over the course of the book, each of these can be seen at times among the Eagle team. But, in my experience (and reflected in Kidder’s retelling of Eagle), when the problem is challenging and success uncertain, the endurance of engineers cannot be explained by these factors alone; regardless of why they start, engineers persist because they are bonded by a shared mission. In this regard, engineers endure not just for themselves, but for one another; tackling thorny problems is very much a team sport!

More than anything, my re-read of the book leaves me with the feeling that on teams that have a shared mission, mutual trust and ample grit, incredible things become possible. Over my career, I have had the pleasure of experiencing this several times, and they form my fondest memories: teams in which individuals banded together to build something larger than the sum of themselves. So The Soul of a New Machine serves to remind us that the soul of what we build is, above all, shared — that we do not endeavor alone but rather with a group of like-minded individuals.

Thanks to Jess for inspiring my own re-read of this classic — and may your read (or re-read!) of the book be as invigorating!

There was a tremendous amount of reaction to and discussion about my blog entry on the midlife crisis in open source. As part of this discussion on HN, Jay Kreps of Confluent took the time to write a detailed response — which he shortly thereafter elevated into a blog entry.

Let me be clear that I hold Jay in high regard, as both a software engineer and an entrepreneur — and I appreciate the time he took to write a thoughtful response. That said, there are aspects of his response that I found troubling enough to closely re-read the Confluent Community License — and that in turn has led me to a deeply disturbing realization about what is potentially going on here.

Here is what Jay said that I found troubling:

The book analogy is not accurate; for starters, copyright does not apply to physical books and intangibles like software or digital books in the same way.

Now, what Jay said is true to a degree in that (as with many different kinds of expression) software has code specific to it; this can be found in 17 U.S.C. § 117. But the fact that Jay also made reference to digital books was odd; digital books really have nothing to do with software (or no more so than any other kind of creative expression). That said, digital books and proprietary software do actually share one thing in common, though it’s horrifying: in both cases their creators have maintained that you don’t actually own the copy you paid for. That is, unlike a physical book, you don’t actually buy a copy of a digital book; you merely acquire a license to use the book under their terms. But how do they do this? Because when you access the digital book, you click “agree” on a license — an End User License Agreement (EULA) — that makes clear that you don’t actually own anything. The exact language varies; take (for example) VMware’s end user license agreement:

2.1 General License Grant. VMware grants to You a non-exclusive, non-transferable (except as set forth in Section 12.1 (Transfers; Assignment) license to use the Software and the Documentation during the period of the license and within the Territory, solely for Your internal business operations, and subject to the provisions of the Product Guide. Unless otherwise indicated in the Order, licenses granted to You will be perpetual, will be for use of object code only, and will commence on either delivery of the physical media or the date You are notified of availability for electronic download.

That’s a bit wordy and oblique; in this regard, Microsoft’s Windows 10 license is refreshingly blunt:

(2)(a) License. The software is licensed, not sold. Under this agreement, we grant you the right to install and run one instance of the software on your device (the licensed device), for use by one person at a time, so long as you comply with all the terms of this agreement.

That’s pretty concise: “The software is licensed, not sold.” So why do this at all? EULAs are an attempt to get out of copyright law — where the copyright owner is quite limited in the rights afforded to them as to how the content is consumed — and into contract law, where there are many fewer such limits. And EULAs have accordingly historically restricted (or tried to restrict) all sorts of uses like benchmarking, reverse engineering, running with competitive products (or, say, being used by a competitor to make competitive products), and so on.

Given the onerous restrictions, it is not surprising that EULAs are very controversial. They are also legally dubious: when you are forced to click through or (as it used to be back in the day) forced to unwrap a sealed envelope on which the EULA is printed to get to the actual media, it’s unclear how much you are actually “agreeing” to — and it may be considered a contract of adhesion. And this is just one of many legal objections to EULAs.

Suffice it to say, EULAs have long been considered open source poison, so with Jay’s frightening reference to EULA’d content, I went back to the Confluent Community License — and proceeded to kick myself for having missed it all on my first quick read. First, there’s this:

This Confluent Community License Agreement Version 1.0 (the “Agreement”) sets forth the terms on which Confluent, Inc. (“Confluent”) makes available certain software made available by Confluent under this Agreement (the “Software”). BY INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF THE SOFTWARE, YOU AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO SUCH TERMS AND CONDITIONS, YOU MUST NOT USE THE SOFTWARE. IF YOU ARE RECEIVING THE SOFTWARE ON BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE ACTUAL AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT ON BEHALF OF SUCH ENTITY.

You will notice that this looks nothing like any traditional source-based license — but it is exactly the kind of boilerplate that you find on EULAs, terms-of-service agreements, and other contracts that are being rammed down your throat. And then there’s this:

1.1 License. Subject to the terms and conditions of this Agreement, Confluent hereby grants to Licensee a non-exclusive, royalty-free, worldwide, non-transferable, non-sublicenseable license during the term of this Agreement to: (a) use the Software; (b) prepare modifications and derivative works of the Software; (c) distribute the Software (including without limitation in source code or object code form); and (d) reproduce copies of the Software (the “License”).

On the one hand, this looks like the opening of open source licenses like (say) the Apache Public License (albeit missing important words like “perpetual” and “irrevocable”), but the next two sentences are the difference that is the focus of the license:

Licensee is not granted the right to, and Licensee shall not, exercise the License for an Excluded Purpose. For purposes of this Agreement, “Excluded Purpose” means making available any software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online service that competes with Confluent products or services that provide the Software.

But how can you later tell me that I can’t use my copy of the software because it competes with a service that Confluent started to offer? Or is that copy not in fact mine? This is answered in section 3:

Confluent will retain all right, title, and interest in the Software, and all intellectual property rights therein.

Okay, so my copy of the software isn’t mine at all. On the one hand, this is (literally) proprietary software boilerplate — but I was given the source code and the right to modify it; how do I not own my copy? Again, proprietary software is built on the notion that — unlike the book you bought at the bookstore — you don’t own anything: rather, you license the copy that is in fact owned by the software company. And again, as it stands, this is dubious, and courts have ruled against “licensed, not sold” software. But how can a license explicitly allow me to modify the software and at the same time tell me that I don’t own the copy that I just modified?! And to be clear: I’m not asking who owns the copyright (that part is clear, as it is for open source) — I’m asking who owns the copy of the work that I have modified? How can one argue that I don’t own the copy of the software that I downloaded, modified and built myself?!

This prompts the following questions, which I also asked Jay via Twitter:

  1. If I git clone software covered under the Confluent Community License, who owns that copy of the software?
  2. Do you consider the Confluent Community License to be a contract?
  3. Do you consider the Confluent Community License to be a EULA?

To Confluent: please answer these questions, and put the answers in your FAQ. Again, I think it’s fine for you to be an open core company; just make this software proprietary and be done with it. (And don’t let yourself be troubled by the fact that it was once open source; there is ample precedent for reproprietarizing software.) What I object to (and what I think others object to) is trying to be at once open and proprietary; you must pick one.

To GitHub: Assuming that this is in fact a EULA, I think it is perilous to allow EULAs to sit in public repositories. It’s one thing to have one click through to accept a license (though again, that itself is dubious), but to say that a git clone is an implicit acceptance of a contract that happens to be sitting somewhere in the repository beggars belief. With efforts like choosealicense.com, GitHub has been a model in guiding projects with respect to licensing; it would be helpful for GitHub’s counsel to weigh in on their view of this new strain of source-available proprietary software and the degree to which it comes into conflict with GitHub’s own terms of service.

To foundations concerned with software liberties, including the Apache Foundation, the Linux Foundation, the Free Software Foundation, the Electronic Frontier Foundation, the Open Source Initiative, and the Software Freedom Conservancy: the open source community needs your legal review on this! I don’t think I’m being too alarmist when I say that this is potentially a dangerous new precedent being set; it would be very helpful to have your lawyers offer their perspectives on this, even if they disagree with one another. We seem to be in some terrible new era of frankenlicenses, where the worst of proprietary licenses are bolted on to the goodwill created by open source licenses; we need your legal voices before these creatures destroy the village!
