Twitter Spaces, a few weeks in

As a kid, I listened to a lot of talk radio. This was in the 80s, before the internet — and before the AM dial became fringe. I have fond memories of falling asleep to the likes of Bruce Williams, who just gave damned good, level-headed advice. It was, at its essence, both optimistic and temperate: a cool head to help people work through a tough spot.

Nothing really replaced the call-in show: talk radio devolved into poisonous echo chambers, while social networking gave people other outlets (too many!) to have conversations. But these online conversations — the written word — lack something: Twitter’s tight form gives us the hot take, blogs (ahem!) give us the longer form, but in both cases, the conversations that result seem to either lose their sizzle or go thermonuclear. Video is really great for some stuff (I have spoken about the rise of video and the preservation of oral tradition in software engineering) — but the discussion quality there is even worse. (When did YouTube comments become such a hellhole?!) And of course, TikTok has given us social, short-form video, which can be very entertaining, but isn’t really designed to induce any meaningful discussion.

And of course, I love podcasts, and one of the reasons I was excited to start a company was to get an excuse to make the podcast that we ourselves always wanted. Podcasts are particularly great because you can do something else while listening to them: they fill in that time when you’re doing the dishes or picking up the kids or whatever. But podcasts obviously miss the interactivity entirely. (Or mostly: let us not so quickly forget TWiV reading my letter on-air!)

Into all of this, enter Clubhouse and the rise of social audio. I had immediate Bruce Williams flashbacks, and was excited to participate. But my choice of device makes me unwelcome in Clubhouse’s eyes: their lack of an Android app is clearly not a mere temporary gap, but rather a deliberate deprioritization. (I certainly honor a young company’s need to focus, but how else can one describe an exceedingly well-capitalized company adding in-app payments before addressing 75% of the market?!) To me, the deprioritization of Android reflects Clubhouse’s deeper problem: it is fundamentally elitist and exclusionary. Avid Clubhouse users may claim that this was not always so, but it says it right there on the tin: it is, at the end of the day, a clubhouse — not a cafe or a town square. I personally have no interest in participating in venues that deliberately limit the participants: I want to engage with the broadest possible cross-section, not some subset that has been manicured by circumstance or technology choice. (For this same reason, I do not give talks unless the talk will be recorded and made freely available.)

Given all of this, you can imagine my enthusiasm for Twitter Spaces: it captures the promise that I see in the medium — but addresses many of the problems with Clubhouse’s implementation. I have been participating in them for the past few weeks and (excitingly!) found last week that I have the ability to create Spaces. Here are my takeaways so far:

All in all, very positive and promising! Of course, there are still lots of things to stub your toe on; this is still new, after all, and lots of stuff doesn’t work quite right. And there’s also a lot to be figured out by those of us who will use the medium (reminder: retweets were invented by users, not by Twitter!). For my part, from a hosting perspective, I am going to take some inspiration from talk radio and experiment with both regularity and time-bounds: Adam and I are going to host a Space again tomorrow (Monday) at 5p Pacific, keeping it again to about an hour. (We appear to have been aided in our time bounds last week by a memory leak that caused my app to abort after about an hour!) So if you’re interested, drop on by — we’d love to hear what you have to say, which indeed is very much the whole point!

Posted on May 2, 2021 at 7:15 pm by bmc · Permalink · Comments Closed
In: Uncategorized

Compensation as a reflection of values

Compensation: the word alone is enough to trigger a fight-or-flight reaction in many. But we in technology have the good fortune of being in a well-compensated domain, so why does this issue induce such anxiety when our basic needs are clearly covered? If it needs to be said, it’s because compensation isn’t merely about the currency we redeem in exchange for our labors, but rather it is a proxy for how we are valued in a larger organization. This, in turn, brings us to our largest possible questions for ourselves, around things like meaning and self-worth.

So when we started Oxide — as in any new endeavor — compensation was an issue we had to deal with directly. First, there was the thorny issue of how we founders would compensate ourselves. Then, of course, came the team we wished to hire: hybrid local and remote, largely experienced to start (on account of Oxide’s outrageously ambitious mission), and coming from a diverse set of backgrounds and experiences. How would we pay people in different geographies? How could we responsibly recruit experienced folks, many of whom have families and other financial obligations that can’t be addressed with stock options? How could we avoid bringing people’s compensation history — often a reflection of race, gender, class, and other factors rather than capability — with them?

We decided to do something outlandishly simple: take the salary that Steve, Jess, and I were going to pay ourselves, and pay that to everyone. The three of us live in the San Francisco Bay Area, and Steve and I each have three kids; we knew that the dollar figure that would allow us to live without financial distress — which we put at $175,000 a year — would be at least universally adequate for the team we wanted to build. And we mean everyone literally: as of this writing we have 23 employees, and that’s what we all make.

Now, because compensation is the hottest of all hot buttons, it can be fairly expected that many people will have a reaction to this. Assuming you’ve made it to this sentence, it means you are not already lighting us up in your local comments section (thank you!), and I want to promise in return that we know the likely objections, and we’ll address them. But before we do, we want to talk about the benefits of transparent uniform compensation, because they are, in a word, profound.

Broadly, our compensation model embodies our mission, principles, and values. First and foremost, we believe that our compensation model reflects our principles of honesty, integrity, and decency. To flip it around: sadly, we have seen extant comp structures in the industry become breeding grounds for dishonesty, deceit, and indecency. Beyond our principles, our comp model is a tangible expression of several of our values in particular:

These are (some of!) the overwhelming positives; what about those objections?

Of these objections, several are of the ilk that this cannot endure at arbitrary scale. This may be true — our compensation may well not be uniform in perpetuity — but we believe wholeheartedly that our values will endure. So if and when the uniformity of our compensation needs to change, we fully expect that it will remain transparent — and that we as a team will discuss it candidly and empathetically. In this regard, we take inspiration from companies that have pioneered transparent compensation. It is very interesting to, for example, look at how Buffer’s compensation has changed over the years. Their approach is different from ours in the specifics, but they are a kindred spirit with respect to underlying values — and their success with transparent compensation gives us confidence that, whatever changes must come with time, we will be able to accommodate them without sacrificing what is important to us!

Finally, a modest correction. The $175,000 isn’t quite true — or at least not anymore. I had forgotten that when we did our initial planning, we had budgeted modest comp increases after the first year, so it turns out, we all got a raise to $180,250 in December! I didn’t know it was coming (and nor did anyone else); Steve just announced it in the All Hands: no three-hundred-and-sixty degree reviews, no stack ranking, no OKRs, no skip-levels, no numerical grades — just a few more organic raspberries in everyone’s shopping basket. Never has a change in compensation felt so universally positive!

Posted on March 4, 2021 at 10:00 pm by bmc · Permalink · Comments Closed
In: Uncategorized

Rust after the honeymoon

Two years ago, I had a blog entry describing falling in love with Rust. Of course, a relationship with a technology is like any other relationship: as novelty and infatuation wear off, it can get on a longer-term (and often more realistic and subdued) footing — or it can begin to fray. So one might well ask: how is Rust after the honeymoon?

By way of answering that, I should note that about a year ago (and a year into my relationship with Rust) we started Oxide. On the one hand, the name was no accident — we saw Rust playing a large role in our future. But on the other, we hadn’t yet started to build in earnest, so it was really more a pointed question than an assertion: where might Rust fit in a stack that stretches from the bowels of firmware through a hypervisor and control plane and into the lofty heights of REST APIs?

The short answer from an Oxide perspective is that Rust has proven to be a really good fit — remarkably good, honestly — at more or less all layers of the stack. You can expect much, much more to come from Oxide on this (we intend to open source more or less everything we’re building), but for a teaser of the scope, you can see it in the work of Oxide engineers: see Cliff’s blog, Adam and Dave’s talk on Dropshot, Jess on using Dropshot within Oxide, Laura on Rust macros, and Steve Klabnik on why he joined Oxide. (Requisite aside: we’re hiring!)

So Rust is going really well for us at Oxide, but for the moment I want to focus on more personal things — reasons that I personally have enjoyed implementing in Rust. These run the gamut: some are tiny but beautiful details that allow me to indulge in the pleasure of the craft; some are much more profound features that represent important advances in the state of the art; and some are bodies of software developed by the Rust community, notable as much for their reflection of who is attracted to Rust (and why) as for the artifacts themselves. It should also be said that I stand by absolutely everything I said two years ago; this is not a replacement for that list, but rather a supplement to it. Finally, this list is highly incomplete; there’s a lot to love about Rust and this shouldn’t be thought of as in any way exhaustive!

1. no_std

When developing for embedded systems — and especially for the flotilla of microcontrollers that surround a host CPU on the kinds of servers we’re building at Oxide — memory use is critical. Historically, C has been the best fit for these applications just because it is so lean: by providing essentially nothing other than the portable assembler that is the language itself, it avoids the implicit assumptions (and girth) of a complicated runtime. But the nothing that C provides reflects history more than minimalism; it is not an elegant nothing, but rather an ill-considered nothing that leaves those who build embedded systems building effectively everything themselves — and in a language that does little to help them write correct software.

Meanwhile, having been generally designed around modern machines with seemingly limitless resources, higher-level languages and environments are simply too full-featured to fit into (say) tens of kilobytes or into the (highly) constrained environment of a microcontroller. And even where one could cajole these other languages into the embedded use case, it has generally been as a reimplementation, leaving developers on a fork that isn’t necessarily benefiting from development in the underlying language.

Rust has taken a different approach: a rich, default standard library but also a first-class mechanism for programs to opt out of that standard library. By marking themselves as no_std, programs confine themselves to the functionality found in libcore. This functionality, in turn, makes no system assumptions — and in particular, performs no heap allocations. This is not easy for a system to do; it requires extraordinary discipline by those developing it (who must constantly differentiate between core functionality and standard functionality) and a broad empathy with the constraints of embedded software. Rust is blessed with both, and the upshot is remarkable: a safe, powerful language that can operate in the highly constrained environment of a microcontroller — with binaries every bit as small as those generated by C. This makes no_std — as Cliff has called it — the killer feature of embedded Rust, without real precedent or analogue.
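
To make that shape concrete, here is a minimal sketch of a freestanding no_std program (the entry point and panic handler are the bare minimum such a binary must supply; how it is built and linked will of course vary by target):

#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With no_std, only libcore is available: no heap, no I/O, no OS assumptions.
// A freestanding binary must therefore supply its own panic handler...
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// ...and its own entry point; real firmware would initialize hardware here.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {}
}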

2. {:#x?}

Two years ago, I mentioned that I love format!, and in particular the {:?} format specifier. What took me longer to discover was {:#?}, which formats a structure but also pretty-prints it (i.e., with newlines and indentation). This can be coupled with {:#x} to yield {:#x?} which pretty-prints a structure in hex. So this:

    println!("dumping {:#x?}", region);

Becomes this:

dumping Region {
    daddr: Some(
        0x4db8,
    ),
    base: 0x10000,
    size: 0x8000,
    attr: RegionAttr {
        read: true,
        write: false,
        execute: true,
        device: false,
        dma: false,
    },
    task: Task(
        0x0,
    ),
}

My fingers now type {:#x?} by default, and hot damn is it ever nice!

3. Integer literal syntax

Okay, another small one: I love the Rust integer literal syntax! In hardware-facing systems, we are often expressing things in terms of masks that ultimately map to binary. It is beyond me why C thought to introduce octal and hexadecimal but not binary in its literal syntax; Rust addresses this gap with the same “0b” prefix as found in some non-standard C compiler extensions. Additionally, Rust allows for integer literals to be arbitrarily intra-delimited with an underscore character. Taken together, this allows for a mask consisting of bits 8 through 10 and bit 12 (say) to be expressed as 0b0000_1011_1000_0000 — which to me is clearer as to its intent and less error prone than (say) 0xb80 or 0b101110000000.

And as long as we’re on the subject of integer literals: I also love that the types (and the suffix that denotes a literal’s type) explicitly encode bit width and signedness. Instead of dealing with the implicit signedness and width of char, short, long and long long, we have u8, u16, u32, u64, etc. Much clearer!
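
A small sketch of what this looks like in practice (the values are arbitrary):

fn main() {
    let mask = 0b0000_1011_1000_0000u16; // binary literal, grouped by nibble, explicitly a u16
    let base = 0x1_0000u32;              // hex literal with a separator, explicitly a u32
    let temp = -40i8;                    // width and signedness are explicit in the suffix
    println!("mask={:#06x} base={:#x} temp={}", mask, base, temp);
}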

4. DWARF support

Debugging software — and more generally, the debuggability of software systems — is in my marrow; it may come as no surprise that one of the things that I personally have been working on is the debugger for a de novo Rust operating system that we’re developing. To be useful, debuggers need help from the compiler in the way of type information — but this information has been historically excruciating to extract, especially in production systems. (Or as Robert phrased it concisely years ago: “the compiler is the enemy.”) And while DWARF is the de facto standard, it is only as good as the compiler’s willingness to supply it.

Given how much debuggability can (sadly) lag development, I wasn’t really sure what I would find with respect to Rust, but I have been delighted to discover thorough DWARF support. This is especially important for Rust because it (rightfully) makes extensive use of inlining; without DWARF support to make sense of this inlining, it can be hard to make any sense of the generated assembly. I have been able to use the DWARF information to build some pretty powerful Rust-based tooling — with much promise on the horizon. (You can see an early study for this work in Tockilator.)

5. Gimli and Goblin

Lest I sound like I am heaping too much praise on DWARF, let me be clear that DWARF is historically acutely painful to deal with. The specification (to the degree that one can call it that) is an elaborate mess, and the format itself seems to go out of its way to inflict pain on those who would consume it. Fortunately, the Gimli crate that consumes DWARF is really good, having made it easy to build DWARF-based tooling. (I have found that whenever I am frustrated with Gimli, I am, in fact, frustrated with some strange pedantry of DWARF — which Gimli rightfully refuses to paper over.)

In addition to Gimli, I have also enjoyed using Goblin to consume ELF. ELF — in stark contrast to DWARF — is tight and crisp (and the traditional C-based tooling for ELF is quite good), but it was nice nonetheless that Goblin makes it so easy to zing through an ELF binary.

6. Data-bearing enums

Enums — that is, the “sum” class of algebraic types — are core to Rust, and give it the beautiful error handling that I described falling in love with two years ago. Algebraic types allow much more than just beautiful error handling, e.g. Rust’s ubiquitous Option type, which allows for sentinel values to be eliminated from one’s code — and with it some significant fraction of defects. But it’s one thing to use these constructs, and another to begin to develop algebraic types for one’s own code, and I have found the ability for enums to optionally bear data to be incredibly useful. In particular, when parsing a protocol, one is often taking a stream of bytes and turning it into one of several different kinds of things; it is really, really nice to have the type system guide how software should consume the protocol. For example, here’s an enum that I defined when parsing data from ARM’s Embedded Trace Macrocell signal protocol:

#[derive(Copy, Clone, Debug)]
pub enum ETM3Header {
    BranchAddress { addr: u8, c: bool },
    ASync,
    CycleCount,
    ISync,
    Trigger,
    OutOfOrder { tag: u8, size: u8 },
    StoreFailed,
    ISyncCycleCount,
    OutOfOrderPlaceholder { a: bool, tag: u8 },
    VMID,
    NormalData { a: bool, size: u8 },
    Timestamp { r: bool },
    DataSuppressed,
    Ignore,
    ValueNotTraced { a: bool },
    ContextID,
    ExceptionExit,
    ExceptionEntry,
    PHeaderFormat1 { e: u8, n: u8 },
    PHeaderFormat2 { e0: bool, e1: bool },
}

That variants can have wildly different types (and that some can bear data while others don’t — and some can be structured, while others are tuples) allows for the type definition to closely match the specification, and helps higher-level software consume the protocol correctly.
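
As a minimal sketch of the consuming side (the handling here is hypothetical), match forces every variant, and the data it bears, to be dealt with:

fn describe(header: ETM3Header) -> String {
    match header {
        ETM3Header::BranchAddress { addr, c } => {
            format!("branch address, addr={:#x} c={}", addr, c)
        }
        ETM3Header::OutOfOrder { tag, size } => {
            format!("out-of-order data, tag={} size={}", tag, size)
        }
        ETM3Header::Timestamp { r } => format!("timestamp, r={}", r),
        other => format!("{:?}", other), // remaining variants via the derived Debug
    }
}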

7. Ternary operations

In C, the ternary operator allows for a terse conditional expression that can be used as an rvalue, e.g.:

	x = is_foo ? foo : bar;

This is equivalent to:

	if (is_foo) {
		x = foo;
	} else {
		x = bar;
	}

This construct is particularly valuable when not actually assigning to an lvalue, but when (for example) returning a value or passing a parameter. And indeed, I would estimate that a plurality — if not a majority — of my lifetime-use of the ternary operator has been in arguments to printf.

While Rust has no ternary operator per se, it is expression-oriented: constructs like if/else are themselves expressions that yield values. So the above example becomes:

	x = if is_foo { foo } else { bar };

That’s a bit more verbose than its C equivalent (though I personally like its explicitness), but it really starts to shine when things get marginally more complicated: nested ternary operators get gnarly in C, but they are easy to follow as simple nested if-then-else statements in Rust. And (of course) match is an expression as well — and I have found that I often use match where I would have used a ternary operator in C, with the added benefit that I am forced to deal with every case. As a concrete example, take this code that is printing a slice of little-endian bytes as an 8-bit, 16-bit, or 32-bit quantity depending on a size parameter:

    print!("{:0width$x} ",
        match size {
            1 => line[i - offs] as u32,
            2 => u16::from_le_bytes(slice.try_into().unwrap()) as u32,
            4 => u32::from_le_bytes(slice.try_into().unwrap()) as u32,
            _ => {
                panic!("invalid size");
            }
        },
        width = size * 2
    );

For me, this is all of the power of the ternary operator, but without its pitfalls!

An interesting footnote on this: Rust once had the C-like ternary operator, but removed it, as the additional syntax didn’t carry its weight. This pruning in Rust’s early days — the idea that syntax should carry its weight by bringing unique expressive power — has kept Rust from the fate of languages that suffered from debilitating addictions to new syntax and concomitant complexity overdose; when there is more than one way to do it for absolutely everything, a language becomes so baroque as to become write-only!

8. paste!

This is a small detail, but one that took me a little while to find. As I described in my blog entry two years ago, I have historically made heavy use of the C preprocessor. One (arcane) example of this is the ## token concatenation operator, which I have needed only rarely — but found essential in those moments. (Here’s a concrete example.) As part of a macro that I was developing, I found that I needed the equivalent for Rust, and was delighted to find David Tolnay’s paste crate. paste! was exactly what I needed — and more testament to both the singular power of Rust’s macro system and David’s knack for building singularly useful things with it!
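
To illustrate (with a hypothetical macro and struct, not the code I was actually writing), here is the kind of token pasting that paste! enables, the Rust equivalent of C's ## operator:

use paste::paste;

// Generate a struct plus get_<field>() methods whose names are pasted
// together from the field tokens.
macro_rules! struct_with_getters {
    ($name:ident { $($field:ident: $ty:ty),* $(,)? }) => {
        paste! {
            pub struct $name {
                $(pub $field: $ty,)*
            }
            impl $name {
                $(
                    pub fn [<get_ $field>](&self) -> $ty {
                        self.$field
                    }
                )*
            }
        }
    };
}

struct_with_getters!(Region { base: u32, size: u32 });

fn main() {
    let r = Region { base: 0x1_0000, size: 0x8000 };
    println!("base={:#x}, size={:#x}", r.get_base(), r.get_size());
}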

9. unsafe

A great strength of Rust is its safety — but something I also appreciate about it is the escape hatch offered via unsafe, whereby certain actions are permitted that are otherwise disallowed. It should go without saying that one should not use unsafe without good reason — but such good reasons can and do exist, and I appreciate that Rust trusts the programmer enough to allow them to take their safety into their own hands. Speaking personally, most of my own uses of unsafe have boiled down to accesses to register blocks on a microcontroller: on the one hand, unsafe because they dereference arbitrary memory — but on the other, safe by inspection. That said, the one time I had to write unsafe code that actually felt dangerous (namely, in dealing with an outrageously unsafe C library), I was definitely in a heightened state of alert! Indeed, my extreme caution around unsafe code reflects how much Rust has changed my disposition: after nearly three decades working in C, I thought I appreciated its level of unsafety, but the truth is I had just become numb to it; to implement in Rust is to eat the fruit from the tree of knowledge of unsafe programs — and to go back to unsafe code is to realize that you were naked all along!
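
As a minimal sketch of what I mean (the address and register are hypothetical), such an access is unsafe because it dereferences arbitrary memory, but safe by inspection if the address is the right one:

fn read_status_register() -> u32 {
    // A memory-mapped device register at a (hypothetical) fixed address;
    // read_volatile keeps the compiler from eliding or reordering the access.
    let reg = 0x4002_0000 as *const u32;
    unsafe { core::ptr::read_volatile(reg) }
}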

10. Multi-platform support

When Steve Klabnik joined Oxide, we got not only an important new addition to the team, but a new platform as well: Steve is using Windows as his daily driver, in part because of his own personal dedication to keeping Rust multi-platform. While I’m not sure that anything could drive me personally to use Windows (aside: MS-DOS robbed me of my childhood), I do strongly believe in platform heterogeneity. I love that Rust forces the programmer to really think about implicitly platform-specific issues: Rust refuses to paper over the cracks in computing’s foundation for the sake of expediency. If this can feel unnecessarily pedantic (can’t I just have a timestamp?!), it is in multi-platform support where this shines: software that I wrote just… worked on Windows. (And where it didn’t, it was despite Rust’s best efforts: when a standard library gives you first-class support to abstract the path separator, you have no one to blame but yourself if you hard-code your own!)
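
As a tiny sketch of what I mean (the paths are hypothetical), let the standard library insert the separator rather than hard-coding one, and the same code works everywhere:

use std::path::PathBuf;

// join inserts the platform's separator: a backslash on Windows, a forward
// slash elsewhere.
fn config_path(dir: &str) -> PathBuf {
    PathBuf::from(dir).join("oxide").join("config.toml")
}

fn main() {
    println!("{}", config_path("etc").display());
}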

Making and keeping Rust multi-platform is hard work for everyone involved; but as someone who is currently writing Rust for multiple operating systems (Linux, illumos and — thanks to Steve — Windows) and multiple ISAs (e.g., x86-64, ARM Thumb-2), I very much appreciate that this is valued by the Rust community!

11. anyhow! + RUST_BACKTRACE

In my original piece, I praised the error handling of Rust, and that is certainly truer than ever: I simply cannot imagine going back to a world without algebraic types for error handling. The challenge that remained was that there were several conflicting crates building different error types and supporting routines, resulting in some confusion as to best practice. All of this left me — like many — simply rolling my own via Box<dyn Error>, which works well enough, but it doesn’t really help a thorny question: when an error emerges deep within a stack of composed software, where did it actually come from?

Enter David Tolnay (again!) and his handy anyhow! crate, which pulls together best practices and ties that into the improvements in the std::error::Error trait to yield a crate that is powerful without being imposing. Now, when an error emerges from within a stack of software, we can get a crisp chain of causality, e.g.:

readmem failed: A core architecture specific error occurred

Caused by:
    0: Failed to read register CSW at address 0x00000000
    1: Didn't receive any answer during batch processing: [Read(AccessPort(0), 0)]
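
As a minimal sketch of how such a chain is built (the functions and messages here are hypothetical, not from probe-rs), context is attached as errors propagate upward:

use anyhow::{anyhow, Context, Result};

fn read_register(_addr: u32) -> Result<u32> {
    // a stand-in for the lower-level failure
    Err(anyhow!("Didn't receive any answer during batch processing"))
}

fn readmem(addr: u32) -> Result<u32> {
    read_register(addr)
        .with_context(|| format!("Failed to read register CSW at address {:#010x}", addr))
        .context("readmem failed")
}

fn main() {
    if let Err(e) = readmem(0x0) {
        // Debug-formatting an anyhow::Error prints the "Caused by:" chain
        eprintln!("{:?}", e);
    }
}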

And we can set RUST_BACKTRACE to get a full backtrace where an error actually originates — which is especially useful when a failure emerges from a surprising place, like this one from a Drop implementation in probe-rs:

Stack backtrace:
   0: probe_rs::probe::daplink::DAPLink::process_batch
   1: probe_rs::probe::daplink::DAPLink::batch_add
   2: ::read_register
   3: probe_rs::architecture::arm::communication_interface::ArmCommunicationInterface::read_ap_register
   4: probe_rs::architecture::arm::memory::adi_v5_memory_interface::ADIMemoryInterface::read_word_32
   5: <probe_rs::architecture::arm::memory::adi_v5_memory_interface::ADIMemoryInterface as probe_rs::memory::MemoryInterface>::read_word_32
   6: ::get_available_breakpoint_units
   7: <core::iter::adapters::ResultShunt<I> as core::iter::traits::iterator::Iterator>::next
   8: <alloc::vec::Vec as alloc::vec::SpecFromIter>::from_iter
   9: ::drop
  10: core::ptr::drop_in_place
  11: main
  12: std::sys_common::backtrace::__rust_begin_short_backtrace
  13: std::rt::lang_start::{{closure}}
  14: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once
  15: main
  16: __libc_start_main
  17: _start) })

12. asm!

When writing software at the hardware/software interface, there is inevitably some degree of direct machine interaction that must be done via assembly. Historically, I have done this via dedicated .s files — which are inconvenient, but explicit.

Over the years, compilers added the capacity to drop assembly into C, but the verb here is apt: the resulting assembly was often dropped on its surrounding C like a Looney Tunes anvil, with the interface between the two often being ill-defined, compiler-dependent or both. Rust took this approach at first too, but it suffered from all of the historical problems of inline assembly — which in Rust’s case meant being highly dependent on LLVM implementation details. This in turn meant that it was unlikely to ever become stabilized, which would relegate those who need inline assembly to forever be on nightly Rust.

Fortunately, Amanieu d’Antras took on this gritty problem, and landed a new asm! syntax. The new syntax is a pleasure to work with, and frankly Rust has now leapfrogged C in terms of ease and robustness of integrating inline assembly!
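
Here is a minimal sketch of the new syntax (x86-64, and purely illustrative): reading the processor's time-stamp counter with explicit register operands:

use std::arch::asm;

fn rdtsc() -> u64 {
    let lo: u32;
    let hi: u32;
    unsafe {
        // rdtsc leaves the low 32 bits in eax and the high 32 bits in edx
        asm!("rdtsc", out("eax") lo, out("edx") hi);
    }
    ((hi as u64) << 32) | (lo as u64)
}

fn main() {
    println!("tsc: {}", rdtsc());
}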

13. String continuations

Okay, this is another tiny one, but meaningful for me and one that took me too long to discover. So first, something to know about me: I am an eighty-column purist. For me, this has nothing to do with punchcards or whatnot, but rather with type readability, which tends to result in 50-100 characters per line — and generally about 70 or so. (I would redirect rebuttals to your bookshelf, where most any line of most any page of most any book will show this to be more or less correct.) So I personally embrace the “hard 80”, and have found that the rework that that can sometimes require results in more readable, better factored code. There is, however, one annoying exception to this: when programmatically printing a string that is itself long, one is left with much less horizontal real estate to work with! In C, this is a snap: string literals without intervening tokens are automatically concatenated, so the single literal can be made by multiple literals across multiple lines. But in Rust, string literals can span multiple lines (generally a feature!), so splitting the line will also embed the newline and any leading whitespace, e.g.:

    println!(
        "...government of the {p}, by the {p}, for the {p},
        shall not perish from the earth.",
        p = "people"
    );

Results in a newline and some leading whitespace that represent the structure of the program, not the desired structure of the string:

...government of the people, by the people, for the people,
        shall not perish from the earth.

I have historically worked around this by using the concat! macro to concatenate two (or more) static strings, which works well enough, but looks pretty clunky, e.g.:

    println!(
        concat!(
            "...government of the {p}, by the {p}, for the {p}, ",
            "shall not perish from the earth."
        ),
        p = "people"
    );

As it turns out, I was really overthinking it, though it took an embarrassingly long time to discover: Rust has support for continuation of string literals! If a line containing a string literal ends in a backslash, the literal continues on the next line, with the newline and any leading whitespace elided. This is one of those really nice things that Rust lets us have; the above example becomes:

    println!(
        "...government of the {p}, by the {p}, for the {p}, \
        shall not perish from the earth.",
        p = "people"
    );

So much cleaner!

14. --pretty=expanded and cargo expand

In C — especially C that makes heavy use of the preprocessor — the -E option can be invaluable: it stops the compilation after the preprocessing phase and dumps the result to standard output. Rust, as it turns out, has an equivalent in the --pretty=expanded unstable compiler option. The output of this can be a little hard on the eyes, so you want to send it through rustfmt — but the result can be really enlightening as to how things actually work. Take, for example, the following program:

fn main() {
    println!("{} has been quite a year!", 2020);
}

Here is the --pretty=expanded output:

$ rustc -Z unstable-options --pretty=expanded year.rs | rustfmt --emit stdout
#![feature(prelude_import)]
#![no_std]
#[prelude_import]
use std::prelude::v1::*;
#[macro_use]
extern crate std;
fn main() {
    {
        ::std::io::_print(::core::fmt::Arguments::new_v1(
            &["", " has been quite a year!\n"],
            &match (&2020,) {
                (arg0,) => [::core::fmt::ArgumentV1::new(
                    arg0,
                    ::core::fmt::Display::fmt,
                )],
            },
        ));
    };
}

As an aside, format_args! is really magical — and a subject that really merits its own blog post from someone with more expertise on the subject. (Yes, this is the Rust blogging equivalent of Chekhov’s gun!)

With so many great David Tolnay crates, it’s fitting we end on one final piece of software from him: cargo expand is a pleasant wrapper around --pretty=expanded that (among other things) allows you to only dump a particular function.

The perfect marriage?

All of this is not to say that Rust is perfect; there are certainly some minor annoyances (rustfmt: looking at you!), and some forthcoming features that I eagerly await (e.g., safe transmutes, const generics). And in case it needs to be said: just because Rust makes it easier to write robust software doesn’t mean that it makes it impossible to write shoddy software!

Dwelling on the imperfections, though, would be a mistake. When getting into a long-term relationship with anything — be it a person, or a company, or a technology — it can be tempting to look at its surface characteristics: does this person, or company or technology have attributes that I do or don’t like? And those are important, but they can be overemphasized: because things change over time, we sometimes look too much at what things are rather than what guides them. And in this regard, my relationship with Rust feels particularly sound: it feels like my values and Rust’s values are a good fit for one another — and that my growing relationship with Rust will be one of the most important of my career!

Posted on October 11, 2020 at 10:21 am by bmc · Permalink · 9 Comments
In: Uncategorized

The singular urgency of Ava DuVernay’s 13th

On Sunday afternoon, I was on the phone with one of my Oxide co-founders, Steve Tuck. He and I were both trying to grapple with the brazen state-sponsored violence that we were witnessing: the murder of George Floyd and the widespread, brutal, and shameless suppression of those who were demonstrating against it.

Specifically, we were struggling with a problem that (bluntly) a lot of white people struggle with: how to say what when. An earlier conversation with Steve had inspired me to publicly say something that I have long believed: that on social media, you should amplify your listening by following voices that you don’t otherwise hear. I had tweeted that out, and Steve and I were talking about it. In particular, Steve was wondering about telling people to see 13th, a 2016 documentary by Ava DuVernay that he had found very moving when he had seen it last year. I had taken my kids to see DuVernay’s superlative Selma years prior, and I had heard about 13th (and I had recalled my wife saying she wanted to watch it), but it was — like many things — “in the queue.” Steve was emphatic though: “you need to watch it.”

If I may, an aside: we — we all, but especially white Americans — run away from difficult subjects. With our viewing habits in particular, there are things that we know we absolutely should watch that we just… don’t. The reasons are often mundane: because the time isn’t right; because we’re tired; because we want to be uplifted; because we’re ashamed. For my wife and me, this was Hotel Rwanda, a Netflix DVD (!) that we had sitting on top of the DVD player for literally years, not having the heart to send it back — and yet never in the mood to watch it.

Fortunately, 13th was not to share Hotel Rwanda‘s fate: not only is Steve persuasive, it felt especially apropos as my wife and I were looking for ways to talk about the George Floyd murder and subsequent uprising with our kids — so I got off the phone with Steve, my wife and I got the family together (pretty easy when living in self-isolation!), and we watched it.

I don’t know that I’ve ever seen a documentary as important. More than anything, it is revealing: it connects dots that I didn’t realize were connected — and it shows, with a stunning diversity of voices (Angela Davis and Newt Gingrich?!), just how deeply racist our criminal justice system is. Not that it isn’t also horrifying in its revelations: my kids gasped several times, including learning of the stunning growth in incarceration — and of the jaw-dropping fraction of inmates that never stood trial for their crime. And there was plenty of horrifying education beyond the figures; I knew a prison/industrial complex was out there, but I never imagined that it could be such a deliberate, pernicious octopus.

I’m not going to spoil any more of it for you; if you haven’t already seen it, you need to watch 13th — and you need to watch it now.

Actually, I’m going to phrase it even more bluntly: it is just 100 minutes (very deliberately not longer, as it turns out; watch the excellent conversation between Ava DuVernay and Oprah Winfrey for details); you more or less can’t leave your home; and America is boiling over on this very issue. So let’s put it another way: if you are unwilling to watch this now, be honest with yourself and realize that you are never going to watch it. And if you are never going to watch it, please spare us all the black squares and the trending hashtags: if you do not have 100 minutes to give to this most important topic at this most critical moment, you are not actually willing to do anything at all.

So you need to watch it, but is watching a documentary really the answer? Well, no, of course not — but that isn’t to say film can’t change the collective mind: The Day After famously informed Ronald Reagan’s views on the dangers of nuclear war, at a(nother) time when humanity felt perilously close to the brink. And I do believe that 13th could have that kind of power — so I would ask you to not only watch it, but use your voice (and, for many of you, your privilege) to get others to do the same.

Once you’ve watched it — and once you’ve gotten your friends and family to watch it — the tough work begins: we need to not merely reform this system, we need to rethink it entirely. And for this, you want to get your resources to the non-profits taking this on (there are a bunch out there; one in particular that I would recommend is The Equal Justice Initiative). Thanks to Ava DuVernay for making such a singularly important film — and thank you in advance for doing her (and us all!) the service of watching it!

Posted on June 2, 2020 at 10:18 pm by bmc · Permalink · Comments Closed
In: Uncategorized

The soul of a new computer company

Over the summer, I described preparing for my next expedition. I’m thrilled to announce that the expedition is now plotted, the funds are raised, and the bags are packed: together with Steve Tuck and Jess Frazelle, we have started Oxide Computer Company.

Starting a computer company may sound crazy (and you would certainly be forgiven a double-take!), but it stems from a belief that I hold in my marrow: that hardware and software should each be built with the other in mind. For me, this belief dates back a quarter century: when I first came to Sun Microsystems in the mid-1990s, it was explicitly to work on operating system kernel development at a computer company — at a time when that very idea was iconoclastic. And when we started Fishworks a decade later, the belief in fully integrated software and hardware was so deeply rooted into our endeavor as to be eponymous: it was the “FISH” in “Fishworks.” In working at a cloud computing company over the past decade, economic realities forced me to suppress this belief to a degree — but it now burns hotter than ever after having endured the consequences of a world divided: in running a cloud, our most vexing problems emanated from the deepest bowels of the stack, when hardware and (especially) firmware operated at cross purposes with our systems software.

As I began to think about what was next, I was haunted by the pain and futility of trying to build a cloud with PC-era systems. At the same time, seeing the kinds of solutions that the hyperscalers had developed for themselves had always left me with equal parts admiration and frustration: their rack-level designs are a clear win — why are these designs cloistered among so few? And even in as much as the hardware could be found through admirable efforts like the Open Compute Project, the software necessary to realize its full potential has remained cruelly unavailable.

Alongside my inescapable technical beliefs has been a commercial one: even as the world is moving (or has moved) to elastic, API-driven computing, there remain good reasons to run on one’s own equipment! Further, as cloud-borne SaaS companies mature from being strictly growth focused to being more margin focused, it seems likely that more will consider buying machines instead of always renting them.

It was in the confluence of these sentiments that an idea began to take shape: the world needed a company to develop and deliver integrated, hyperscaler-class infrastructure to the broader market — that we needed to start a computer company. The “we” here is paramount: in Steve and Jess, I feel blessed to not only share a vision of our future, but to have diverse perspectives on how infrastructure is designed, built, sold, operated and run. And most important of all (with the emphasis itself being a reflection of hard-won wisdom), we three share deeply-held values: we have the same principled approach, with shared aspirations for building the kind of company that customers will love to buy from — and employees will be proud to work for.

Together, as we looked harder at the problem, we saw the opportunity more and more clearly: the rise of open firmware and the broadening of the Open Compute Project made this more technically feasible than ever; the sharpening desire among customers for a true cloud-like on-prem experience (and the neglect those customers felt in the market) made it more in demand than ever. With accelerating conviction that we would build a company to do this, we needed a name — and once we hit on Oxide, we knew it was us: oxides form much of the earth’s crust, giving a connotation of foundation; silicon, the element that is the foundation of all of computing, is found in nature in its oxide; and (yes!) iron oxide is also known as Rust, a programming language we see playing a substantial role for us. Were there any doubt, that Oxide can also be pseudo-written in hexadecimal — as 0x1de — pretty much sealed the deal!

There was just one question left, and it was an existential one: could we find an investor who saw what we saw in Oxide? Fortunately, the answer to this question had been emphatic and unequivocal: in the incredible team at Eclipse Ventures, we found investors that not only understood the space and the market, but also the challenges of solving hard technical problems. And we are deeply honored to have Eclipse’s singular Pierre Lamond joining us on our board; we can imagine no better a start for a new computer company!

So while there is a long and rocky path ahead, we are at last underway on our improbable journey! If you haven’t yet, read Jess’s blog on Oxide being born in a garage. If you find yourself battling the problems we’re aiming to fix, please join our mailing list. If you are a technologist who feels this problem in your bones as we do, consider joining us. And if nothing else, if you would like to hear some terrific stories of life at the hardware/software interface, check out our incredible podcast On the Metal!

Posted on December 2, 2019 at 5:30 am by bmc · Permalink · 2 Comments
In: Uncategorized

Ex-Joyeur

When I was first interviewing with Joyent in July 2010, I recall telling then-CTO Mark Mayo that I was trying to make a decision for the next seven years of my career. Mark nodded sagely at this, assuring me that Joyent was the right move. Shortly after coming to Joyent, I became amazed that Mark had managed to keep a straight face during our conversation: as a venture-funded startup, Joyent lived on a wildly accelerated time scale; seven years into the future for a startup is like seventy for a more established company. (Or, to put it more bluntly, startups with burn and runway are generally default dead and very unlikely to live to see their seventh birthday.)

But Mark also wasn’t wrong: again and again, Joyent beat the odds, in part because of the incredible team that our idiosyncratic company attracted. We saw trends like node.js, containers and data-centric computing long before our peers, attracting customers that were themselves forward-looking technologists. This made for an interesting trip, but not a smooth one: being ahead of the market is as much of a curse as a blessing, and Joyent lived many different lives over the past nine years. Indeed, the company went through so much that somewhere along the way one of our colleagues observed that the story of Joyent could only be told as a musical — an observation so profoundly true that any Joyeur will chuckle to themselves as they are reminded of it.

During one period of particularly intense change, we who endured developed a tradition befitting a company whose story needs musical theater to tell it: at company-wide gatherings, we took to playing a game that we called “ex-Joyeur.” We would stand in a circle, iterating around it; upon one’s turn, one must name an ex-Joyeur who has not already been named — or sit down. Lest this sound like bad attitude, it in fact wasn’t: it was an acknowledgement of the long strange trip that any venture-funded startup takes — a celebration of the musical’s exhaustive and curious dramatis personæ, and a way to remember those who had been with us at darker hours. (And in fact, some of the best players at ex-Joyeur were newer employees who managed to sleuth out long-forgotten colleagues!) Not surprisingly, games of ex-Joyeur were routinely interrupted with stories of the just-named individual — often fond, but always funny.

I bring all of this up because today is my last day at Joyent. After nine years, I will go from a Joyeur to an ex-Joyeur — from a player to a piece on the board. I have had too many good times to mention, and enough stories to last several lifetimes. I am deeply grateful for the colleagues and communities with whom I shared acts in the musical. So many have gone on to do terrific things; I know our paths will cross again, and I already look forward to the reunion. And to those customers who took a chance on us — and especially to Samsung, who acquired Joyent in 2016 — thank you, from the bottom of my heart for believing in us; I hope we have done right by you.

As for me personally, I’m going to take slightly more of a break than the three days I took in 2010; as a technologist, I am finding myself as excited as ever by snow-capped mountains on a distant horizon, and I look forward to taking my time to plot my next seven-year expedition!

Posted on July 31, 2019 at 10:30 am by bmc · Permalink · 9 Comments
In: Uncategorized

Reflecting on The Soul of a New Machine

Long ago as an undergraduate, I found myself back home on a break from school, bored and with eyes wandering idly across a family bookshelf. At school, I had started to find a calling in computing systems, and now in the den, an old book suddenly caught my eye: Tracy Kidder’s The Soul of a New Machine. Taking it off the shelf, the book grabbed me from its first descriptions of Tom West, captivating me with the epic tale of the development of the Eagle at Data General. I — like so many before and after me — found the book to be life changing: by telling the stories of the people behind the machine, the book showed the creative passion among engineers that might otherwise appear anodyne, inspiring me to chart a course that might one day allow me to make a similar mark.

Since reading it over two decades ago, I have recommended The Soul of a New Machine at essentially every opportunity, believing that it is a part of computing’s literary foundation — that it should be considered our Odyssey. Recently, I suggested it as beach reading to Jess Frazelle, and apparently with perfect timing: when I saw the book at the top of her vacation pile, I knew a fuse had been lit. I was delighted (though not at all surprised) to see Jess livetweet her admiration of the book, starting with the compelling prose, the lucid technical explanations and the visceral anecdotes — but then moving on to the deeper technical inspiration she found in the book. And as she reached the book’s crescendo, Jess felt its full power, causing her to reflect on the nature of engineering motivation.

Excited to see the effect of the book on Jess, I experienced a kind of reflected recommendation: I was inspired to (re-)read my own recommendation! Shortly after I started reading, I began to realize that (contrary to what I had been telling myself over the years!) I had not re-read the book in full since that first reading so many years ago. Rather, over the years I had merely revisited those sections that I remembered fondly. On the one hand, these sections are singular: the saga of engineers debugging a nasty I-cache data corruption issue; the young engineer who implements the simulator in an impossibly short amount of time because no one wanted to tell him that he was being impossibly ambitious; the engineer who, frustrated with a nanosecond-scale timing problem in the ALU that he designed, moved to a commune in Vermont, claiming a desire to deal with “no unit of time shorter than a season”. But by limiting myself to these passages, I was succumbing to the selection bias of my much younger self; re-reading the book now from start to finish has given new parts depth and meaning. Aspects that were more abstract to me as an undergraduate — from the organizational rivalries and absurdities of the industry to the complexities of West’s character and the tribulations of the team down the stretch — are now deeply evocative of concrete episodes of my own career.

As a particularly visceral example: early in the book, a feud between two rival projects boils over in an argument at a Howard Johnson’s that becomes known among the Data General engineers as “the big shoot-out at HoJo’s.” Kidder has little more to say about it (the organizational civil war serves merely as a backdrop for Eagle), and I don’t recall this having any effect on me when I first read it — but reading it now, it resonates with a grim familiarity. In any engineering career of sufficient length, a beloved project will at some point be shelved or killed — and the moment will be sufficiently traumatic to be seared into collective memory and memorialized in local lore. So if you haven’t yet had your own shoot-out at HoJo’s, it is regrettably coming; may your career be blessed with few such firefights!

Another example of a passage with new relevance pertains to Tom West developing his leadership style on Eagle:

That fall West had put a new term in his vocabulary. It was trust. “Trust is risk, and risk avoidance is the name of the game in business,” West said once, in praise of trust. He would bind his team with mutual trust, he had decided. When a person signed up to do a job for him, he would in turn trust that person to accomplish it; he wouldn’t break it down into little pieces and make the task small, easy and dull.

I can’t imagine that this paragraph really affected me much as an undergraduate, but reading it now I view it as a one-paragraph crash-course in engineering leadership; those who deeply internalize it will find themselves blessed with high-functioning, innovative engineering teams. And lest it seem obvious, it is in fact more subtle than it looks: West is right that trust is risk — and risk-averse organizations can really struggle to foster mutual trust, despite its obvious advantages.

This passage also serves as a concrete example of the deeper currents that the book captures: it is not merely about the specific people or the machine they built, but about why we build things — especially things that so ardently resist being built. Kidder doesn’t offer a pat answer, though the engineers in the book repeatedly emphasize that their motivations are not so simple as ego or money. In my experience, engineers take on problems for lots of reasons: the opportunity to advance the state of the art; the potential to innovate; the promise of making a difference in everyday lives; the opportunity to learn about new technologies. Over the course of the book, each of these can be seen at times among the Eagle team. But, in my experience (and reflected in Kidder’s retelling of Eagle), when the problem is challenging and success uncertain, the endurance of engineers cannot be explained by these factors alone; regardless of why they start, engineers persist because they are bonded by a shared mission. In this regard, engineers endure not just for themselves, but for one another; tackling thorny problems is very much a team sport!

More than anything, my re-read of the book leaves me with the feeling that on teams that have a shared mission, mutual trust and ample grit, incredible things become possible. Over my career, I have had the pleasure of experiencing this several times, and they form my fondest memories: teams in which individuals banded together to build something larger than the sum of themselves. So The Soul of a New Machine serves to remind us that the soul of what we build is, above all, shared — that we do not endeavor alone but rather with a group of like-minded individuals.

Thanks to Jess for inspiring my own re-read of this classic — and may your read (or re-read!) of the book be as invigorating!

Posted on February 10, 2019 at 5:20 pm by bmc · Permalink · 11 Comments
In: Uncategorized

A EULA in FOSS clothing?

There was a tremendous amount of reaction to and discussion about my blog entry on the midlife crisis in open source. As part of this discussion on HN, Jay Kreps of Confluent took the time to write a detailed response — which he shortly thereafter elevated into a blog entry.

Let me be clear that I hold Jay in high regard, as both a software engineer and an entrepreneur — and I appreciate the time he took to write a thoughtful response. That said, there are aspects of his response that I found troubling enough to closely re-read the Confluent Community License — and that in turn has led me to a deeply disturbing realization about what is potentially going on here.

Here is what Jay said that I found troubling:

The book analogy is not accurate; for starters, copyright does not apply to physical books and intangibles like software or digital books in the same way.

Now, what Jay said is true to a degree in that (as with many different kinds of expression), copyright law has code specific to software; this can be found in 17 U.S.C. § 117. But the fact that Jay also made reference to digital books was odd; digital books really have nothing to do with software (or no more so than any other kind of creative expression). That said, digital books and proprietary software do actually share one thing in common, though it’s horrifying: in both cases their creators have maintained that you don’t actually own the copy you paid for. That is, unlike a book, you don’t actually buy a copy of a digital book, you merely acquire a license to use their book under their terms. But how do they do this? Because when you access the digital book, you click “agree” on a license — an End User License Agreement (EULA) — that makes clear that you don’t actually own anything. The exact language varies; take (for example) VMware’s end user license agreement:

2.1 General License Grant. VMware grants to You a non-exclusive, non-transferable (except as set forth in Section 12.1 (Transfers; Assignment) license to use the Software and the Documentation during the period of the license and within the Territory, solely for Your internal business operations, and subject to the provisions of the Product Guide. Unless otherwise indicated in the Order, licenses granted to You will be perpetual, will be for use of object code only, and will commence on either delivery of the physical media or the date You are notified of availability for electronic download.

That’s a bit wordy and oblique; in this regard, Microsoft’s Windows 10 license is refreshingly blunt:

(2)(a) License. The software is licensed, not sold. Under this agreement, we grant you the right to install and run one instance of the software on your device (the licensed device), for use by one person at a time, so long as you comply with all the terms of this agreement.

That’s pretty concise: “The software is licensed, not sold.” So why do this at all? EULAs are an attempt to get out of copyright law — where the copyright owner is quite limited in the rights afforded to them as to how the content is consumed — and into contract law, where there are many fewer such limits. And EULAs have accordingly historically restricted (or tried to restrict) all sorts of uses like benchmarking, reverse engineering, running with competitive products (or, say, being used by a competitor to make competitive products), and so on.

Given the onerous restrictions, it is not surprising that EULAs are very controversial. They are also legally dubious: when you are forced to click through or (as it used to be back in the day) forced to unwrap a sealed envelope on which the EULA is printed to get to the actual media, it’s unclear how much you are actually “agreeing” to — and it may be considered a contract of adhesion. And this is just one of many legal objections to EULAs.

Suffice it to say, EULAs have long been considered open source poison, so with Jay’s frightening reference to EULA’d content, I went back to the Confluent Community License — and proceeded to kick myself for having missed it all on my first quick read. First, there’s this:

This Confluent Community License Agreement Version 1.0 (the “Agreement”) sets forth the terms on which Confluent, Inc. (“Confluent”) makes available certain software made available by Confluent under this Agreement (the “Software”). BY INSTALLING, DOWNLOADING, ACCESSING, USING OR DISTRIBUTING ANY OF THE SOFTWARE, YOU AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO SUCH TERMS AND CONDITIONS, YOU MUST NOT USE THE SOFTWARE. IF YOU ARE RECEIVING THE SOFTWARE ON BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE ACTUAL AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT ON BEHALF OF SUCH ENTITY.

You will notice that this looks nothing like any traditional source-based license — but it is exactly the kind of boilerplate that you find on EULAs, terms-of-service agreements, and other contracts that are being rammed down your throat. And then there’s this:

1.1 License. Subject to the terms and conditions of this Agreement, Confluent hereby grants to Licensee a non-exclusive, royalty-free, worldwide, non-transferable, non-sublicenseable license during the term of this Agreement to: (a) use the Software; (b) prepare modifications and derivative works of the Software; (c) distribute the Software (including without limitation in source code or object code form); and (d) reproduce copies of the Software (the “License”).

On the one hand, this looks like the opening of open source licenses like (say) the Apache License (albeit missing important words like “perpetual” and “irrevocable”), but the next two sentences contain the difference that is the focus of the license:

Licensee is not granted the right to, and Licensee shall not, exercise the License for an Excluded Purpose. For purposes of this Agreement, “Excluded Purpose” means making available any software-as-a-service, platform-as-a-service, infrastructure-as-a-service or other similar online service that competes with Confluent products or services that provide the Software.

But how can you later tell me that I can’t use my copy of the software to offer a service that competes with one that Confluent has started to offer? Or is that copy not in fact mine? This is answered in section 3:

Confluent will retain all right, title, and interest in the Software, and all intellectual property rights therein.

Okay, so my copy of the software isn’t mine at all. On the one hand, this is (literally) proprietary software boilerplate — but I was given the source code and the right to modify it; how do I not own my copy? Again, proprietary software is built on the notion that — unlike the book you bought at the bookstore — you don’t own anything: rather, you license the copy that is in fact owned by the software company. And again, as it stands, this is dubious, and courts have ruled against “licensed, not sold” software. But how can a license explicitly allow me to modify the software and at the same time tell me that I don’t own the copy that I just modified?! And to be clear: I’m not asking who owns the copyright (that part is clear, as it is for open source) — I’m asking who owns the copy of the work that I have modified? How can one argue that I don’t own the copy of the software that I downloaded, modified and built myself?!

This prompts the following questions, which I also asked Jay via Twitter:

  1. If I git clone software covered under the Confluent Community License, who owns that copy of the software?

  2. Do you consider the Confluent Community License to be a contract?

  3. Do you consider the Confluent Community License to be a EULA?

To Confluent: please answer these questions, and put the answers in your FAQ. Again, I think it’s fine for you to be an open core company; just make this software proprietary and be done with it. (And don’t let yourself be troubled about the fact that it was once open source; there is ample precedent for reproprietarizing software.) What I object to (and what I think others object to) is trying to be at once open and proprietary; you must pick one.

To GitHub: Assuming that this is in fact a EULA, I think it is perilous to allow EULAs to sit in public repositories. It’s one thing to require users to click through to accept a license (though again, that itself is dubious), but to say that a git clone constitutes implicit acceptance of a contract that happens to be sitting somewhere in the repository beggars belief. With efforts like choosealicense.com, GitHub has been a model in guiding projects with respect to licensing; it would be helpful for GitHub’s counsel to weigh in on their view of this new strain of source-available proprietary software and the degree to which it comes into conflict with GitHub’s own terms of service.

To foundations concerned with software liberties, including the Apache Software Foundation, the Linux Foundation, the Free Software Foundation, the Electronic Frontier Foundation, the Open Source Initiative, and the Software Freedom Conservancy: the open source community needs your legal review on this! I don’t think I’m being too alarmist when I say that this is potentially a dangerous new precedent being set; it would be very helpful to have your lawyers offer their perspectives on this, even if they disagree with one another. We seem to be in some terrible new era of frankenlicenses, where the worst of proprietary licenses are bolted on to the goodwill created by open source licenses; we need your legal voices before these creatures destroy the village!

Posted on December 16, 2018 at 6:01 pm by bmc · Permalink · 7 Comments
In: Uncategorized

Open source confronts its midlife crisis

Midlife is tough: the idealism of youth has faded, as has inevitably some of its fitness and vigor. At the same time, the responsibilities of adulthood have grown: the kids that were such a fresh adventure when they were infants and toddlers are now grappling with their own transition into adulthood — and you try to remind yourself that the kids that you have sacrificed so much for probably don’t actually hate your guts, regardless of that post they just liked on the ‘gram. Making things more challenging, while you are navigating the turbulence of teenagers, your own parents are likely entering life’s twilight, needing help in new ways from their adult children. By midlife, in addition to the singular joys of life, you have also likely experienced its terrible sorrows: death, heartbreak, betrayal. Taken together, the fading of youth, the growth in responsibility and the endurance of misfortune can lead to cynicism or (worse) drastic and poorly thought-out choices. Add in a little fear of mortality and some existential dread, and you have the stuff of which midlife crises are made…

I raise this not because of my own adventures at midlife, but because it is clear to me that open source — now several decades old and fully adult — is going through its own midlife crisis. This has long been in the making: for years, I (and others) have been critical of service providers’ parasitic relationship with open source, as cloud service providers turn open source software into a service offering without giving back to the communities upon which they implicitly depend. At the same time, open source has been (rightfully) entirely unsympathetic to the proprietary software models that have been burned to the ground — but also seemingly oblivious to the larger economic waves that have buoyed them.

So it seemed like only a matter of time before the companies built around open source software would have to confront their own crisis of confidence: open source business models are really tough, selling software-as-a-service is one of the most natural of them, the cloud service providers are really good at it — and their commercial appetites seem boundless. And, like a new cherry red two-seater sports car next to a minivan in a suburban driveway, some open source companies are dealing with this crisis exceptionally poorly: they are trying to restrict the way that their open source software can be used. These companies want it both ways: they want the advantages of open source — the community, the positivity, the energy, the adoption, the downloads — but they also want to enjoy the fruits of proprietary software companies in software lock-in and its concomitant monopolistic rents. If this were entirely transparent (that is, if some bits were merely being made explicitly proprietary), it would be fine: we could accept these companies as essentially proprietary software companies, albeit with an open source loss-leader. But instead, these companies are trying to license their way into this self-contradictory world: continuing to claim to be entirely open source, but perverting the license under which portions of that source are available. Most gallingly, they are doing this by hijacking open source nomenclature. Of these, the laughably named commons clause is the worst offender (it is plainly designed to be confused with the purely virtuous creative commons), but others (including CockroachDB’s Community License, MongoDB’s Server Side Public License, and Confluent’s Community License) are little better. And in particular, as it apparently needs to be said: no, “community” is not the opposite of “open source” — please stop sullying its good name by attaching it to licenses that are deliberately not open source! But even if they were more aptly named (e.g. “the restricted clause” or “the controlled use license” or — perhaps most honest of all — “the please-don’t-put-me-out-of-business-during-the-next-re:Invent-keynote clause”), these licenses suffer from a serious problem: they are almost certainly asserting rights that the copyright holder doesn’t in fact have.

If I sell you a book that I wrote, I can restrict your right to read it aloud for an audience, or sell a translation, or write a sequel; these restrictions are rights afforded the copyright holder. I cannot, however, tell you that you can’t put the book on the same bookshelf as that of my rival, or that you can’t read the book while flying a particular airline I dislike, or that you aren’t allowed to read the book and also work for a company that competes with mine. (Lest you think that last example absurd, that’s almost verbatim the language in the new Confluent Community (sic) License.) I personally think that none of these licenses would withstand a court challenge, but I also don’t think it will come to that: because the vendors behind these licenses will surely fear that they wouldn’t survive litigation, they will deliberately avoid inviting such challenges. In some ways, this netherworld is even worse, as the license becomes a vessel for unverifiable fear of arbitrary liability.

Legal dubiousness aside, as with that midlife hot rod, the licenses aren’t going to address the underlying problem. To be clear, the underlying problem is not the licensing, it’s that these companies don’t know how to make money — they want open source to be its own business model, and seeing that the cloud service providers have an entirely viable business model, they want a piece of the action. But as a result of these restrictive riders, one of two things will happen with respect to a cloud services provider that wants to build a service offering around the software:

  1. The cloud services provider will build their service not based on the software, but rather on another open source implementation that doesn’t suffer from the complication of a lurking company with brazenly proprietary ambitions.

  2. The cloud services provider will build their service on the software, but will use only the truly open source bits, reimplementing (and keeping proprietary) any of the surrounding software that they need.

In the first case, the victory is strictly pyrrhic: yes, the cloud services provider has been prevented from monetizing the software — but the software will now have less of the adoption that is the lifeblood of a thriving community. In the second case, there is no real advantage over the current state of affairs: the core software is still being used without the open source company being explicitly paid for it. Worse, the software and its community have been harmed: where one could previously appeal to the social contract of open source (namely, that cloud service providers have a social responsibility to contribute back to the projects upon which they depend), now there is little to motivate such reciprocity. Why should the cloud services provider contribute anything back to a company that has declared war on it? (Or, worse, implicitly accused it of malfeasance.) Indeed, as long as fights are being picked with them, cloud service providers will likely clutch their bug fixes in the open core as a differentiator, cackling to themselves over the gnarly race conditions that they have fixed of which the community is blissfully unaware. Is this in any way a desired end state?

So those are the two cases, and they are both essentially bad for the open source project. Now, one may notice that there is a choice missing, and for those open source companies that still harbor magical beliefs, let me put this to you as directly as possible: cloud services providers are emphatically not going to license your proprietary software. I mean, you knew that, right? The whole premise of your proprietary license is that you are finding that there is no way to compete with the operational dominance of the cloud services providers; did you really believe that those same dominant cloud services providers can’t simply reimplement your LDAP integration or whatever? The cloud services providers are currently reproprietarizing all of computing — they are making their own CPUs, for crying out loud! Reimplementing the bits of your software that they need in the name of the service that their customers want (and will pay for!) won’t even move the needle in terms of their effort.

Worse than all of this (and the reason why this madness needs to stop): licenses that are vague with respect to permitted use are corporate toxin. Any company that has been through an acquisition can speak of the peril of the due diligence license audit: the acquiring entity is almost always deep pocketed and (not unrelatedly) risk averse; the last thing that any company wants is for a deal to go sideways because of concern over unbounded liability to some third-party knuckle-head. So companies that engage in license tomfoolery are doing worse than merely not solving their own problem: they are potentially poisoning the wellspring of their own community.

So what to do? Those of us who have been around for a while — who came up in the era of proprietary software and saw the merciless transition to open source software — know that there’s no way to cross back over the Rubicon. Open source software companies need to come to grips with that uncomfortable truth: their business model isn’t their community’s problem, and they should please stop trying to make it one. And while they’re at it, it would be great if they could please stop making outlandish threats about the demise of open source; they sound like shrieking proprietary software companies from the 1990s, warning that open source will be riddled with nefarious backdoors and unspecified legal liabilities. (Okay, yes, a confession: just as one’s first argument with their teenager is likely to give their own parents uncontrollable fits of smug snickering, those of us who came up in proprietary software may find companies decrying the economically devastating use of their open source software to be amusingly ironic — but our schadenfreude cups runneth over, so they can definitely stop now.)

So yes, these companies have a clear business problem: they need to find goods and services that people will exchange money for. There are many business models that are complementary with respect to open source, and some of the best open source software (and certainly the least complicated from a licensing drama perspective!) comes from companies that simply needed the software and open sourced it because they wanted to build a community around it. (There are many examples of this, but the outstanding Envoy and Jaeger both come to mind — the former from Lyft, the latter from Uber.) In this regard, open source is like a remote-friendly working policy: it’s something that you do because it makes economic and social sense; even if it sits at the very core of your business, it’s not a business model in and of itself.

That said, it is possible to build business models around the open source software that is a company’s expertise and passion! Even though the VC that led the last round wants to puke into a trashcan whenever they hear it, business models like “support”, “services” and “training” are entirely viable! (That’s the good news; the bad news is that they may not deliver the up-and-to-the-right growth that these companies may have promised in their pitch deck — and they may come at too low a margin to pay for large teams, lavish perks, or outsized exits.) And of course, making software available as a service is also an entirely viable business model — but I’m pretty sure they’ve heard about that one in the keynote.

As part of their quest for a business model, these companies should read Adam Jacob’s excellent blog entry on sustainable free and open source communities. Adam sees what I see (and Stephen O’Grady sees and Roman Shaposhnik sees), and he has taken a really positive action by starting the Sustainable Free and Open Source Communities project. This project has a lot to be said for it: it explicitly focuses on building community; it emphasizes social contracts; it seeks longevity for the open source artifacts; it shows the way to viable business models; it rejects copyright assignment to a corporate entity. Adam’s efforts can serve to clear our collective head, and to focus on what’s really important: the health of the communities around open source. By focusing on longevity, we can plainly see restrictive licensing as the death warrant that it is, shackling the fate of a community to that of a company. (Viz. after the company behind AGPL-licensed RethinkDB capsized, it took the Linux Foundation buying the assets and relicensing them to rescue the community.) Best of all, it’s written by someone who has built a business that has open source software at its heart. Adam has endured the challenges of the open core model, and is refreshingly frank about its economic and psychic tradeoffs. And if he doesn’t make it explicit, Adam’s fundamental optimism serves to remind us, too, that any perceived “danger” to open source is overblown: open source is going to endure, as no company is going to be able to repeal the economics of software. That said, as we collectively internalize that open source is not a business model on its own, we will likely see fewer VC-funded open source companies (though I’m honestly not sure that that’s a bad thing).

I don’t think that it’s an accident that Adam, Stephen, Roman and I see more or less the same thing and are more or less the same age: not only have we collectively experienced many sides of this, but we are at once young enough to still recall our own idealism, yet old enough to know that coercion never endures in the limit. In short, this too shall pass — and in the end, open source will survive its midlife questioning just as people in midlife get through theirs: by returning to its core values and by finding rejuvenation in its communities. Indeed, we can all find solace in the fact that while life is finite, our values and our communities survive us — and that our engagement with them is our most important legacy.

Posted on December 14, 2018 at 10:50 pm by bmc · Permalink · 7 Comments
In: Uncategorized

Assessing software engineering candidates

Note: This blog entry reproduces RFD 151. Comments are encouraged in the discussion for RFD 151.

How does one assess candidates for software engineering positions? This is an age-old question without a formulaic answer: software engineering is itself too varied to admit a single archetype.

Most obviously, software engineering is intellectually challenging; it demands minds that not only enjoy the thrill of solving puzzles, but can also stay afloat in a sea of numbing abstraction. This raw capacity, however, is insufficient; there are many more nuanced skills that successful software engineers must possess. For example, software engineering is an almost paradoxical juxtaposition of collaboration and isolation: successful software engineers are able to work well with (and understand the needs of!) others, but are also able to focus intensely on their own. This contrast extends to the conveyance of ideas, where they must be able to express their own ideas well enough to persuade others, but also be able to understand and be persuaded by the ideas of others — and be able to implement all of these on their own. They must be able to build castles of imagination, and yet still understand the constraints of a grimy reality: they must be arrogant enough to see the world as it isn’t, but humble enough to accept the world as it is. Each of these is a balance, and for each, long-practicing software engineers will cite colleagues who have been ineffective because they have erred too greatly on one side or another.

The challenge is therefore to assess prospective software engineers, without the luxury of firm criteria. This document is an attempt to pull together accumulated best practices; while it shouldn’t be inferred to be overly prescriptive, where it is rigid, there is often a painful lesson behind it.

In terms of evaluation mechanism: using in-person interviewing alone can be highly unreliable and can select predominantly for surface aspects of a candidate’s personality. While we advocate (and indeed, insist upon) interviews, they should come relatively late in the process; as much assessment as possible should be done by allowing the candidate to show themselves as software engineers truly work: on their own, in writing.

Traits to evaluate

How does one select for something so nuanced as balance, especially when the road ahead is unknown? We must look at a wide variety of traits, presented here in the order in which they are traditionally assessed:

Aptitude

As the ordering implies, there is a temptation in traditional software engineering hiring to focus on aptitude exclusively: to use an interview exclusively to assess a candidate’s pure technical pulling power. While this might seem to be a reasonable course, it in fact leads down the primrose path to pop quizzes about algorithms seen primarily in interview questions. (Red-black trees and circular linked list detection: looking at you.) These assessments of aptitude are misplaced: software engineering is not, in fact, a spelling bee, and one’s ability to perform during an arbitrary oral exam may or may not correlate to one’s ability to actually develop production software. We believe that aptitude is better assessed where software engineers are forced to exercise it: based on the work that they do on their own. As such, candidates should be asked to provide three samples of their work: a code sample, a writing sample, and an analysis sample.
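
(For the curious, and purely to illustrate the pop-quiz genre being criticized here rather than to endorse it, a minimal sketch of the canonical circular linked list detection answer, Floyd’s tortoise-and-hare, might look like the following in C. Being able to recite it at a whiteboard says little about one’s ability to develop production software.)

    #include <stdbool.h>
    #include <stddef.h>

    struct node {
            struct node *next;
    };

    /*
     * Floyd's "tortoise and hare": the slow pointer advances one node per
     * iteration and the fast pointer two; the two pointers can only meet
     * if the list contains a cycle.
     */
    static bool
    has_cycle(const struct node *head)
    {
            const struct node *slow = head, *fast = head;

            while (fast != NULL && fast->next != NULL) {
                    slow = slow->next;
                    fast = fast->next->next;

                    if (slow == fast)
                            return (true);
            }

            return (false);
    }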

Code sample

Software engineers are ultimately responsible for the artifacts that they create, and as such, a code sample can be the truest way to assess a candidate’s ability.

Candidates should be guided to present code that they believe best reflects them as a software engineer. If this seems too broad, it can be easily focused: what is some code that you’re proud of and/or code that took you a while to get working?

If candidates do not have any code samples because all of their code is proprietary, they should write some: they should pick something that they have always wanted to write but have lacked an excuse to — and they should go write it! On such a project, the guideline to the candidate should be to spend at least (say) eight hours on it, but no more than twenty-four — and over no longer than a two-week period.

If the candidate is to write something de novo and/or there is a new or interesting technology that the organization is using, it may be worth guiding the candidate to use it (e.g., to write it in a language that the team has started to use, or using a component that the team is broadly using). This constraint should be uplifting to the candidate (e.g., “You may have wanted to explore this technology; here’s your chance!”). At Joyent in the early days of node.js, this was what we called “the node test”, and it yielded many fun little projects — and many great engineers.

Writing sample

Writing good code and writing good prose seem to be found together in the most capable software engineers. That these skills are related is perhaps unsurprising: both types of writing are difficult; both require one to create wholly new material from a blank page; both demand the ability to revise and polish.

To assess a candidate’s writing ability, they should be asked to provide a writing sample. Ideally, this will be technical writing, e.g.:

If a candidate has all of these, they should be asked to provide one of each; if a candidate has none of them, they should be asked to provide a writing sample on something else entirely, e.g. a thesis, dissertation or other academic paper.

Analysis sample

Part of the challenge of software engineering is dealing with software when it doesn’t, in fact, work correctly. At this moment, a software engineer must flip their disposition: instead of an artist creating something new, they must become a scientist, attempting to reason about a foreign world. In having candidates only write code, analytical skills are often left unexplored. And while this can be explored conversationally (e.g., asking for “debugging war stories” is a classic — and often effective — interview question), an oral description of recalled analysis doesn’t necessarily allow the true depths of a candidate’s analytical ability to be plumbed. For this, candidates should be asked to provide an analysis sample: a written analysis of software behavior from the candidate. This may be difficult for many candidates: for many engineers, these analyses may be most often found in defect reports, which may not be public. If the candidate doesn’t have such an analysis sample, the scope should be deliberately broadened to any analytical work they have done on any system (academic or otherwise). If this broader scope still doesn’t yield an analysis sample, the candidate should be asked to generate one to the best of their ability by writing down their analysis of some aspect of system behavior. (This can be as simple as asking them to write down the debugging story that would be their answer to the interview question — giving the candidate the time and space to answer the question once, and completely.)

Education

We are all born uneducated — and our development is the result of both the informal education of experience and curiosity and a more structured, formal education. To assess a candidate’s education, both its formal and informal aspects should be considered.

Formal education

Formal education is easier to assess by its very formality: a candidate’s education is relatively easily evaluated if they had the good fortune of discovering their interest and aptitude at a young age, had the opportunity to pursue and complete their formal education in computer science, and had the further good luck of attending an institution that one knows and has confidence in.

But one should not be bigoted by familiarity: there are many terrific software engineers who attended little-known schools or who took otherwise unconventional paths. The completion of a formal education in computer science is much more important than the institution: the strongest candidate from a little-known school is almost assuredly stronger than the weakest candidate from a well-known school.

In other cases, it’s even more nuanced: there have been many later-in-life converts to the beauty and joy of software engineering, and such candidates should emphatically not be excluded merely because they discovered software later than others. For those that concentrated in entirely non-technical disciplines, further probing will likely be required, with greater emphasis on their technical artifacts.

The most important aspect of one’s formal education may not be its substance so much as its completion. Like software engineering, there are many aspects of completing a formal education that aren’t necessarily fun: classes that must be taken to meet requirements; professors that must be endured rather than enjoyed; subject matter that resists quick understanding or appeal. In this regard, completion of a formal education represents the completion of a significant task. Inversely, the failure to complete one’s formal education may constitute an area of concern. There are, of course, plausible life reasons to abandon one’s education prematurely (especially in an era when higher education is so expensive), but there are also many paths and opportunities to resume and complete it. The failure to complete formal education may indicate deeper problems, and should be understood.

Informal education

Learning is a life-long endeavor, and much of one’s education will be informal in nature. Assessing this informal education is less clear, especially because (by its nature) there is little formally to show for it — but candidates should have a track record of being able to learn on their own, even when this self-education is arduous. One way to probe this may be with a simple question: what is an example of something that you learned that was a struggle for you? As with other questions posed here, the question should have a written answer.

Motivation

Motivation is often not assessed in the interview process, which is unfortunate because it dictates so much of what we do and why. For many companies, it will be important to find those that are intrinsically motivated — those who do what they do primarily for the value of doing it.

Selecting for motivation can be a challenge, and defies formula. Here, open source and open development can be a tremendous asset: it allows others to see what is being done, and, if they are excited by the work, to join the effort and to make their motivation clear.

Values

Values are often not evaluated formally at all in the software engineering process, but they can be critical to determine the “fit” of a candidate. To differentiate values from principles: values represent relative importance versus the absolute importance of principles. Values are important in a software engineering context because we so frequently make tradeoffs in which our values dictate our disposition. (For example, the relative importance of speed of development versus rigor; both are clearly important and positive attributes, but there is often a tradeoff to be had between them). Different engineering organizations may have different values over different times or for different projects, but it’s also true that individuals tend to develop their own values over their career — and it’s essential that the values of a candidate do not clash with the values of the team that they are to join.

But how to assess one’s values? Many will speak to values that they don’t necessarily hold (e.g., rigor), so simply asking someone what’s important to them may or may not yield their true values. One observation is that one’s values — and the adherence or divergence from those values — will often be reflected in happiness and satisfaction with work. When work strongly reflects one’s values, one is much more likely to find it satisfying; when values are compromised (even if for a good reason), work is likely to be unsatisfying. As such, the specifics of one’s values may be ascertained by asking candidates some probing questions, e.g.:

Our values can also be seen in the way we interact with others. As such, here are some questions that may have revealing answers:

The answers to these questions should be written down to allow them to be answered thoughtfully and in advance — and then to serve as a starting point for conversation in an interview.

Some questions, however, are more amenable to a live interview. For example, it may be worth asking some situational questions like:

Integrity

In an ideal world, integrity would not be something we would need to assess in a candidate: we could trust that everyone is honest and trustworthy. This view, unfortunately, is naïve with respect to how malicious bad actors can be; for any organization — but especially for one that is biased towards trust and transparency — it is essential that candidates be of high integrity: an employee who operates outside of the bounds of integrity can do nearly unbounded damage to an organization that assumes positive intent.

There is no easy or single way to assess integrity for people with whom one hasn’t endured difficult times. By far the most accurate way of assessing integrity in a candidate is for them to already be in the circle of one’s trust: for them to have worked deeply with (and be trusted by) someone that is themselves deeply trusted. But even in these cases where the candidate is trusted, some basic verification is prudent.

Criminal background check

The most basic integrity check involves a criminal background check. While local law dictates how these checks are used, the check should be performed for a simple reason: it verifies that the candidate is who they say they are. If someone has made criminal mistakes, these mistakes may or may not disqualify them (much will depend on the details of the mistakes, and on local law on how background checks can be used), but if a candidate fails to be honest or remorseful about those mistakes, it is a clear indicator of untrustworthiness.

Credential check

A hidden criminal background in software engineering candidates is unusual; much more common is a slight “fudging” of credentials or other elements of one’s past: degrees that were not in fact earned; grades or scores that have been exaggerated; awards that were not in fact bestowed; gaps in employment history that are quietly covered up by changing the dates that one was at a previous employer. These transgressions may seem slight, but they can point to something quite serious: a candidate’s willingness or desire to mislead others to advance themselves. To protect against this, a basic credential check should be performed. This can be confined to degrees, honors, and employment.

References

References can be very tricky, especially for someone coming from a difficult situation (e.g., fleeing poor management). Ideally, a candidate is well known by someone inside the company who is trusted — but even this poses challenges: sometimes we don’t truly know people until they are in difficult situations, and someone “known” may not, in fact, be known at all. Worse, references are most likely to break down when they are most needed: dishonest, manipulative people are, after all, dishonest and manipulative; they can easily fool people — and even references — into thinking that they are something that they are not. So while references can provide value (and shouldn’t be eliminated as a tool), they should also be used carefully and kept in perspective.

Interviews

For individuals outside of that circle of trust, checking integrity is probably still best done in person. There are several potential mechanisms here:

Mechanics of evaluation

Interviews should begin with phone screens to assess the most basic viability, especially with respect to motivation. This initial conversation might include some basic (and unstructured) homework to gauge that motivation. The candidate should be pointed to material about the company and sources that describe methods of work and specifics about what that work entails. The candidate should be encouraged to review some of this material and send formal written thoughts as a quick test of motivation. If one is not motivated enough to learn about a potential employer, it’s hard to see how they will suddenly gain the motivation to see themselves through difficult problems.

If and when candidates are interested in deeper interviews, they should all be expected to provide the same written material.

Candidate-submitted material

The candidate should submit the following:

Candidate-submitted material should be collected and distributed to everyone on the interview list.

Before the interview

Everyone on the interview schedule should read the candidate-submitted material, and a pre-meeting should then be held to discuss approach: based on the written material, what are the things that the team wishes to better understand? And who will do what?

Pre-interview job talk

For senior candidates, it can be effective to ask them to start the day by giving a technical presentation to those who will interview them. On the one hand, it may seem cruel to ask a candidate to present to a roomful of people who will be later interviewing them, but to the candidate this should be a relief: this allows them to start the day with a home game, where they are talking about something that they know well and can prepare for arbitrarily. The candidate should be allowed to present on anything technical that they’ve worked on, and it should be made clear that:

  1. Confidentiality will be respected (that is, they can present on proprietary work)

  2. The presentation needn’t be novel — it is fine for the candidate to give a talk that they have given before

  3. Slides are fine but not required

  4. The candidate should assume that the audience is technical, but not necessarily familiar with the domain on which they are presenting

  5. The candidate should assume about 30 minutes for presentation and 15 minutes for questions.

The aim here is severalfold.

First, this lets everyone get the same information at once: it is not unreasonable that the talk that a candidate would give would be similar to a conversation that they would have otherwise had several times over the day as they are asked about their experience; this minimizes that repetition.

Second, it shows how well the candidate teaches. Assuming that the candidate is presenting on a domain that isn’t intimately known by every member of the audience, the candidate will be required to instruct. Teaching requires both technical mastery and empathy — and a pathological inability to teach may point to deeper problems in a candidate.

Third, it shows how well the candidate fields questions about their work. It should go without saying that the questions themselves shouldn’t be trying to find flaws with the work, but should be entirely in earnest; seeing how a candidate answers such questions can be very revealing about character.

All of that said: a job talk likely isn’t appropriate for every candidate — and shouldn’t be imposed on (for example) those still in school. One guideline may be: those with more than seven years of experience are expected to give a talk; those with fewer than three are not expected to give a talk (but may do so); those in between can use their own judgement.

Interviews

Interviews shouldn’t necessarily take one form; interviewers should feel free to take a variety of styles and approaches — but should generally refrain from “gotcha” questions and/or questions that may conflate surface aspects of intellect with deeper qualities (e.g., Microsoft’s infamous “why are manhole covers round?”). Mixing interview styles over the course of the day can also be helpful for the candidate.

After the interview

After the interview (usually the next day), the candidate should be discussed by those who interviewed them. The objective isn’t necessarily to reach consensus immediately (though that too, ultimately), but rather to surface areas of concern. In this regard, the post-interview conversation must be handled carefully: the interview is deliberately constructed to allow broad contact with the candidate, and it is possible that someone relatively junior or otherwise inexperienced will see something that others will miss. The meeting should be constructed to assure that this important data isn’t suppressed; bad hires can happen when reservations aren’t shared out of fear of disappointing a larger group!

One way to do this is to structure the meeting this way:

  1. All participants are told to come in with one of three decisions: Hire, Do not hire, Insufficient information. All participants should have one of these positions and they should not change their initial position. (That is, one’s position on a candidate may change over the course of the meeting, but the initial position shouldn’t be retroactively changed.) If it helps, this position can be privately recorded before the meeting starts.

  2. The meeting starts with everyone who believes Do not hire explaining their position. While starting with the Do not hire positions may seem to give the meeting a negative disposition, it is extremely important that the meeting start with the reservations lest they be silenced — especially when and where they are so great that someone believes a candidate should not be hired.

  3. Next, those who believe Insufficient information should explain their position. These positions may be relatively common; each means that the interview left the interviewer with unanswered questions. By presenting these unanswered questions, there is a possibility that others can provide answers that they may have learned in their interactions with the candidate.

  4. Finally, those who believe Hire should explain their position, perhaps filling in missing information for others who are less certain.

If there are any Do not hire positions, these should be treated very seriously, for it is saying that the aptitude, education, motivation, values and/or integrity of the candidate are in serious doubt or are otherwise unacceptable. Those who believe Do not hire should be asked for the dimensions that most substantiate their position. Especially where these reservations are around values or integrity, a single Do not hire should raise serious doubts about a candidate: the risks of bad hires around values or integrity are far too great to ignore someone’s judgement in this regard!

Ideally, however, no one has the position of Do not hire, and through a combination of screening and candidate self-selection, everyone believes Hire and the discussion can be brief, positive and forward-looking!

If, as is perhaps most likely, there is some mix of Hire and Insufficient information, the discussion should focus on the information that is missing about the candidate. If other interviewers cannot fill in the information about the candidate (and if it can’t be answered by the corpus of material provided by the candidate), the group should together brainstorm about how to ascertain it. Should a follow-up conversation be scheduled? Should the candidate be asked to provide some missing information? Should some aspect of the candidate’s background be explored? The collective decision should not move to Hire as long as there remain unanswered questions preventing everyone from reaching the same decision.

Assessing the assessment process

It is tautologically challenging to evaluate one’s process for assessing software engineers: one lacks data on the candidates that one doesn’t hire, and therefore can’t know which candidates should have been extended offers of employment but weren’t. As such, hiring processes can induce a kind of ultimate survivorship bias in that it is only those who have survived (or instituted) the process who are present to assess it — which can lead to deafening echo chambers of smug certitude. One potential way to assess the assessment process: ask candidates for their perspective on it. Candidates are in a position to be evaluating many different hiring processes concurrently, and likely have the best perspective on the relative merits of different ways of assessing software engineers.

Of course, there is peril here too: while many organizations would likely be very interested in a candidate who is bold enough to offer constructive criticism on the process being used to assess them while it is being used to assess them, the candidates themselves might not realize that — and may instead offer bland bromides for fear of offending a potential employer. Still, it has been our experience that a thoughtful process will encourage a candidate’s candor — and we have found that the processes described here have been strengthened by listening carefully to the feedback of candidates.

Posted on October 5, 2018 at 11:02 am by bmc · Permalink · Comments Closed
In: Uncategorized