Profiling Node.js

(For returning readers, this is basically a “tl;dr” version of my previous post on Node.js performance. The post below also appears on the Node.js blog.)

It’s incredibly easy to visualize where your Node program spends its time using DTrace and node-stackvis (a Node port of Brendan Gregg’s FlameGraph tool):

  1. Run your Node.js program as usual.
  2. In another terminal, run:
    $ dtrace -n 'profile-97/execname == "node" && arg1/{
        @[jstack(100, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out

    This will sample about 100 times per second for 60 seconds and emit results to stacks.out. Note that this will sample all running programs called “node”. If you want a specific process, replace execname == "node" with pid == 12345 (the process id).

  3. Use the “stackvis” tool to transform this directly into a flame graph. First, install it:
    $ npm install -g stackvis

    then use stackvis to convert the DTrace output to a flamegraph:

    $ stackvis dtrace flamegraph-svg < stacks.out > stacks.svg
  4. Open stacks.svg in your favorite browser.
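
If you want to profile a single process instead, the same one-liner works with the predicate swapped as described in step 2 (pid 12345 here is hypothetical):

    $ dtrace -n 'profile-97/pid == 12345 && arg1/{
        @[jstack(100, 8000)] = count(); } tick-60s { exit(0); }' > stacks.out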

You’ll be looking at something like this:

This is a visualization of all of the profiled call stacks. This example is from the “hello world” HTTP server on the Node.js home page under load. Start at the bottom with “main”, which is present in most Node stacks because Node spends most of its on-CPU time in the main thread. Each row shows the functions called by the frames in the row beneath it, so as you move up you’ll see actual JavaScript function names. The boxes in each row are not in chronological order, but their width indicates how much time was spent in them, and hovering over a box shows exactly what percentage of time was spent in that function. This lets you see at a glance where your program spends its time.

That’s the summary. There are a few prerequisites:

  • You must gather data on a system that supports DTrace with the Node.js ustack helper. For now, this pretty much means illumos-based systems like SmartOS, including the Joyent Cloud. Mac OS users: OS X supports DTrace, but not ustack helpers. The way to get this changed is to contact your Apple developer liaison (if you’re lucky enough to have one) or file a bug report at bugreport.apple.com. I’d suggest referencing existing bugs 5273057 and 11206497. More bugs filed (even if closed as dups) show more interest and make it more likely Apple will choose to fix this.
  • You must be on 32-bit Node.js 0.6.7 or later, built --with-dtrace. The helper doesn’t work with 64-bit Node yet. On illumos (including SmartOS), development releases (the 0.7.x train) include DTrace support by default.

There are a few other notes:

  • You can absolutely profile apps in production, not just development, since compiling with DTrace support has very minimal overhead. You can start and stop profiling without restarting your program.
  • You may want to run the stacks.out output through c++filt to demangle C++ symbols. Be sure to use the c++filt that came with the compiler you used to build Node. For example:
    $ c++filt < stacks.out > demangled.out

    then you can use demangled.out to create the flamegraph.

  • If you want, you can filter stacks containing a particular function. The best way to do this is to first collapse the original DTrace output, then grep out what you want:
    $ stackvis dtrace collapsed < stacks.out | grep SomeFunction > collapsed.out
    $ stackvis collapsed flamegraph-svg < collapsed.out > stacks.svg
  • If you’ve used Brendan’s FlameGraph tools, you’ll notice the coloring is a little different in the above flamegraph. I ported his tools to Node first so I could incorporate them more easily into other Node programs, but I’ve also been playing with different coloring options. The current default uses hue to denote stack depth and saturation to indicate time spent. (These are also indicated by position and size.) Other ideas include coloring by module (so V8, JavaScript, libc, etc. show up as different colors).

For more on the underlying pieces, see my previous post on Node.js profiling and Brendan’s post on Flame Graphs.


Managing Node.js dependencies with shrinkwrap

This post also appears on the Node.js blog.


Photo by Luc Viatour (flickr)

Managing dependencies is a fundamental problem in building complex software. The terrific success of github and npm has made code reuse especially easy in the Node world, where packages don’t exist in isolation but rather as nodes in a large graph. The software is constantly changing (releasing new versions), and each package has its own constraints about what other packages it requires to run (dependencies). npm keeps track of these constraints, and authors express what kinds of changes are compatible using semantic versioning, which allows an author to specify that her package will work even with future versions of its dependencies, as long as those versions are assigned properly.

This does mean that when you “npm install” a package with dependencies, there’s no guarantee that you’ll get the same set of code now that you would have gotten an hour ago, or that you would get if you were to run it again an hour later. You may get a bunch of bug fixes now that weren’t available an hour ago. This is great during development, where you want to keep up with changes upstream. It’s not necessarily what you want for deployment, though, where you want to validate whatever bits you’re actually shipping.

Put differently, it’s understood that all software changes incur some risk, and it’s critical to be able to manage this risk on your own terms. Taking that risk in development is good because by definition that’s when you’re incorporating and testing software changes. On the other hand, if you’re shipping production software, you probably don’t want to take this risk when cutting a release candidate (i.e. build time) or when you actually ship (i.e. deploy time) because you want to validate whatever you ship.

You can address a simple case of this problem by only depending on specific versions of packages, allowing no semver flexibility at all, but this falls apart when you depend on packages that don’t also adopt the same principle. Many of us at Joyent started wondering: can we generalize this approach?
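
For example, an app can pin its direct dependency exactly, but that does nothing about the dependency’s own floating constraints. A hypothetical package.json (all names and versions made up):

{
    "name": "myapp",
    "version": "1.0.0",
    "dependencies": {
        "somelib": "1.2.3"
    }
}

If somelib’s own package.json says something like "otherlib": ">=0.5.0", a fresh install of myapp can still pull in a brand-new otherlib even though somelib itself is pinned.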

Shrinkwrapping packages

That brings us to npm shrinkwrap[1]:

NAME
       npm-shrinkwrap -- Lock down dependency versions

SYNOPSIS
       npm shrinkwrap

DESCRIPTION
       This  command  locks down the versions of a package's dependencies so
       that you can control exactly which versions of each  dependency  will
       be used when your package is installed.

Let’s consider package A:

{
    "name": "A",
    "version": "0.1.0",
    "dependencies": {
        "B": "<0.1.0"
    }
}

package B:

{
    "name": "B",
    "version": "0.0.1",
    "dependencies": {
        "C": "<0.1.0"
    }
}

and package C:

{
    "name": "C,
    "version": "0.0.1"
}

If these are the only versions of A, B, and C available in the registry, then a normal “npm install A” will install:

A@0.1.0
└─┬ B@0.0.1
  └── C@0.0.1

Then if B@0.0.2 is published, a fresh “npm install A” will install:

A@0.1.0
└─┬ B@0.0.2
  └── C@0.0.1

assuming the new version did not modify B’s dependencies. Of course, the new version of B could include a new version of C and any number of new dependencies. As we said before, if A’s author doesn’t want that, she could specify a dependency on B@0.0.1. But if A’s author and B’s author are not the same person, there’s no way for A’s author to say that she does not want to pull in newly published versions of C when B hasn’t changed at all.

In this case, A’s author can use

# npm shrinkwrap

This generates npm-shrinkwrap.json, which will look something like this:

{
    "name": "A",
    "dependencies": {
        "B": {
            "version": "0.0.1",
            "dependencies": {
                "C": {  "version": "0.1.0" }
            }
        }
    }
}

The shrinkwrap command has locked down the dependencies based on what’s currently installed in node_modules. When “npm install” installs a package with an npm-shrinkwrap.json file in the package root, the shrinkwrap file (rather than package.json files) completely drives the installation of that package and all of its dependencies (recursively). So now the author publishes A@0.1.0, and subsequent installs of this package will use B@0.0.1 and C@0.0.1, regardless of the dependencies and versions listed in A’s, B’s, and C’s package.json files. If the authors of B and C publish new versions, they won’t be used to install A because the shrinkwrap refers to older versions. Even if you generate a new shrinkwrap, it will still reference the older versions, since “npm shrinkwrap” uses what’s installed locally rather than what’s available in the registry.

Using shrinkwrapped packages

Using a shrinkwrapped package is no different than using any other package: you can “npm install” it by hand, or add a dependency to your package.json file and “npm install” it.
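
Either way, you can verify what actually got installed with “npm ls”. For the example above, from inside the installed copy of A (output sketched; the path is hypothetical):

$ cd node_modules/A && npm ls
A@0.1.0 /home/dap/proj/node_modules/A
└─┬ B@0.0.1
  └── C@0.0.1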

Building shrinkwrapped packages

To shrinkwrap an existing package:

  1. Run “npm install” in the package root to install the current versions of all dependencies.
  2. Validate that the package works as expected with these versions.
  3. Run “npm shrinkwrap”, add npm-shrinkwrap.json to git, and publish your package.

To add or update a dependency in a shrinkwrapped package:

  1. Run “npm install” in the package root to install the current versions of all dependencies.
  2. Add or update dependencies. “npm install” each new or updated package individually and then update package.json.
  3. Validate that the package works as expected with the new dependencies.
  4. Run “npm shrinkwrap”, commit the new npm-shrinkwrap.json, and publish your package.
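
Concretely, adding a hypothetical new dependency D might look like this (assuming your package’s tests run via “npm test”):

$ npm install
$ npm install D@1.0.2
$ vi package.json          # record the new dependency
$ npm test
$ npm shrinkwrap
$ git add package.json npm-shrinkwrap.json
$ git commit -m "add dependency on D@1.0.2"
$ npm publish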

You can still use npm outdated(1) to view which dependencies have newer versions available.

For more details, check out the full docs on npm shrinkwrap, from which much of the above is taken.

Why not just check node_modules into git?

One previously proposed solution is to “npm install” your dependencies during development and commit the results into source control. Then you deploy your app from a specific git SHA knowing you’ve got exactly the same bits that you tested in development. This does address the problem, but it has its own issues: for one, binaries are tricky because you need to “npm install” them to get their sources, but this builds the [system-dependent] binary too. You can avoid checking in the binaries and use “npm rebuild” at build time, but we’ve had a lot of difficulty trying to do this.[2] At best, this is second-class treatment for binary modules, which are critical for many important types of Node applications.[3]

Besides the issues with binary modules, this approach just felt wrong to many of us. There’s a reason we don’t check binaries into source control, and it’s not just because they’re platform-dependent. (After all, we could build and check in binaries for all supported platforms and operating systems.) It’s because that approach is error-prone and redundant: error-prone because it introduces a new human failure mode where someone checks in a source change but doesn’t regenerate all the binaries, and redundant because the binaries can always be built from the sources alone. An important principle of software version control is that you don’t check in files derived directly from other files by a simple transformation.[4] Instead, you check in the original sources and automate the transformations via the build process.

Dependencies are just like binaries in this regard: they’re files derived from a simple transformation of something else that is (or could easily be) already available: the name and version of the dependency. Checking them in has all the same problems as checking in binaries: people could update package.json without updating the checked-in module (or vice versa). Besides that, adding new dependencies has to be done by hand, introducing more opportunities for error (checking in the wrong files, not checking in certain files, inadvertently changing files, and so on). Our feeling was: why check in this whole dependency tree (and create a mess for binary add-ons) when we could just check in the package name and version and have the build process do the rest?

Finally, the approach of checking in node_modules doesn’t really scale for us. We’ve got at least a dozen repos that will use restify, and it doesn’t make sense to check that in everywhere when we could instead just specify which version each one is using. There’s another principle at work here, which is separation of concerns: each repo specifies what it needs, while the build process figures out where to get it.

What if an author republishes an existing version of a package?

We’re not suggesting deploying a shrinkwrapped package directly and running “npm install” to install from shrinkwrap in production. We already have a build process to deal with binary modules and other automateable tasks. That’s where we do the “npm install”. We tar up the result and distribute the tarball. Since we test each build before shipping, we won’t deploy something we didn’t test.

It’s still possible to pick up newly published versions of existing packages at build time. We assume force publish is not that common in the first place, let alone force publish that breaks compatibility. If you’re worried about this, you can use git SHAs in the shrinkwrap or even consider maintaining a mirror of the part of the npm registry that you use and require human confirmation before mirroring unpublishes.
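
For the git-SHA approach: npm accepts git URLs with a commit-ish wherever a version is allowed, so a dependency can be pinned to an immutable commit. A hypothetical example (made-up URL and SHA):

{
    "dependencies": {
        "B": "git://github.com/example/B.git#6ff92d2"
    }
}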

Final thoughts

Of course, the details of each use case matter a lot, and the world doesn’t have to pick just one solution. If you like checking in node_modules, you should keep doing that. We’ve chosen the shrinkwrap route because that works better for us.

It’s not exactly news that Joyent is heavy on Node. Node is the heart of our SmartDataCenter (SDC) product, whose public-facing web portal, public API, Cloud Analytics, provisioning, billing, heartbeating, and other services are all implemented in Node. That’s why it’s so important to us to have robust components (like logging and REST) and tools for understanding production failures postmortem, profiling Node apps in production, and now managing Node dependencies. Again, we’re interested to hear feedback from others using these tools.


[1] Much of this section is taken directly from the “npm shrinkwrap” documentation.

[2] We’ve had a lot of trouble with checking in node_modules with binary dependencies. The first problem is figuring out exactly which files not to check in (*.o, *.node, *.dynlib, *.so, *.a, …). When Mark went to apply this to one of our internal services, the “npm rebuild” step blew away half of the dependency tree because it ran “make clean”, which in the ldapjs dependency brings the repo to a clean slate by blowing away its dependencies. Later, a new (but highly experienced) engineer on our team was tasked with fixing a bug in our Node-based DHCP server, and the fix required a new dependency. He tried checking in node_modules, which added 190,000 lines of code to a repo that was previously a few hundred lines. And despite doing everything he could think of to do this correctly and test it properly, the change broke the build because of the binary modules. Having tried this approach a few times now, it appears quite difficult to get right, and the lack of documentation and real-world examples suggests that others either aren’t using binary modules (which we know isn’t true) or haven’t had much better luck with this approach.

[3] Like a good Node-based distributed system, our architecture uses lots of small HTTP servers. Each of these serves a REST API using restify. restify uses the binary module node-dtrace-provider, which gives each of our services deep DTrace-based observability for free. So literally almost all of our components are or will soon be depending on a binary add-on. Additionally, the foundation of Cloud Analytics is a pair of binary modules that extract data from DTrace and kstat. So this isn’t a corner case for us, and we don’t believe we’re exceptional in this regard. The popular hiredis package for interfacing with redis from Node is also a binary module.

[4] Note that I said this is an important principle for software version control, not using git in general. People use git for lots of things where checking in binaries and other derived files is probably fine. Also, I’m not interested in proselytizing; if you want to do this for software version control too, go ahead. But don’t do it out of ignorance of existing successful software engineering practices.

Playing with Node/V8 postmortem debugging

2016 Update: The commands and output in this post are pretty dated now, but you can find up-to-date docs (including a tutorial) in the mdb_v8 user guide.

“Post Mortem” by C. MacLaurin

Several weeks ago I posted about postmortem debugging for Node.js, a critical technique for understanding fatal software failure (and thereby maintaining software quality). Now that the underlying pieces are freely available[1], you can use the documentation below to start debugging your own Node programs. With these tools you can inspect the call stack and arbitrary JavaScript heap objects for both live programs and core dumps. Documentation is often dry reading, but I’ve worked to keep this document concise and relevant. You can read this start-to-finish in half an hour, and you’ll learn a bunch about V8 internals by exploring your own Node program. If you don’t care about this level of detail, just read my older post to get a sense of what this tool does. Shortly I’ll post an example of using this tool to debug a problem we weren’t able to debug with existing tools.

This tool works out of the box for Node.js on the latest SmartOS release. So if you’ve got a SmartOS live CD and any release of Node.js since 0.4.8, these tools will just work. If you’re running an older version of SmartOS, another illumos-based system, or a Joyent SmartMachine that hasn’t yet been upgraded to the latest SmartOS, you can just download the binaries that you need. The instructions below cover both of these cases.


Introduction

The V8 mdb dmod facilitates inspection of V8-level and JS-level state from MDB, the illumos modular debugger. If you’re not familiar with MDB, you may want to skim the MDB man page. The syntax may take some getting used to, but you can probably pick up enough to get started from the examples below.

The various dcmds and walkers are prefixed with “v8” or “js” depending on which level they operate on.

The JavaScript-level dcmds are:

  walk jsframe              - walk V8 JavaScript stack frames
  dcmd jsframe              - summarize a JavaScript stack frame
  dcmd jsstack              - print a JavaScript stacktrace
  dcmd jsprint              - print a JavaScript object

The V8-level dcmds are:

  dcmd v8array              - print elements of a V8 FixedArray
  dcmd v8classes            - list known V8 heap object C++ classes
  dcmd v8load               - load canned config for a specific V8 version
  dcmd v8print              - print a V8 heap object
  dcmd v8str                - print the contents of a V8 string
  dcmd v8type               - print the type of a V8 heap object
  dcmd v8types              - list known V8 heap object types

Getting started

Step 1: Run MDB

First, run your program. Then attach with MDB (using mdb -p <pid>) or take a core dump with gcore(1) and run MDB on the core file.

Step 2: Load the dmod

The dmod is not loaded by default. You must explicitly load it with:

> ::load v8

If this fails, you’re running an older version of SmartOS and you’ll need to download the v8.so binary and use ::load /path/to/v8.so. (You have to use ::load ./v8.so even if the file is in the same directory so that MDB knows you’re giving it a relative path; by default, MDB searches the dynamic library path, which probably doesn’t include “.”.)

Step 3: Configure the dmod

The dmod requires knowledge of internal V8 structures that change between versions of V8. Some builds of V8 include the required information directly in the binary, in which case the dmod will auto-configure itself and you’ll see this when you initially load the dmod:

> ::load v8
Loaded V8 support.

If you don’t see that, your version of Node doesn’t have support built-in, but you can load a pre-canned configuration using ::v8load, which supports arguments for “node-0.4” (for v0.4.8 and later) and “node-0.6.5” (which works on v0.6.5 and v0.6.6, and possibly later versions).

Summary

To summarize how you’d start debugging a core file from node 0.6.5:

$ mdb core.node.jpc
Loading modules: [ libc.so.1 ld.so.1 ]
> ::load v8
> ::v8load node-0.6.5
V8 dmod configured based on node-0.6.5

or, if you downloaded the binaries separately above:

$ ./mdb core.node.jpc
Loading modules: [ libc.so.1 ld.so.1 ]
> ::load ./v8.so
> ::v8load node-0.6.5
V8 dmod configured based on node-0.6.5

Inspecting JavaScript-level state

::walk jsframe

Iterates stack frames and prints out the frame pointers. Starts with %ebp, and then follows frame pointers until it finds something that doesn’t look like a V8 JavaScript stack frame. This is very much like MDB’s $C macro but only prints the frame pointers. To get more details, pipe this to ::jsframe.

Note: libv8 is typically compiled with -fomit-frame-pointer, so if mdb catches “node” while it’s somewhere inside of V8, you may not be able to get a stacktrace at all.
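
For example, piping the walker’s output through ::jsframe gives a one-line summary of each frame (these frame pointers come from the example later in this document):

> ::walk jsframe | ::jsframe
8046a9c <anonymous> (as exports.bucketize) at /opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/node_modules/.npm/heatmap/0.0.1/package/lib/heatmap.js position 7838
8046af8 caAggrValueHeatmapImage at /opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/lib/ca/ca-agg.js position 48960
...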

::jsframe [-v]

Given a frame pointer, prints out a summary of the stack frame. With no arguments, prints a one-line summary for use in a stacktrace that includes the frame pointer, the name of the function (which may be inferred), the script where it was defined, and either the line number or character position in the file where the function was defined. The line number information will only be available if V8 has computed that information, which it does on a per-script basis as needed.

Example 1: anonymous function, invoked with a name, with line number information present:

> 8046df0::jsframe
8046df0 <anonymous> (as EventEmitter.emit) at events.js line 38

Example 2: named function with no line number information:

> 8046af8::jsframe
8046af8 caAggrValueHeatmapImage at /opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/lib/ca/ca-agg.js position 48960

Example 3: anonymous function invoked with no name and with no line number information:

> 8047500::jsframe
8047500 <anonymous> (as <anon>) at /home/dap/node-stack/examples/loop-anon.js line 15

With -v, prints out a best guess at function arguments. This uses the “length” property of the Function object to determine how many arguments the function consumes and then prints them from the stack. For example:

> 8046a9c::jsframe -v
8046a9c <anonymous> (as exports.bucketize)
    func: fc435fcd
    file: /opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/node_modules/.npm/heatmap/0.0.1/package/lib/heatmap.js
    posn: position 7838
    arg1: fc070709 (JSArray)
    arg2: fc070719 (JSObject)

You can use ::jsprint to print these objects.

Note: there are several types of non-JavaScript frames that ::jsframe recognizes:

  • InternalFrame: represents a C++ frame
  • EntryFrame: represents a transition from C++ to JavaScript
  • ExitFrame: represents a transition from JavaScript to C++
  • ArgumentsAdaptorFrame: used when a function is invoked with fewer (or more?) arguments than it’s defined to consume.
  • ConstructorFrame: indicates the next frame’s function is being invoked as a constructor

::jsstack [-v]

Equivalent to ::walk jsframe | ::jsframe [-v]. This iterates the known JS stack frames and prints out the summary or detailed view.

::jsprint [-a] [-d depth]

Given a JavaScript object of class Object, Array, Function, Number, Boolean, or special value (like null or undefined), print it out. Except for functions, the result should closely resemble the JavaScript syntax you would use to define the object inline. For example:

> fc070719::jsprint
{
    base: 1320886447,
    weighbyrange: false,
    height: 281,
    width: 624,
    nbuckets: 50,
    max: 11538462,
    nsamples: 30,
    hue: [ 21 ],
    linear: false,
    min: 11538462,
}

Note: Not all types of objects are supported, but we haven’t yet run into unsupported ones in practice, and the dmod should behave sanely if it does encounter one.

With -a, prints the address of the V8 heap object representing each JavaScript object so you can inspect it individually. For example:

> fc070719::jsprint -a
fc070719: {
    base: fbf4ec4d: 1320886447,
    weighbyrange: fe88e0ad: false,
    height: 232: 281,
    width: 4e0: 624,
    nbuckets: 64: 50,
    max: 160203c: 11538462,
    nsamples: 3c: 30,
    hue: fc043179: [ 2a: 21 ],
    linear: fe88e0ad: false,
    min: 160203c: 11538462,
}

Since many objects are circular (or just large), the -d option is provided to control how deep to recurse. The default recursion depth is 2.

Note: You may see expected special values like “undefined”, “null”, “true”, and “false”. There’s an additional special value called “hole” which represents holes in an array. For example:

> fc070709::jsprint
[
    [],    [],
...
    hole,
    hole,
    hole,
]

indicates that the array’s length has somehow been extended beyond the last actual element so the remaining elements are holes.

Inspecting V8-level state

Since not all information is contained directly in JavaScript state, it may be useful to inspect V8-level heap objects directly. To make sense of this, you must understand a few basics about the V8 heap. The details contained here are obviously subject to change at any time as V8 itself evolves.

Pointers in the heap are word-aligned, but the low-order bits are used to encode some information about the item. Specifically, most values are one of two kinds: SMIs (small integers) and HeapObjects.

  • If the low-order bit is 0, then the “pointer” actually represents an SMI whose integer value is represented using the upper 31 bits. For example, the “pointer” value 0x36de actually denotes the integer 7023 (0x36de>>1). There’s no object in memory corresponding to this number; the pointer word represents the value directly.
  • If the low-order bit is 1, then the pointer refers to a HeapObject. This is an actual memory object allocated on the JS heap. The actual object begins at the address with the low-order 1 cleared, but for convenience the dcmds take care of this so you do not need to subtract 1 explicitly. For example, 0xfe88e05d represents an object that actually starts at fe88e05c, but you’d still use fe88e05d as the argument to the various dcmds that inspect V8 state.
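
Here’s a tiny JavaScript sketch of this decoding, just to make the arithmetic concrete (illustrative only, not how the dmod is actually implemented):

function decode(word)
{
	if ((word & 1) === 0) {
		/* SMI: the integer value lives in the upper 31 bits. */
		return ({ type: 'SMI', value: word >> 1 });
	}

	/* HeapObject: the object starts at the address with the tag cleared. */
	return ({ type: 'HeapObject',
	    address: ((word & ~1) >>> 0).toString(16) });
}

console.log(decode(0x36de));     /* { type: 'SMI', value: 7023 } */
console.log(decode(0xfe88e05d)); /* { type: 'HeapObject', address: 'fe88e05c' } */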

Heap Object type information

Various C++ classes are used to represent objects of the heap, including JSFunction, JSObject, JSArray, Oddball (for undefined, null, true, and false), HeapNumber (for non-SMI numbers), and various types of String. You can list all of the ones that the dmod knows about with ::v8classes.

There’s a related but separate set of type values (a C++ enum): FUNCTION_TYPE, JSOBJECT_TYPE, JSARRAY_TYPE, ODDBALL_TYPE, etc. For strings, the type field also encodes whether the string uses ASCII or two-byte encoding and whether it’s represented as a simple sequence of characters or a ConsString. You can list the type values that the dmod knows about with ::v8types.

All of the various heap object classes are subclasses of HeapObject. HeapObject defines a field called “map” (of class Map, itself a HeapObject) which describes the layout of the object in detail. Most importantly, the Map includes the type of the object.

Using the type information in a HeapObject’s map, and a separate built-in mapping of type values to C++ classes, the dmod can take an arbitrary pointer and tell you what C++ class it must be. This is what the ::v8type dcmd does:

> fc070719::v8type
0xfc070719: JSObject

> fbf4ec4d::v8type
0xfbf4ec4d: HeapNumber

> fc043179::v8type
0xfc043179: JSArray

> fe88e0ad::v8type
0xfe88e0ad: Oddball: "false"

> 160203c::v8type
0x160203c: SMI: value = 11538462

(The above examples are taken from the JSObject printed with ::jsprint -a in the example above.)

The dmod also knows the layout of these C++ classes. You can use ::v8print to print out an object as an instance of its C++ class (which is automatically inferred as described above):

> fc070719::v8print
fc070719 JSObject {
    fc070719 HeapObject < Object < MaybeObject  {
        fc070718 map = fe85e7e1 (Map)
    }
    fc07071c properties = fc07b9c9 (FixedArray)
    fc070720 elements = fe88e021 (FixedArray)
}

> fc043179::v8print
fc043179 JSArray {
    fc043179 JSObject {
        fc043179 HeapObject < Object < MaybeObject  {
            fc043178 map = fe84caa1 (Map)
        }
        fc04317c properties = fe88e021 (FixedArray)
        fc043180 elements = fc0480f1 (FixedArray)
    }
    fc043184 length = 2 (SMI: value = 1)
}

A few notes about this output:

  • The output is designed to look like MDB’s “::print -at”.
  • The type printed for each field is the dynamically-detected type of the object pointed to by the field, not the type declared in the source for that field. This is useful because many fields are declared with type “Object” in the source but are actually a more specific type.
  • ::v8print prints out the fields from base classes as though these were objects contained within the subclass object (which is essentially how it actually works). In this case, a JSArray is a JSObject, which is a HeapObject. ::v8print prints the base class fields first (omitting the empty struct for classes with no fields, as in Object and MaybeObject above).
  • Note that for the JSArray object at address fc043179, the base class JSObject is necessarily at the same address, and so on until we get to the first field, which is actually at fc043178, 1 less than the object pointer, for the reasons described above.

Note: Some fields may be missing because the dmod does not know about every field in every class.

Type-specific dcmds

There are two additional dcmds which are useful for printing out instances of certain specific V8 classes. You can always use ::v8print, but these commands give additional information.

::v8str [-v]

Prints the contents of any ASCII V8 string, regardless of representation. There are basically three representations:

  • SeqString: characters stored sequentially in memory (like a conventional C string)
  • ConsString: stores two pointers to Strings that are semantically concatenated
  • ExternalString: stores a pointer to a string outside the V8 heap. These are used very rarely. The representation depends on the type of the pointed-to value. The dmod assumes these are Node strings.

With -v, this dcmd shows what it did. For example:

> fc3e2081::v8str
exports.bucketize

> fc3e2081::v8str -v
ConsString
    ptr1: fe8a7ba1
    ptr2: fc417a75
    SeqAsciiString
        length: 7, will read: 7
        value: "exports"
    SeqAsciiString
        length: 10, will read: 10
        value: ".bucketize"
exports.bucketize

In this case, the original pointer is a ConsString. Each pointer in this top-level ConsString is itself a SeqString. In general, these pointers may themselves be ConsStrings that must be followed recursively.
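
The recursion is simple enough to sketch in JavaScript (illustrative pseudo-code, not the dmod’s actual implementation; ExternalStrings are omitted):

function flattenString(str)
{
	if (str.type === 'SeqAsciiString')
		return (str.value);

	/* ConsString: semantically the concatenation of its two halves. */
	return (flattenString(str.ptr1) + flattenString(str.ptr2));
}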

::v8array

Prints the contents of a V8 array. This is mostly useful for exploring and debugging the dmod. For example, here’s how you can get the contents of a JSArray:

> fc043179::v8print
fc043179 JSArray {
    fc043179 JSObject {
        fc043179 HeapObject < Object < MaybeObject  {
            fc043178 map = fe84caa1 (Map)
        }
        fc04317c properties = fe88e021 (FixedArray)
        fc043180 elements = fc0480f1 (FixedArray)
    }
    fc043184 length = 2 (SMI: value = 1)
}

We see that this is a JSArray with length 1 (the raw “length” field is 2, which as an SMI denotes the value 1). We can use ::v8array to print out the elements of the “elements” FixedArray:

> fc0480f1::v8array
2a

and then we can print out the one element’s value:

> 2a::v8print
SMI: value = 21

We can check our work with ::jsprint, which does all this automatically:

> fc043179::jsprint
[ 21 ]

Putting it all together

To reinforce all of this, we’ll get the details about a particular stack frame manually. First, walk the stack:

> ::walk jsframe
8046a9c
8046af8
...

We’ll take the top frame. Use ::jsframe -v to get a pointer to the frame’s “func” object:

> 8046a9c::jsframe -v
8046a9c <anonymous> (as exports.bucketize)
    func: fc435fcd
    file: /opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/node_modules/.npm/heatmap/0.0.1/package/lib/heatmap.js
    posn: position 7838
    arg1: fc070709 (JSArray)
    arg2: fc070719 (JSObject)

We’ve cheated because we have the function and file names already, but we’ll show how to get this from just the function pointer, which is stored at a fixed offset of -0x8 from the frame pointer. We only used ::jsframe to get the function pointer.
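
In other words, we could have read the function pointer directly with something like this (mdb’s default radix is hexadecimal):

> 8046a9c-8/X
0x8046a94:      fc435fcd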

Print out the function:

> fc435fcd::v8print
fc435fcd JSFunction {
    fc435fcd JSObject {
        fc435fcd HeapObject < Object < MaybeObject  {
            fc435fcc map = fe84d8a1 (Map)
        }
        fc435fd0 properties = fe88e021 (FixedArray)
        fc435fd4 elements = fe88e021 (FixedArray)
    }
    fc435fdc prototype_or_initial_map = fe88e0cd (Oddball: "hole")
    fc435fe0 shared = fc4348c1 (SharedFunctionInfo)
    fc435fe8 literals = fc436021 (FixedArray)
    fc435fec next_function_link = fe88e05d (Oddball: "undefined")
}

The interesting details are all in the associated SharedFunctionInfo:

> fc4348c1::v8print
fc4348c1 SharedFunctionInfo {
    fc4348c1 HeapObject < Object < MaybeObject  {
        fc4348c0 map = fe84c681 (Map)
    }
    fc4348c4 name = fe88e2f1 (SeqAsciiString: "")
    fc4348d0 construct_stub = fe86ea21 (Code)
    fc4348d4 instance_class_name = fe88e199 (SeqAsciiString: "Object")
    fc4348d8 function_data = fe88e05d (Oddball: "undefined")
    fc4348dc script = fc434581 (Script)
    fc4348e0 debug_info = fe88e05d (Oddball: "undefined")
    fc4348e4 inferred_name = fc3e2081 (ConsString: "exports.bucketize")
    fc4348e8 initial_map = fe88e05d (Oddball: "undefined")
    fc4348ec this_property_assignments = fe88e021 (FixedArray)
    fc4348f4 length = 4 (SMI: value = 2)
    fc4348f8 formal_parameter_count = 4 (SMI: value = 2)
    fc4348fc expected_nof_properties = 14 (SMI: value = 10)
    fc434900 num_literals = 8 (SMI: value = 4)
    fc434904 start_position_and_type = 7ac0 (SMI: value = 15712)
    fc434908 end_position = 3968 (SMI: value = 7348)
    fc43490c function_token_position = 1e9e (SMI: value = 3919)
    fc434910 compiler_hints = 0 (SMI: value = 0)
    fc434914 this_property_assignments_count = 0 (SMI: value = 0)
    fc434918 opt_count = 0 (SMI: value = 0)
}

We can already see that the name is “”, indicating an anonymous function, and the inferred name is “exports.bucketize”. If ::v8print hadn’t printed the function name, we could get it with ::v8str:

> fc3e2081::v8str
exports.bucketize

To get the script name, follow the “script” field:

> fc434581::v8print
fc434581 Script {
    fc434581 Struct {
        fc434581 HeapObject < Object < MaybeObject  {
            fc434580 map = fe84c581 (Map)
        }
    }
    fc434584 source = fc130021 (
    SeqAsciiString: "(function (exports, require, module, __filename, __dirname) { /*
     * Copyright 2010 Joyent, Inc.  All rights reserved.
     * Use is subject to license terms.
     */

    var assert = require('assert');
    var libpng = require('png');
    var sys = require()
    fc434588 name = fc4120bd (
    SeqAsciiString: "/opt/local/lib/node/.npm/cabase/1.0.3vrelease-20110901-20110901-3-gc693a90-dirty/package/node_modules/.npm/heatmap/0.0.1/package/lib/heatmap.js")
    fc43458c line_offset = 0 (SMI: value = 0)
    fc434590 column_offset = 0 (SMI: value = 0)
    fc434594 data = fe88e05d (Oddball: "undefined")
    fc434598 context_data = fe88e05d (Oddball: "undefined")
    fc43459c wrapper = fc400bf1 (<unknown>)
    fc4345a0 type = 4 (SMI: value = 2)
    fc4345a4 compilation_type = 0 (SMI: value = 0)
    fc4345a8 line_ends = fe88e05d (Oddball: "undefined")
    fc4345ac id = 96 (SMI: value = 75)
    fc4345b0 eval_from_shared = fe88e05d (Oddball: "undefined")
    fc4345b4 eval_from_instructions_offset = 0 (SMI: value = 0)
}

See the “name” field. (Interestingly, the “Script” object includes as a member the source code of the script itself, but we only print out the first 255 characters or so.)

You can now see how the JavaScript-level dcmds work based on the lower-level V8 facilities.

Finally, we can look back at the ::jsframe output to get the argument pointers and then print out the corresponding values. We saw that arg2 was “fc070719”, a JSObject, and we can print it out with ::jsprint:

> fc070719::jsprint
{
    base: 1320886447,
    weighbyrange: false,
    height: 281,
    width: 624,
    nbuckets: 50,
    max: 11538462,
    nsamples: 30,
    hue: [ 21 ],
    linear: false,
    min: 11538462,
}

The way the dmod extracts the object’s properties is surprisingly complex and beyond the scope of this document.


I hope this documentation is helpful for people looking to debug Node programs. Expect more information and examples in the coming weeks.

[1] For the curious, the postmortem support for V8 is contained in two illumos commits (1, 2) plus a V8 patch that has not yet landed. The illumos commits include support for the versions of V8 from Node.js 0.4.8 through 0.6.6 (and possibly later) so that you don’t actually need the V8 patch yet.

Where does your Node program spend its time?


Photo by Julian Lim (flickr)

Performance analysis is one of the most difficult challenges in building production software. If a slow application isn’t spending much time on CPU, it could be waiting on filesystem (disk) I/O, network traffic, garbage collection, or many other things. We built the Cloud Analytics tool to help administrators and developers quickly identify these sources of latency in production software. When the app is actually spending its time on CPU, the next step is figuring out what it’s doing: what functions it’s executing and who’s calling them.

One approach is to sample the program at, say, 1000 times per second, and examine the 5-10 most frequently seen stacktraces. DTrace lets you do this with the profile provider, whose output might include something like this for a C program:

    libc.so.1`mutex_trylock_adaptive+0x137              ← stacktrace
    libc.so.1`mutex_lock_impl+0xdc
    libc.so.1`mutex_lock+0x10
    mysqld`_Z10open_tableP3THDP10TABLE_LISTP11st_mem_rootPbj+0x4b1
    mysqld`_Z11open_tablesP3THDPP10TABLE_LISTPjj+0x205
    mysqld`_Z20open_and_lock_tablesP3THDP10TABLE_LIST+0x1b
    mysqld`_Z21mysql_execute_commandP3THD+0x416
    mysqld`_Z11mysql_parseP3THDPKcjPS2_+0xf7
    mysqld`_Z16dispatch_command19enum_server_commandP3THDPcj+0x6c9
    mysqld`_Z10do_commandP3THD+0xca
    mysqld`handle_one_connection+0x343
    libc.so.1`_thr_setup+0x70
    libc.so.1`_lwp_start
                7                                       ← count

... (more stacktraces and associated counts)

Brendan has a great example of using this to find a 2000x win in GNU grep. Knowing that many samples shared a particular stacktrace like this one often points you right to the performance problem. But for dynamic environments like Python or Node.js, this information is very hard to obtain. Although the V8 VM supports profiling, it has to be enabled when you start the program, and this adds overhead to the entire execution of the program. That’s a non-starter for most production deployments. And although you can use DTrace to profile such programs, by default it sees only the native stack frames. In environments like Python, this corresponds to interpreter frames, which are fairly useless:

    libpython2.6.so.1.0`PyObject_Free+0xe
    libpython2.6.so.1.0`PyObject_Realloc+0xcb
    libpython2.6.so.1.0`_PyString_Resize+0x7f
    libpython2.6.so.1.0`get_line+0x333
    libpython2.6.so.1.0`file_readline+0xa8
    libpython2.6.so.1.0`PyCFunction_Call+0x63
    libpython2.6.so.1.0`PyEval_EvalFrameEx+0x4099
    libpython2.6.so.1.0`PyEval_EvalCodeEx+0x7d8
    libpython2.6.so.1.0`PyEval_EvalCode+0x2f
    libpython2.6.so.1.0`run_mod+0x41
    libpython2.6.so.1.0`PyRun_FileExFlags+0x7e
    libpython2.6.so.1.0`PyRun_SimpleFileExFlags+0x32c
    libpython2.6.so.1.0`PyRun_AnyFileExFlags+0x69
    libpython2.6.so.1.0`Py_Main+0xa70
    python2.6`main+0x28
    python2.6`_start+0x80

You can’t tell what Python-level functions are being called here. In Node.js things are even worse because the VM compiles JavaScript functions and invokes them directly on the heap, so you often don’t see any symbols at all:

    0xfc618bc0
    0xfc61bd62
    0xfe870841
    0xfc61c1f3
    0xfc617685
    0xfe870841
    0xfc6154d7
    0xfe870e1a
    0xfe8700a6
    node-dtrace`_ZN2v88internalL6InvokeEbNS0_6HandleINS0_10JSFunctionEEENS1_INS0_6ObjectEEEiPPPS4_Pb+0x1fc
    node-dtrace`0x85e6d50

However, through a mechanism called a ustack helper, DTrace can be taught to understand V8 stack frames, giving you a stacktrace that looks more like this (colors and labels added for clarity):

    libc.so.1`gettimeofday+0x7                                  ←  libc gettimeofday call
    Date at  position                                           ←  JavaScript "Date" function
    << adaptor >>
    << constructor >>                                           ← (invoked as a constructor)
    (anon) as exports.active at timers.js position 7590
    (anon) as Socket._write at net.js position 21336
    (anon) as Socket.write at net.js position 19714
    << adaptor >>
    (anon) as OutgoingMessage._writeRaw at http.js position 21526
    (anon) as OutgoingMessage._send at http.js position 20434
    << adaptor >>
    (anon) as OutgoingMessage.end at http.js position 35062     ← re-entering node library
    << adaptor >>
    (anon) as a at /home/dap/work-server.js line 25
    handle at /home/dap/work-server.js line 13                  ← invoking our request handler
    (anon) as EventEmitter.emit at events.js position 3532
    << adaptor >>
    (anon) as parser.onIncoming at http.js position 80542
    (anon) as parser.onHeadersComplete at http.js position 4904 ← parsing HTTP headers
    << internal >>
    << entry >>                                                 ← entering JavaScript
    node-dtrace`_ZN2v88internalL6InvokeEbNS0_6HandleINS0_10JSFunctionEEENS1_INS0_6ObjectEEEiPPPS4_Pb+0x1fc
    node-dtrace`0x85e6d50

The helper is built into the “node” binary and translates each stack frame that it understands into a text description that typically includes the function name, the inferred name (for anonymous functions), and the file and position where the function is defined. Unrecognized stack frames are left alone, so you can see native C/C++ frames mixed with JavaScript frames. If line numbers are available, the position includes the line number where the function was defined; if not, only the character position in the file is printed. V8 only has line number information available for files that have previously been used in an exception stack trace.

Reading from the bottom up, we see the initial entry into JavaScript from node, then some frames in the node library for processing HTTP parser events, then the call into our program to handle the HTTP request, a call back to the node library to finish sending the response, then native code invoked by the Date constructor invoked as part of sending the response.

There are a few frames that you might be surprised to see if you’re not familiar with V8 internals:

  • entry: frame transitioning from C++ to JavaScript
  • exit: frame transitioning from JavaScript to C++
  • construct: the above frame’s function is being invoked as a constructor
  • adaptor: adaptor used when a function is invoked with fewer arguments than it expects
  • internal: internal V8 frame (does not correspond to a JavaScript frame)

Now that we can have DTrace output JavaScript-friendly stacktraces, we can use the profile provider as described above to identify common stacktraces and thus find out where any Node program is spending its time.

Instructions

The DTrace ustack helper landed in the upcoming Node v0.6.7 and the tools for generating a flame graph are available on github. With this, you’ve got all the pieces to try this out on your own programs, though there are a few caveats to be aware of for now.

  1. First, you must be running on an illumos- (or Solaris-)based system. ustack helpers don’t work at all on Mac OS and appear to be broken on FreeBSD. If you don’t have such a system and want to play around, try a SmartOS live CD (it’s really easy).
  2. You must also have access to modify kernel tunables, as the default limits are too low to support the V8 helper. (We’ll be rolling these tunable changes across the Joyent cloud so that SmartMachine customers can skip this step, but that hasn’t been completed yet.) Use MDB or /etc/system to set dtrace_helper_actions_max from 32 (the default) to at least 128:
    # echo "dtrace_helper_actions_max/W 0t128" | mdb -kw
    dtrace_helper_actions_max:      0x20            =       0x80
    

    (update: modern versions of SmartOS need no tuning so you can skip this step.)

  3. Build Node with ustack helper support. Run ./configure --with-dtrace and you’ll get support for the ustack helper as well as additional DTrace probes.
  4. Run your program as normal. I’ll use this sample program:
    /*
     * This script demonstrates a simple stack trace involving an anonymous
     * function, then goes into an infinite loop so you can more easily catch it
     * with DTrace.  One caveat: we grab a stack trace at the start to force V8 to
     * compute line number information for this script.  If we didn't have this,
     * DTrace would print out position-in-file information instead of line numbers.
     */
    new Error().stack;
    
    function main() { func1(); }
    function func1() { func2(); }
    
    function func2()
    {
    	(function () {
    		for (;;)
    			;
    	})();
    }
    
    main();
    

    which I run like this:

    $ node loop-anon.js &
    [1] 1791
    
  5. Now we’ll use this DTrace invocation to profile our program for 60 seconds and aggregate on stacktrace:
    # dtrace -n 'profile-997/execname == "node" && arg1/ {
        @[jstack(40, 2000)] = count(); } tick-60s { exit(0); }' > out.stacks
    

    The script automatically stops after 60 seconds. At this point, the output file is already pretty useful, showing JavaScript stacktraces sorted by frequency count. We can also use this file to generate flame graphs (below).

    If you’re not familiar with DTrace, this invocation has a few important pieces:

    • profile-997: Samples on all CPUs at 997Hz. We use 997 instead of the more obvious 1000 to avoid oversampling periodic activity.
    • execname == "node": the probe’s body only fires when the currently running program is “node” to avoid sampling other processes on the system. You could omit this and sample everything.
    • arg1: arg1 for profile probes is the current user-level instruction pointer, so testing against zero here ensures that we only fire the probe body for samples that interrupt userland code.
    • @[jstack(40, 2000)] = count();: the body of the probe populates an aggregation that counts the number of times each userland stacktrace has been seen. We use jstack to collect the userland stack trace. We gather 40 frames and give the ustack helper 2000 bytes of buffer for translating the stack frames.
    • tick-60s { exit(0); }: causes the DTrace invocation to exit after 60 seconds
    • > out.stacks: saves the output to out.stacks rather than stdout.
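
If you find yourself tweaking this invocation, it may be easier to maintain as a standalone D script. Here’s an equivalent sketch (run as root, redirecting output as before):

    #!/usr/sbin/dtrace -s

    /*
     * Sample JavaScript-annotated stacktraces from all "node" processes at
     * 997Hz, then stop after 60 seconds (equivalent to the one-liner above).
     */

    profile-997
    /execname == "node" && arg1/
    {
            @[jstack(40, 2000)] = count();
    }

    tick-60s
    {
            exit(0);
    }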

If you run through these steps with the above example program, you should see all samples with the same or nearly identical stacktraces, since the program quickly enters a tight loop. For a more interesting example, I ran this on one of our Cloud Analytics services. From the output you can see that not all of the stacktraces are complete; see below for details on that.

Scaling up

We believe this DTrace-based profiling support for Node.js will be a huge leap for understanding the performance of individual Node programs in production. We’re also exploring ways of scaling this up to many instances of the same program, or cases where there’s no single hot stacktrace but rather one long-running function that calls many different functions during its execution. For such cases, we’ve been looking at flame graphs to visualize overlapping stacktraces. Here’s a flame graph based on the above output that shows where the Cloud Analytics aggregator (itself a Node.js program) spends its time:

Try clicking that graph for an interactive SVG where you can inspect individual frames.

From here we can see that most time is spent generating heatmaps (not surprisingly, if you saw our OSCON or Surge presentations that talk about our architecture). We can also tell how much time is spent bucketizing the data in preparation for rendering the heatmap vs. generating the PNG image itself.

You can easily generate flame graphs for your own programs. Collect data as shown above, then use the stackcollapse and flamegraph tools to generate the graph.
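
With Brendan’s original tools, that looks something like this (script names as in the FlameGraph repo, assumed to be cloned into the current directory):

$ ./stackcollapse.pl out.stacks > out.folded
$ ./flamegraph.pl out.folded > graph.svg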

Conclusions

With the V8 ustack helper, you can profile a running Node program to see what functions it’s spending time in. Being DTrace-based, you can switch this on and off at will and do it safely in production with literally zero overhead when not enabled and minimal overhead when enabled. (There are also ustack helpers for Python and Java.)

There are still some cases that the ustack helper doesn’t handle well that lead to some artifacts in the output. For example, in the above aggregator flame graph, three of the frames called from caAggrValueHeatmapImage are actually the same function, but because the helper doesn’t resolve them, they appear as distinct hex addresses and so show up as three silos instead of one.

Another artifact appears to result from the fact that internal V8 stack frames don’t use frame pointers. This is a common optimization that destroys observability because you can’t collect a stacktrace without frame pointers. More investigation is needed to determine whether this is actually the problem and whether it would be a reasonable tradeoff to disable this optimization.

Nevertheless, for us this tool has been a floodlight in an otherwise pitch-black room. It’s unfortunate that ustack helpers don’t work on Mac OS or FreeBSD, but perhaps with the increased attention on performance analysis in dynamic environments like Node, those systems will address the underlying issues. At Joyent, we’ll be updating both the kernel and the node binaries that we ship to fully support DTrace-based profiling out of the box, and we’re exploring ways of incorporating this data into Cloud Analytics.

Node.js/V8 postmortem debugging

2016 Update: The commands and output in this post are pretty dated now, but you can find up-to-date docs (including a tutorial) in the mdb_v8 user guide.


Photo by Auntie P (flickr)

I recently wrote an article for ACM Queue discussing postmortem debugging for dynamic environments. I argued that while native environments for years have provided rich tools for understanding software failures postmortem, most popular dynamic environments have not developed similar facilities, and this gap is becoming more important as these environments grow popular for building complex core components of large distributed systems.

In building Cloud Analytics, we knew we needed a robust facility for postmortem debugging. The CA service comprises several components: a few global services that manage state, an agent on each server in the datacenter that collects performance data, and a cluster of data aggregators. Debugging distributed systems is always hard, and it’s even harder when the software is running on other people’s production systems.

We built a module called node-panic that dumps predefined program state in the event of a crash. It also has a tool for manually triggering a crash in the event of a hard-hang (e.g., infinite loop). These tools have been extremely valuable in helping us root-cause issues in both production and development environments. But they’re really just a first step, and as I explain in the article, this approach has several important limitations, mostly owing to the fact that node-panic is pure JavaScript and so isn’t baked into the VM itself and doesn’t have access to a lot of important program state.

A next step

More recently I’ve been playing around with building an MDB module for inspecting V8 state. (Recall that V8 is the blazingly fast JavaScript virtual machine in both Node.js and Chrome, and MDB is the illumos modular debugger, an extensible tool for analyzing both live programs and process core dumps.) With this approach, you can attach MDB to either a running Node program, a core file manually created using gcore(1), or a core file generated as a result of a Node crash (e.g., segfault).  Importantly, you don’t need a debug build of node for this so you can use it in production.
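
For example, taking a core dump of a running Node program and opening it with MDB looks something like this (the pid is from the example below):

$ gcore 400524
gcore: core.400524 dumped
$ mdb core.400524
Loading modules: [ libc.so.1 ld.so.1 ]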

Here’s a very simple example program:

dap@devel ~ $ cat -n node-stack/examples/loop-anon.js
     1	/*
     2	 * This script demonstrates a simple stack trace involving an anonymous
     3	 * function, then goes into an infinite loop so you can catch it with the
     4	 * debugger.  One caveat: we grab a stack trace at the start to force V8 to
     5	 * compute line number information for this script.  If we didn't have this, the
     6	 * debugger would only print out position-in-file information.
     7	 */
     8	new Error().stack;
     9
    10	function main() { func1(); }
    11	function func1() { func2(); }
    12
    13	function func2()
    14	{
    15		(function () {
    16			for (;;)
    17				;
    18		})();
    19	}
    20
    21	main();

Note the comment at the top of the script. V8 computes line number information for a given script lazily. If it has never done this for a particular script, the debugger module won’t be able to print out the line number.  This example grabs a stacktrace at the start of the script to make sure V8 has computed line numbers for it.  (There’s an example of the other case below.)

Now I’ll run this script with a stock node build and attach MDB:

dap@devel ~ $ node node-stack/examples/loop-anon.js &
[1] 400524

dap@devel ~ $ mdb -p 400524
Loading modules: [ ld.so.1 libc.so.1 ]
> ::status
debugging PID 400524 (32-bit)
file: /usr/bin/node
threading model: native threads
status: stopped by debugger

Now I’ll grab a JavaScript stack trace:

> ::load mdb_v8.so
> ::jsstack
804738c <anonymous> (as <anon>) at /home/dap/node-stack/examples/loop-anon.js line 15
80473a4 func2 at /home/dap/node-stack/examples/loop-anon.js line 13
80473bc func1 at /home/dap/node-stack/examples/loop-anon.js line 11
80473d4 main at /home/dap/node-stack/examples/loop-anon.js line 10
80473f0 <anonymous> (as <anon>) at /home/dap/node-stack/examples/loop-anon.js line 1
8047424 <InternalFrame>
804745c <anonymous> (as Module._compile) at module.js line 340
804747c <anonymous> (as Module._extensions) at module.js line 406
80474a0 <anonymous> (as Module.load) at module.js line 324
80474cc <anonymous> (as Module._load) at module.js line 257
80474ec <anonymous> (as Module.runMain) at module.js line 419
8047518 <anonymous> (as startup.processNextTick.process._tickCallback) at node.js line 120
8047534 <InternalFrame>
8047568 <EntryFrame>
fe84e240 <None>

If V8 hadn’t computed the line number information for our script, the stack trace would look more like this:

> ::jsstack
804738c <anonymous> (as <anon>) at /home/dap/node-stack/examples/loop-anon-noinfo.js position 814
80473a4 func2 at /home/dap/node-stack/examples/loop-anon-noinfo.js position 772
80473bc func1 at /home/dap/node-stack/examples/loop-anon-noinfo.js position 710
80473d4 main at /home/dap/node-stack/examples/loop-anon-noinfo.js position 652
80473f0 <anonymous> (as <anon>) at /home/dap/node-stack/examples/loop-anon-noinfo.js position 2
8047424 <InternalFrame>
804745c <anonymous> (as Module._compile) at module.js position 19476
804747c <anonymous> (as Module._extensions) at module.js position 23260
80474a0 <anonymous> (as Module.load) at module.js position 18526
80474cc <anonymous> (as Module._load) at module.js position 15200
80474ec <anonymous> (as Module.runMain) at module.js position 23916
8047518 <anonymous> (as startup.processNextTick.process._tickCallback) at node.js position 7224
8047534 <InternalFrame>
8047568 <EntryFrame>
fe84e240 <None>

Besides getting a stack trace, you can examine individual frames in more detail using the frame pointer:

> 80473a4::jsframe -v
frame pointer: 80473a4
    function: fd862b8d (JSFunction)
        shared function info: fc615229 (SharedFunctionInfo)
            function name: fc6941a9 (ascii SeqString: "func2")
            function token position: SMI: value = 515
            script: fc614f4d (Script)
                script name: fd85ec05 (ascii ConsString: "/home/dap/node-stack/examples/loop-anon.js")
                line endings: fd863605 (FixedArray)
            function token at: line 13

and you can inspect individual heap objects as well. Given a pointer to any JavaScript heap object, “::jsprint” will figure out what type it is and dump a representation of the corresponding C++ class inside the V8 VM:

> fd862b8d::jstype
0xfd862b8d: JSFunction

> fd862b8d::jsprint
fd862b8d JSFunction {
    fd862b8d JSObject {
        fd862b8d HeapObject < Object < MaybeObject  {
            fd862b8c map = fe82df41 (Map)
        }
        fd862b90 properties = fe86e021 (FixedArray)
        fd862b94 elements = fe86e021 (FixedArray)
    }
    fd862b9c prototype_or_initial_map = fe86e0cd (Oddball)
    fd862ba0 shared = fc615229 (SharedFunctionInfo)
    fd862ba8 literals = fe86e021 (FixedArray)
    fd862bac next_function_link = fe86e05d (Oddball "undefined")
}

This is still very much a prototype, but we think this could be very powerful. In the meantime, there are some issues to be solved with the existing implementation and several more problems to address.

Maintainability

The heap inspector requires pretty deep knowledge of the internal memory layout of the V8 heap. How can this be maintained in a reasonable way as V8 itself evolves?

For C programs shipped with illumos, we use CTF at build-time to encode type definitions in the binary itself. CTF is compact enough to be shipped in production binaries, so MDB understands all in-memory structures for these programs on all illumos systems. Application-specific debugger commands can be implemented in terms of CTF so that they don’t have to be built specifically for a particular application binary.

This approach is tougher for C++ programs, which have additional information like class hierarchies that can’t be expressed with CTF. But while V8 uses C++ classes (and an inheritance hierarchy) to represent different heap object types, it doesn’t use standard C++ member fields. Instead, each class defines static constants describing the offsets of its member fields, and macros are used to define accessors in terms of these constants. This actually makes it easier to build and maintain a C debugger module because it’s much easier to export these constants as part of the build process than to reverse-engineer how a C++ structure gets laid out in memory. If this proves to be the right approach, then we could save these constants into the binary using CTF and then build the debugger module independent of the V8 binary.

Completeness

Today, this module can dump a simple stack trace and provide basic inspection of heap objects. But there’s a lot more that would be useful:

  • global variables
  • arguments and local variables for each function on the stack
  • list of events pending on the event queue
  • list of function contexts to be invoked in the future (e.g., when pending I/O completes)

I’ve also only tested it on pretty simple examples so far. My examples have included JavaScript frames for named and anonymous functions, constructor frames, argument adaptor frames, and simple entry frames. I haven’t yet tried optimized frames, exit frames, or an exit/entry sequence.

This module has already helped us understand enough to write a basic DTrace ustack helper for Node programs, which allows similar stack traces to be gathered via DTrace while the program is running. In future blog posts I’ll talk more about the DTrace helper as well as what I’ve learned about V8, which I hope will be useful for others hacking around with it.