The Purely Functional Software
Deployment Model
Het puur functionele softwaredeploymentmodel
(with a summary in Dutch)
Doctoral dissertation for the degree of doctor at Utrecht University, under the authority of
the Rector Magnificus, Prof. dr. W. H. Gispen, in accordance with the decision of the Board
for the Conferral of Doctoral Degrees, to be defended in public on Wednesday 18 January
2006 at 12:45
Eelco Dolstra
born on 18 August 1978 in Wageningen
Prof. dr. S. Doaitse Swierstra, Universiteit Utrecht
Dr. Eelco Visser, Universiteit Utrecht
The work in this thesis has been carried out under the auspices of the research school IPA
(Institute for Programming research and Algorithmics). This thesis was made possible
with financial support from CIBIT|SERC ICT Adviseurs.
Cover illustration: Arthur Rackham (1867-1939), The Rhinegold & The Valkyrie (1910),
illustrations for Richard Wagner’s Der Ring des Nibelungen: the gods enter Walhalla as
the Rhinemaidens lament the loss of their gold.
ISBN 90-393-4130-3
This thesis could not have come about without the help of many individuals. Foremost I
wish to thank my supervisor and copromotor Eelco Visser. He was also my master’s thesis
supervisor, and I was quite happy that we were able to continue to work together on the
Variability/TraCE project. He shared my eventual interest in configuration management-
related issues, and package management in particular. His insights have improved this
research and this thesis at every point.
The Software Engineering Research Center (SERC), now known as CIBIT|SERC ICT
Adviseurs, funded my PhD position. Gert Florijn initiated the variability project and the
discussions with him were a valuable source of insights. The results of the present thesis
are probably not what any of us had expected at the start; but then again, the nice thing
about a term like “variability” is that it can take you in so many directions.
My promotor Doaitse Swierstra gave lots of good advice—but I ended up not imple-
menting his advice to keep this thesis short. I am grateful to the members of the reading
committee—Jörgen van den Berg, Sjaak Brinkkemper, Paul Klint, Alexander Wolf and
Andreas Zeller—for taking the time and effort to go through this book. Karl Trygve Kalle-
berg, Armijn Hemel, Arthur van Dam and Martin Bravenboer gave helpful comments.
Several people have made important contributions to Nix. Notorious workaholic Martin
Bravenboer in particular was an early adopter who contributed to almost every aspect of
the system. Armijn Hemel (“I am the itch programmers like to scratch”) contributed to
Nixpkgs and initiated the NixOS, which will surely conquer the world. René de Groot
coined the marketing slogan for NixOS—“Nix moet, alles kan!” Roy van den Broek has
replaced my hacky build farm supervisor with something much nicer (and Web 2.0 compli-
ant!). Eelco Visser, Rob Vermaas and Merijn de Jonge also contributed to Nixpkgs and the
build farm. In addition, it is hard to overestimate the importance of the work done by the
free and open source software community in providing a large body of non-trivial modular
components suitable for validation. Thanks!
The Software Technology group is a very pleasant work environment, and I thank
all my current and former colleagues for that. I especially thank my roommates Dave
Clarke (“The cause of the software crisis is software”), Frank Atanassow (who demanded
“pointable research”), Merijn de Jonge, Martin Bravenboer, Rob Vermaas and Stefan Hold-
ermans. Andres Löh, Bastiaan Heeren, Arthur Baars, Karina Olmos and Alexey Rodriguez
were more than just good colleagues. Daan Leijen insisted on doing things the right way.
The #klaplopers provided a nice work environment in virtual space—though whether IRC
is a medium that increases productivity remains an unanswered empirical question.
Arthur van Dam and Piet van Oostrum provided valuable assistance in wrestling with
TEX. Atze Dijkstra, Bastiaan Heeren and Andres Löh had helpful advice (and code!) for
the preparation of this thesis.
And finally I would like to thank my family—Mom, Dad, Jitske and Menno—for their
support and love through all these years.
Software deployment
The state of the art
The Nix deployment system
Outline of this thesis
Notational conventions
An Overview of Nix
The Nix store
Nix expressions
Package management
Store derivations
Deployment models
Transparent source/binary deployment
Deployment as Memory Management
What is a component?
The file system as memory
A pointer discipline
The Nix Expression Language
4.2.1. Lexical syntax
4.2.2. Context-free syntax
4.3. Semantics
4.3.1. Basic values
4.3.2. Compound values
4.3.3. Substitutions
4.3.4. Evaluation rules
The Extensional Model
Cryptographic hashes
The Nix store
5.2.1. File system objects
5.2.2. Store paths
5.2.3. Path validity and the closure invariant
Translating Nix expressions to store derivations
5.4.1. Fixed-output derivations
Building store derivations
5.5.1. The simple build algorithm
5.5.2. Distributed builds
5.5.3. Substitutes
5.5.4. The parallel build algorithm
Garbage collection
5.6.1. Garbage collection roots
5.6.2. Live paths
5.6.3. Stop-the-world garbage collection
5.6.4. Concurrent garbage collection
The Intensional Model
Local sharing
A content-addressable store
6.3.1. The hash invariant
6.3.2. Hash rewriting
6.4.1. Equivalence class collisions
6.4.2. Adding store objects
6.4.3. Building store derivations
6.4.4. Substitutes
Trust relations
Related work
Multiple outputs
Assumptions about components
Software Deployment
The Nix Packages collection
7.1.1. Principles
7.1.2. The standard environment
7.1.3. Ensuring purity
7.1.4. Supporting third-party binary components
7.1.5. Experience
User environments
Binary deployment
Deployment policies
Patch deployment
7.5.1. Binary patch creation
7.5.2. Patch chaining
7.5.3. Base selection
7.5.4. Experience
Related work
Continuous Integration and Release Management
The Nix build farm
Distributed builds
Discussion and related work
Service Deployment
9.1.1. Service components
9.1.2. Service configuration and deployment
9.1.3. Capturing component compositions with Nix expressions
9.1.4. Maintaining the service
Composition of services
Variability and crosscutting configuration choices
Distributed deployment
Related work
Build Management
Low-level build management using Nix
Related work
IV. Conclusion
11.1. Future work
Part I.
1. Introduction
This thesis is about getting computer programs from one machine to another—and having
them still work when they get there. This is the problem of software deployment. Though
it is a part of the field of Software Configuration Management (SCM), it has not been a
subject of academic study until quite recently [169]. The development of principles and
tools to support the deployment process has largely been relegated to industry, system
administrators, and Unix hackers. This has resulted in a large number of often ad hoc
tools that typically automate manual practices but do not address fundamental issues in a
systematic and disciplined way.
This is evidenced by the huge number of mailing list and forum postings about de-
ployment failures, ranging from applications not working due to missing dependencies, to
subtle malfunctions caused by incompatible components. Deployment problems also seem
curiously resistant to automation: the same concrete problems appear time and again. De-
ployment is especially difficult in heavily component-based systems—such as Unix-based
open source software—because the effort of dealing with the dependencies can increase
super-linearly with each additional dependency.
This thesis describes a system for software deployment called Nix that addresses many
of the problems that plague existing deployment systems. In this introductory chapter I
describe the problem of software deployment, give an overview of existing systems and
the limitations that motivated this research, summarise its main contributions, and outline
the structure of this thesis.
1.1. Software deployment
Software deployment is the problem of managing the distribution of software to end-user
machines. That is, a developer has created some piece of software, and this ultimately has
to end up on the machines of end-users. After the initial installation of the software, it
might need to be upgraded or uninstalled.
Presumably, the developer has tested the software and found it to work sufficiently well,
so the challenge is to make sure that the software works just as well, i.e., the same, on the
end-user machines. I will informally refer to this as correct deployment: given identical
inputs, the software should behave the same on an end-user machine as on the developer machine.1
This should be a simple problem. For instance, if the software consists of a set of files,
then deployment should be a simple matter of copying those to the target machines. In
1 I’m making several gross simplifications, of course. First, in general there is no single “developer”. Second,
there are usually several intermediaries between the developer and the end-user, such as a system administrator.
However, for a discussion of the main issues this will suffice.
practice, deployment turns out to be much harder. This has a number of causes. These fall
into two broad categories: environment issues and manageability issues.
Environment issues The first category is essentially about correctness. The software
might make all sorts of demands about the environment in which it executes: that certain
other software components are present in the system, that certain configuration files exist,
that certain modifications were made to the Windows registry, and so on. If any of those
environmental characteristics does not hold, then there is a possibility that the software
does not work the same as it did on the developer machine. Some concrete issues are the following:
• A software component is almost never self-contained; rather, it depends on other
components to do some work on its behalf. These are its dependencies. For correct
deployment, it is necessary that all dependencies are identified. This identification
is quite hard, however, as it is often difficult to test whether the dependency specifi-
cation is complete. After all, if we forget to specify a dependency, we don’t discover
that fact if the machine on which we are testing already happens to have the depen-
dency installed.
• Dependencies are not just a runtime issue. To build a component in the first place we
need certain dependencies (such as compilers), and these need not be the same as the
runtime dependencies, although there may be some overlap. In general, deployment
of the build-time dependencies is not an end-user issue, but it might be in source-
based deployment scenarios; that is, when a component is deployed in source form.
This is common in the open source world.
• Dependencies also need to be compatible with what is expected by the referring
component. In general, not all versions of a component will work. This is the case
even in the presence of type-checked interfaces, since interfaces never give a full
specification of the observable behaviour of a component. Also, components often
exhibit build-time variability, meaning that they can be built with or without certain
optional features, or with other parameters selected at build time. Even worse, the
component might be dependent on a specific compiler, or on specific compilation
options being used for its dependencies (e.g., for Application Binary Interface (ABI) compatibility).
• Even if all required dependencies are present, our component still has to find them,
in order to actually establish a concrete composition of components. This is often
a rather labour-intensive part of the deployment process. Examples include setting
up the dynamic linker search path on Unix systems [160], or the CLASSPATH in the
Java environment.
• Components can depend on non-software artifacts, such as configuration files, user
accounts, and so on. For instance, a component might keep state in a database that
has to be initialised prior to its first use.
• Components can require certain hardware characteristics, such as a specific processor
type or a video card. These are somewhat outside the scope of software deployment,
since we can at most check for such properties, not realise them if they are missing.
• Finally, deployment can be a distributed problem. A component can depend on
other components running on remote machines or as separate processes on the same
machine. For instance, a typical multi-tier web service consists of an HTTP server,
a server implementing the business logic, and a database server, possibly all running
on different machines.
So we have two problems in deployment: we must identify what our component’s re-
quirements on the environment are, and we must somehow realise those requirements in
the target environment. Realisation might consist of installing dependencies, creating or
modifying configuration files, starting remote processes, and so on.
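One way to see why establishing a concrete composition through search paths is fragile is to sketch the lookup a dynamic linker (or the JVM's CLASSPATH) performs. The sketch below is illustrative only; the directory names and the in-memory "filesystem" are hypothetical, not an actual linker implementation.

```python
def resolve(name, search_path, filesystem):
    """Return the first match for `name` on the search path, mimicking how
    a dynamic linker or Java's CLASSPATH locates a dependency."""
    for directory in search_path:
        if name in filesystem.get(directory, set()):
            return directory + "/" + name
    raise FileNotFoundError(name)

# Two installed copies of the "same" library:
fs = {"/usr/lib": {""}, "/opt/hello/lib": {""}}

# Which copy a component actually gets depends entirely on path order:
print(resolve("", ["/usr/lib", "/opt/hello/lib"], fs))
print(resolve("", ["/opt/hello/lib", "/usr/lib"], fs))
```

The component found is determined by the environment, not by the composition the developer tested, which is exactly the labour-intensive and error-prone step described above.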
Manageability issues The second category is about our ability to properly manage the
deployment process. There are all kinds of operations that we need to be able to perform,
such as packaging, transferring, installing, upgrading, uninstalling, and answering various
queries; i.e., we have to be able to support the evolution of a software system. All these
operations require various bits of information, can be time-consuming, and if not done
properly can lead to incorrect deployment. For example:
• When we uninstall a component, we have to know what steps to take to safely undo
the installation, e.g., by deleting files and modifying configuration files. At the same
time we must also take care never to remove any component still in use by some
other part of the system.
• Likewise, when we perform a component upgrade, we should be careful not to overwrite
any part of any component that might induce a failure in another part of the
system. This is the well-known DLL hell, where upgrading or installing one applica-
tion can cause a failure in another application due to shared dynamic libraries. It has
been observed that software systems often suffer from the seemingly inexplicable
phenomenon of “bit rot,” i.e., that applications that worked initially stop working
over time due to changes in the environment [26].
• Administrators often want to perform queries such as “to what component does this
file belong?”, “how much disk space will it take to install this component?”, “from
what sources was this component built?”, and so on.
• Maintenance of a system means keeping the software up to date. There are many
different policy choices that can be made. For instance, in a network, system ad-
ministrators may want to push updates (such as security fixes) to all client machines
periodically. On the other hand, if users are allowed to administer their own ma-
chines, it should be possible for them to select components individually.
• When we upgrade components, it is important to be able to undo, or roll back the
effects of the upgrade, if the upgrade turns out to break important functionality. This
requires both that we remember what the old configuration was, and that we have
some way to reproduce the old configuration.
• In heterogeneous networks (i.e., consisting of many different types of machines),
or in small environments (e.g., a home computer), it is not easy to stay up to date
with software updates. In particular in the case of security fixes this is an important
problem. So we need to know what software is in use, whether updates are available,
and whether such updates should be performed.
• Components can often be deployed in both source and binary form. Binary pack-
ages have to be built for each supported platform, and sometimes in several variants
as well. For instance, the Linux kernel has thousands of build-time configuration
options. This greatly increases the deployment effort, particularly if packaging and
transfer of packages is a manual or semi-automatic process.
• Since components often have a huge amount of variability, we sometimes want to
expose that variability to certain users. For instance, Linux distributors or system
administrators typically want to make specific feature selections. A deployment
system should support this.
1.2. The state of the art
Having seen some of the main issues in the field of software deployment, we now look at
some representative deployment tools. This section does not aim to provide a full survey
of the field; rather, the goal is to show the main approaches to deployment. Section 7.6 has
a more complete discussion of related work.
Package management is a perennial problem in the Unix community [2]. In fact, entire
operating system distributions rise and fall on the basis of their deployment qualities. It can
be argued that Gentoo Linux’s [77] quick adoption in the Linux community was entirely
due to the perceived strengths of its package management system over those used by other
distributions. This interest in deployment can be traced to Unix’s early adoption in large,
advanced and often academic installations (in contrast to the single PC, single user focus
in the PC industry in a bygone era).
Also, for better or for worse, Unix systems have traditionally insisted on storing com-
ponents in global namespaces in the file system such as the /usr/bin directory. This makes
management tools indispensable. But more importantly, modern Unix components have
fine-grained reuse, often having dozens of dependencies on other components. Since it
is not desirable to use monolithic distribution (as is generally done in Windows and Mac
OS X, as discussed below), a package management tool is absolutely required to support
the resulting deployment complexity. Therefore Unix (and specifically, Linux) package
management is what we will look at first.
RPM The most widely used Linux package management system is the Red Hat Package
Manager (RPM) [62], and as it is a good, mature tool it is instructive to look at it in some
detail. RPM is a low-level deployment tool: it installs and manages components, keeping
meta-information about them to prevent unsafe upgrades or uninstalls.
Figure 1.1.: Spec file for hello:

Summary: Hello World program
Name: hello
Version: 1.0
Source0: %{name}-%{version}.tar.gz
This program prints out "Hello World".
./configure --prefix=%{_prefix}
make install

Figure 1.2.: Spec file for xhello:

Summary: Hello World GUI
Name: xhello
Version: 1.4
Source0: %name-%version.tar.gz
Requires: hello >= 1.0
Requires: XFree86
This program provides a graphical
user interface around Hello.
./configure --prefix=%_prefix
make install
A software component is deployed by packaging it into an RPM package (or simply
RPM), which is an archive file that consists of the files that constitute the component, and
some meta-information about the package. This includes a specification of the dependen-
cies of the package, scripts to be run at various points in the install and uninstall process,
and administrative information such as the originator of the package. RPMs are built from
spec files. Figure 1.1 shows a trivial spec file for an imaginary component called Hello that
provides a command /usr/bin/hello.
An RPM package is typically produced from source as follows2:
$ rpmbuild -ta hello.tar.gz
where hello.tar.gz contains the sources of the component and its spec file. The resulting
RPM (say, hello-1.0.i686.rpm) must then be transferred to the client machines in some way
that the RPM tool itself does not specify. Once it is on a client machine, it can be installed:
$ rpm -i hello-1.0.i686.rpm
Similarly, we can upgrade the package when a new RPM with the same name comes along
(rpm -U hello-2.0.i686.rpm), and uninstall it (rpm -e hello). To perform upgrades and uninstalls
cleanly, it is necessary to track which files belong to what packages. This information
can be queried by users:
$ rpm -ql hello
2 In this thesis, the prefix $ in code samples indicates a Unix shell command.
$ rpm -qf /usr/bin/hello
RPM also maintains a cryptographic hash [145] of the contents of each file as it existed at
installation time. Thus, users can verify that files have not been tampered with.
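The tamper check described here can be sketched in a few lines: a cryptographic hash of each file's contents is recorded at installation time and recomputed later. SHA-256 is used below purely for illustration; RPM's actual choice of hash function may differ.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A cryptographic hash of a file's contents; any change to the contents
    # yields (with overwhelming probability) a different hash value.
    return hashlib.sha256(data).hexdigest()

recorded = fingerprint(b"#!/bin/sh\necho Hello\n")   # stored at install time
current  = fingerprint(b"#!/bin/sh\necho Hacked\n")  # recomputed during verification
print(recorded == current)  # False: the file was modified
```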
RPM maintains the integrity of installed packages with respect to the dependency re-
quirements. Figure 1.2 shows another RPM package, xhello, that provides a GUI front-end
for hello and thus depends on the hello package (expressed by its Requires line). When we have the
xhello package installed, we cannot uninstall hello unless xhello is uninstalled also:
$ rpm -e hello
hello is needed by (installed) xhello-1.4
Similarly, RPM does not allow two packages that contain files with equal path names
to exist simultaneously in the system. For instance, when we have our hello-1.0 package
installed, it is not possible to install another package that also installs a file with path name
/usr/bin/hello. In general, this means that we cannot have multiple versions of the same
component installed simultaneously. It also means that it is hard for multiple users to
independently select, install and manage software.
A fundamental problem of RPM and essentially all general package management sys-
tems is that they cannot reliably determine the correctness of dependency specifications.
In our example, xhello depends on hello, and so its spec file will contain a dependency
specification to that effect, e.g.,
Requires: hello
But what happens when we forget to specify this? When the developer builds and tests the
RPM package, the component will probably work because in all likelihood the developer
has hello installed. If we then deploy xhello to clients, the component will work if hello
happens to have been installed previously. If it was not, xhello may fail mysteriously (Fig-
ure 1.3; black ovals denote broken components). Thus, it is intrinsically hard to validate
dependency specifications. (It is also hard to prevent unnecessary dependencies, but that
does not harm correctness, just efficiency.) An analysis of the actual number of depen-
dency errors in a large RPM-based Linux distribution is described in [87]. The number of
dependency errors turned out to be quite low, but this is likely to be at least in part due to
the substantial effort invested in specifying complete dependencies. Missing dependencies
lead to incomplete deployment; correct deployment on the other hand requires complete deployment.
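Complete deployment amounts to deploying the full closure of the declared dependency relation. A minimal sketch, using a hypothetical Requires table for the example packages, shows how a missing dependency is exactly a gap between that closure and what is actually installed:

```python
# Hypothetical "Requires" table for the example packages:
requires = {"xhello": ["hello"], "hello": []}

def closure(pkg):
    """All packages reachable from `pkg` via declared dependencies."""
    seen, todo = set(), [pkg]
    while todo:
        p = todo.pop()
        if p not in seen:
            seen.add(p)
            todo.extend(requires.get(p, []))
    return seen

installed = {"xhello"}                  # hello was never deployed
missing = closure("xhello") - installed
print(sorted(missing))                  # deployment is incomplete
```

Of course, this check is only as good as the Requires table itself: an undeclared dependency is invisible to it, which is precisely the validation problem discussed above.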
Related to the inability to validate dependency specifications is the fact that dependen-
cies tend to be inexact. Above, xhello required that a component named hello is present—
but it makes no requirements on the actual properties or interfaces of that component. That
is, the dependency specification is nominal (determined by name only), not by contract
(requiring that the dependency has certain properties). So any component named hello
satisfies the dependency. Actually, we can require specific versions of the dependency:
Requires: hello >= 1.0
Figure 1.3.: Incomplete deployment

Figure 1.4.: Interference
which excludes versions of hello older than 1.0. However, such version specifications involve a high degree of
wishful thinking, since we can never in general rely on the fact that any version in an open
range works. For instance, there is no way to know whether future release 1.3.1 of hello
will be backwards compatible. Even “exact” dependencies such as
Requires: hello = 1.0
are unsafe, because this is still a nominal dependency: we can conceive of any number
of component instances with name hello and version number 1.0 that behave completely
differently. In fact, this is a real problem: Linux distributions from different vendors can
easily have components with equal names (e.g., glibc-2.3.5) that actually have vendor-
specific patches applied, have been built with specific options, compilers, or ABI options,
and so on.
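The wishful thinking in an open version range can be made concrete with a small sketch of a nominal constraint check. The version syntax and comparison rule below are a simplification for illustration, not RPM's actual algorithm:

```python
def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(installed, required):
    """A nominal 'Requires: hello >= required' check: it compares version
    numbers only, saying nothing about behaviour or interfaces."""
    return parse(installed) >= parse(required)

# A hypothetical future release nominally satisfies the open range,
# whether or not it is backwards compatible in practice:
print(satisfies("1.3.1", "1.0"))   # True
print(satisfies("0.9", "1.0"))     # False
```

The check accepts every future release by number alone, which is exactly why an open range can never guarantee compatibility.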
Incomplete dependencies and inexact notions of component compatibility give rise to
interference between components, which is the phenomenon that an operation on one com-
ponent can “break” an essentially unrelated component. Figure 1.4 shows two examples of
interference. In the left scenario, the upgrading of application App2 breaks App1 because
the new version App2’ requires a newer version of LibB’, which happens not to be suffi-
ciently backwards compatible. In the right scenario, the uninstallation of App3 also breaks
App1, because the package manager has removed LibA under the mistaken belief that App3
was the sole user of LibA.
A more subtle problem in RPM (and most other deployment tools) is that it has the prop-
erty of destructive upgrading, which means that components are upgraded by overwriting
the files of the old version with those of the new one. This is the case because the new version
typically lives in the same paths in the file system, e.g., hello-2.0 will still install into
/usr/bin/hello and /etc/hello.conf. Apart from the resulting inability to have multiple versions
installed at the same time, this gives rise to a temporary inconsistency in the system:
there is a time window in which we have some of the files of the old version, and some of
the new version. Thus, upgrading is not atomic. If a user were to start the Hello program
during this time window, the program might not work at all or misbehave in certain ways.

Figure 1.5.: The dependency hell: the runtime dependency graph of Mozilla Firefox
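The inconsistency window of a destructive upgrade can be simulated directly. The sketch below models the file system as a dictionary and shows what a hypothetical program starting mid-upgrade would observe; file names and contents are illustrative:

```python
def destructive_upgrade(fs, new_files, observe_after=1):
    """Overwrite files in place, as a destructive upgrade does; return the
    mixed state a program starting mid-upgrade would observe."""
    observed = None
    for i, (path, data) in enumerate(sorted(new_files.items())):
        fs[path] = data
        if i + 1 == observe_after:      # a user starts Hello right now
            observed = dict(fs)
    return observed

fs  = {"/usr/bin/hello": "hello-1.0", "/etc/hello.conf": "conf-1.0"}
new = {"/usr/bin/hello": "hello-2.0", "/etc/hello.conf": "conf-2.0"}
seen = destructive_upgrade(fs, new)
print(seen)   # new configuration file but old binary: not atomic
```

The observer sees the version 2.0 configuration file paired with the version 1.0 binary, a combination that was never tested anywhere.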
The main criticism leveled at RPM by some of its users is the difficulty in obtaining RPM
packages. If we want to install a complex, multi-component system such as KDE
or GNOME, we have to manually download a possibly large number of RPM packages
until all missing dependencies are resolved. That is, RPM verifies that an installation of a
package proceeds safely according to its spec file, but has no way to resolve problems by
itself if they do occur. This gives rise to the dependency hell, where users find themselves
chasing down RPMs that may not be easy to obtain. Figure 1.5 shows the dependency
graph of Mozilla Firefox, a popular web browser3. Each node is a component that must be
present. This gives an intuition as to the magnitude of the problem.
However, it is not actually a problem that RPM does not obtain packages automatically:
it is in fact a good separation of mechanism and policy. Higher-level deployment tools
can be built on top of RPM that do provide automatic fetching of packages. Installation
3 This picture was produced from the Firefox component in the Nix Packages collection using the nix-env --query
--graph command and Graphviz [54].
Client machine
Server 1
refers to
refers to
Server 2
refers to
Server 3
Figure 1.6.: Software deployment using yum
tools such as SuSE Linux’s YaST have a global view of all available packages in a Linux
distribution, and will automatically download them from a network site or prompt for the
appropriate installation CD or DVD. The tool yum (short for Yellow Dog Updater, Modi-
fied) [174], used among others by the Fedora Core Linux distribution, is a wrapper around
RPM to add support for network repositories from which the tool can automatically ac-
quire packages. Figure 1.6 illustrates a typical yum deployment scenario. Client machines
have a file /etc/yum.conf that contains a list of RPM repositories. We can then install a
package with all its dependencies by saying, e.g.,
$ yum install firefox
Yum will consult the repositories defined in /etc/yum.conf to find the appropriate RPM
packages, download them, and install them by calling the RPM tool.
Source deployment models In the open source community there are several operating
system distributions that deploy packages not (primarily) in binary form but in source form.
The main advantage of source deployment is that it allows greater flexibility by allowing
users to make specific build time configuration choices. For instance, we can optimise
components for our specific processor architecture, or remove all unneeded optional func-
tionality in components. Whether this is actually a desirable feature for end-users is up for
debate, but the ability to easily construct a customised set of component instances is quite
valuable to some users, such as developers, system administrators, deployers of complex
installations, and distributors.
The archetypical source deployment system is the FreeBSD Ports Collection [73]. Pack-
ages in source form (called ports) are essentially Makefiles [56] that describe how to recur-
sively build the dependencies of the package, download the sources of the package, apply
possible FreeBSD-specific patches, build the package, and finally install it in the file sys-
tem. By passing arguments to Make (or editing the Makefile, associated files, and options
in global configuration files), the component can be adapted to local requirements.
An obvious downside to source deployment is its rather considerable resource consump-
tion. While binary deployment only requires disk space for the binaries, and possibly net-
work resources for obtaining the binary package, source deployment requires resources to
obtain and store the sources, CPU time and memory to build the package, and disk space
for temporary files and the final results. To make matters worse, binary deployment only
involves runtime dependencies, while source deployment may involve additional depen-
dencies that are only used at build time, such as compilers. Unsurprisingly, FreeBSD
therefore also offers a binary deployment option, called packages, which are pre-built
ports. However, ports and packages are not quite integrated: for instance, when installing
a port, dependencies that might be installed using packages are still built from source.
A further problem with source deployment is that it increases the risk of deployment
failure, as now not just runtime dependencies but also build-time dependencies can affect
the result of the deployment. For instance, if the user’s installed C compiler is not quite
what a component’s developer tested against, there is a slight probability that the com-
ponent fails to build or run correctly—and the product of sufficiently many slight failure
probabilities is a large probability.
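The compounding of "slight failure probabilities" is simple arithmetic: if each build step succeeds independently with probability p, the whole source build succeeds with probability p to the power of the number of steps. The numbers below are illustrative, not measurements:

```python
# If each of n build-time steps (dependencies, compiler quirks, patches, ...)
# succeeds independently with probability p, the whole source build
# succeeds with probability p**n: many small risks compound.
p, n = 0.99, 150          # illustrative numbers
success = p ** n
print(round(success, 2))  # a 1% risk per step makes failure the likely outcome
```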
Windows and Mac OS Windows and Mac OS X tend to use monolithic deployment
for applications: except for some large-grained dependencies on the operating system en-
vironment (e.g., on the kernel, the core GUI libraries, or Direct X), dependencies tend
to be distributed as part of the application itself, with no sharing of dependencies be-
tween applications. This can be accomplished through static linking or by having dy-
namic libraries be part of the private namespace (directory tree) of the application (e.g.,
C:\Program Files\MyApp\Foo.DLL). While this reduces deployment complexity at the end-
user machine, it has several downsides. First, it removes sharing: if two applications use
the “same” component, they will nevertheless end up using private copies. The result is
increased resource consumption in terms of disk space, RAM, cache efficiency, download
time, and so on. Clearly, it is bad if all or many of Firefox’s dependencies in Figure 1.5
were unshared. In the worst case, we get a quadratic blowup in the disk space require-
ments: if we have N applications that share the same M dependencies, then we need disk
space Θ(NM) instead of Θ(N + M). Second, it still requires the developer to obtain and
compose the components, typically through a semi-manual process.
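The quadratic blowup is easy to quantify with a back-of-the-envelope calculation; the sizes below are made up purely to illustrate the asymptotics:

```python
# Disk cost of monolithic vs shared deployment: N applications that all
# use the same M dependencies (sizes in MB; purely illustrative).
N, M, app_size, dep_size = 50, 100, 10, 5

bundled = N * (app_size + M * dep_size)   # each app ships private copies
shared  = N * app_size + M * dep_size     # one system-wide copy of each dep

print(bundled, shared)   # Theta(N*M) versus Theta(N + M)
```

With these numbers the bundled layout needs 25,500 MB against 1,000 MB for the shared one, a factor of 25 for a modest number of applications.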
Especially elegant from an end-user perspective are Mac OS’s application bundles,
which are directory trees containing all files belonging to an application. Generally, such
bundles are self-contained, except for operating system component dependencies. Con-
trary to typical Windows applications, they do not have other environment dependencies
such as registry settings. This means that bundles can be copied or moved around in the
file system arbitrarily. For instance, the whole of Microsoft Office X on Mac OS X can be
copied between machines by dragging it from one disk to another. Again, the limitation of
this approach is that it falls apart when components have dependencies on each other. That
is, the bundle approach works only for “top level” components, i.e., end-user applications4.
.NET Historically, Windows has suffered from the DLL hell, a result of an unmanaged
global namespace being used to store shared dependencies, e.g., the directory C:\Windows\System. An installation or uninstallation of one application frequently caused other applications to break because a shared library (DLL) would be replaced with an incompatible version, or deleted altogether. This is a classic example of component interference.
4For instance, the management of Mac OS X’s BSD-based Unix foundations is a complete mess, which has
prompted the development of tools such as Fink [138], which is based on Debian’s APT system (discussed in
Section 7.6).
Microsoft’s .NET platform [17] improves on this situation through its assemblies, which
are files that store code and other resources. Assemblies can have strong names, which
are globally unique identifiers signed by the assembly producer. Assemblies can depend
on each other through these strong names. The Global Assembly Cache (GAC) holds a
system-wide repository of assemblies, indexed by strong name. If assemblies in the GAC
differ in their strong names, they will not overwrite each other. Thus, interference does not occur.
However, the GAC is only intended for shared components. Application code and mis-
cellaneous resources are typically not stored in the GAC, but in application-specific di-
rectories, e.g., C:\Program Files\MyApp. That is, management of those components must
be handled through other mechanisms. Also, the GAC only holds components that use a
specific component technology—.NET assemblies.
Other systems The Zero Install system [112] installs components on demand from net-
work repositories. On-demand installation is enabled by using a virtual file system that
intercepts file system requests for component files. For instance, when a process accesses a file under /uri/0install/ and that file is not already present in the local component
cache, it is fetched from a repository, at which point the requesting process resumes. Users
can independently install software in this manner, and are unaffected by the actions of other
users. Unfortunately, systems depending on non-portable extensions to the file system face
difficulty in gaining wide adoption5. This is especially the case for deployment systems as
they should support a wide variety of platforms.
1.3. Motivation
From the previous discussion of existing deployment systems it should be clear that they
lack important features to support safe and efficient deployment. In particular, they have
some or all of the following problems:
• Dependency specifications are not validated, leading to incomplete deployment.
• Dependency specifications are inexact (e.g., nominal).
• It is not possible to deploy multiple versions or variants of a component side-by-side.
• Components can interfere with each other.
• It is not possible to roll back to previous configurations.
• Upgrade actions are not atomic.
• Applications must be monolithic, i.e., they must statically contain all their dependencies.
5For instance, Apple is moving away from resource forks to bundles in recent releases of its Mac OS. The
former approach stores streams of metadata (such as icons) in files in addition to its regular data contents,
which is not portable. The latter approach on the other hand uses directory trees of regular files to keep such
data elements together, which is portable.
• Deployment actions can only be performed by administrators, not by unprivileged users.
• There is no link between binaries and the sources and build processes that built them.
• The system supports either source deployment or binary deployment, but not both;
or it supports both but in a non-unified way.
• It is difficult to adapt components.
• Component composition is manual.
• The component framework is narrowly restricted to components written in a specific
programming language or framework.
• The system depends on non-portable techniques.
The objective of the research described in this thesis is to develop a deployment system
that does not have these problems.
1.4. The Nix deployment system
This thesis describes the Nix deployment system, which overcomes the limitations of con-
temporary deployment tools described above. I describe the concepts and implementation
(how it works), the underlying principles (why it works), our experiences and empirical
validation (that it works), and the application areas to which it can be applied (where it works).
The main idea of the Nix approach is to store software components in isolation from each
other in a central component store, under path names that contain cryptographic hashes of
all inputs involved in building the component, such as /nix/store/rwmfbhb2znwp...-firefox-
1.0.4. As I show in this thesis, this prevents undeclared dependencies and enables support
for side-by-side existence of component versions and variants.
Availability At the time of writing, the Nix system is available as free software at the
homepage of the TraCE project [161].
1.5. Contributions
The main contribution of this thesis is the development of a purely functional deployment
model, which we implemented in the Nix system. In this model a binary component is
uniquely defined by the declared inputs used to build the component. This enables arbitrary
component instances to exist side by side. I show how such a model can be “retrofitted”
onto existing software that was designed without such a model in mind.
Concretely, the contributions of this thesis are the following.
• The cryptographic hashing scheme used by the Nix component store prevents un-
declared dependencies, giving us complete deployment. Furthermore it provides
support for side-by-side existence of component versions and variants.
• Isolation between components prevents interference.
• Users can install software independently from each other, without requiring mutual trust relations. Components that are common between users are shared, i.e., stored only once.
• Upgrades are atomic; there is no time window in which the system is in an inconsistent state.
• Nix supports O(1)-time rollbacks to previous configurations.
• Nix supports automatic garbage collection of unused components.
• The Nix component language describes not just how to build individual components, but also compositions. The language is a lazy, purely functional language. This is a good basis for a component composition language, as it allows dependencies and variability to be expressed in an elegant and flexible way.
• Nix has a transparent source/binary deployment model that marries the best parts of source deployment models such as the FreeBSD Ports Collection, and binary deployment models such as RPM. In essence, binary deployment is an automatic optimisation of source deployment.
• Nix is policy-free; it provides mechanisms to implement various deployment policies, but does not enforce a specific one. Some policies described in this thesis are channels (in push and pull variants), one-click installations, and pure source deployment.
• I show that a purely functional model supports efficient component upgrades despite the fact that a change to a fundamental component can propagate massively through the dependency graph.
• Nix supports distributed multi-platform builds in a transparent manner, i.e., a remote build is indistinguishable from a local build from the perspective of the user.
• Nix provides a good basis for the implementation of a build farm, which supports continuous integration and portability testing. I describe a Nix build farm that has greatly improved manageability over other build farms and is integrated with release management, that is, it builds concrete installable components that can be deployed directly through Nix.
• The use of Nix for software deployment extends naturally to the field of service deployment. Services are running processes that provide some useful facility to users, such as web servers. They consist of software components, static configuration and data, and mutable state. The first two aspects can be managed using Nix, and its advantages—such as rollbacks and side-by-side deployment—apply here as well.
• Though Nix is typically used to build large-grained components (i.e., traditional packages), it can also be used to build small-grained components such as individual source files. When used in this way it is a superior alternative to build managers such as Make [56], ensuring complete dependency specifications and enabling more sharing between builds.
The above may sound like a product feature list, which is not necessarily a good thing
as it is always possible to add any feature to a system given enough ad-hockery. However,
these contributions all follow from a small set of principles, foremost among them the
purely functional model.
1.6. Outline of this thesis
This thesis is divided in four parts. Part I (which includes this chapter) introduces the
problem domain and motivates the Nix system. Chapter 2, “An Overview of Nix”, intro-
duces the Nix system from a high-level perspective (i.e., that of users or authors of Nix
components), and gives some intuitions about the underlying principles.
Part II is about the principles and formal semantics of Nix. The basic philosophy behind
Nix—to solve deployment problems by treating them analogously to memory management
issues in programming languages—is introduced in Chapter 3, "Deployment as Memory Management".
The syntax and semantics of the Nix expression language—a purely functional language
used to describe components and compositions—is given in Chapter 4, “The Nix Expres-
sion Language”. This chapter also discusses some interesting aspects of the implementa-
tion of the Nix expression evaluator.
The next two chapters form the technical core of this thesis, as they describe the Nix
component store and the fundamental operations on it. In fact, there are two models for
the Nix store that are subtly different with important consequences. Chapter 5, “The Ex-
tensional Model”, formalises the extensional model, which was the first model and the one
with which we have gained the most experience. This chapter describes the objects that
live in the Nix store, the building of Nix expressions, binary deployment as an optimisation
of source deployment, distributed and multi-platform builds, and garbage collection.
The extensional model however has several downsides, in particular the inability to se-
curely share a Nix store between multiple users. These problems are fixed in the more
recent intensional model, described in Chapter 6, “The Intensional Model”.
Part III discusses in detail the various applications of Nix. Chapter 7, “Software Deploy-
ment”, is about Nix’s raison d’être, software deployment. It describes principles for Nix
expression writers that help ensure correct deployment, and various policies that deploy
components to clients. The main vehicle for this discussion is the Nix Packages collection
(Nixpkgs), a large set of existing Unix components that we deploy through Nix. Most of
our experience with the Nix system is in the context of the Nix Packages collection, so this
chapter provides a certain level of empirical validation for the Nix approach.
Chapter 8, “Continuous Integration and Release Management”, shows that Nix pro-
vides a solid basis for the implementation of a build farm, which is a facility that automati-
cally builds software components from a version management repository. This is useful to
support continuous integration testing, which provides developers with almost immediate
feedback as to whether their changes integrate cleanly with the work of other developers.
Also, a Nix build farm can produce concrete releases that can be automatically deployed
to user machines, for instance through the channel mechanism described in Chapter 7.
Chapter 9, “Service Deployment”, takes Nix beyond deployment of software compo-
nents and into the realm of service deployment. It shows that Nix extends gracefully into
this domain using several examples of production services deployed using Nix.
Chapter 10, “Build Management”, extends Nix in another direction, towards support for
low-level build management. As stated above, Nix provides exactly the right technology
to implement a build manager that fixes the problems of most contemporary build tools.
Roadmap Different readers will find different parts of this thesis of interest. Chapter 2,
“An Overview of Nix” should be read by anyone. The semantics of the Nix system in Part II
can be skipped—initially, at least—by readers who are interested in Nix as a tool, but is
required reading for those who wish to modify Nix itself. Nix users will be most interested
in the various applications discussed in Part III. Chapter 7, “Software Deployment” on
deployment should not be skipped as it provides an in-depth discussion on Nix expressions,
builders, and deployment mechanisms and policies that are also relevant to the subsequent
application chapters.
Nix users and developers will also find the online manual useful [161].
Origins Parts of this thesis are adapted from earlier publications. Chapter 2, “An Over-
view of Nix” is very loosely based on the LISA ’04 paper “Nix: A Safe and Policy-
Free System for Software Deployment”, co-authored with Merijn de Jonge and Eelco
Visser [50]. Chapter 3, “Deployment as Memory Management” is based on Sections 3-6 of
the ICSE 2004 paper “Imposing a Memory Management Discipline on Software Deploy-
ment”, written with Eelco Visser and Merijn de Jonge [52]. Chapter 6, “The Intensional
Model” is based on the ASE 2005 paper “Secure Sharing Between Untrusted Users in a
Transparent Source/Binary Deployment Model” [48]. Section 7.5 on patch deployment
appeared as the CBSE 2005 paper “Efficient Upgrading in a Purely Functional Compo-
nent Deployment Model” [47]. Chapter 9, “Service Deployment” is based on the SCM-12
paper “Service Configuration Management”, written with Martin Bravenboer and Eelco
Visser [49]. Snippets of Section 10.2 were taken from the SCM-11 paper “Integrating
Software Construction and Software Deployment” [46].
1.7. Notational conventions
Part II of this thesis contains a number of algorithms in a pseudo-code notation in an im-
perative style with some functional elements. Keywords in algorithms are in boldface,
variables are in italic, and function names are in sans serif. Callouts like this ① are frequently used to refer to points of interest in algorithms and listings from the text. Variable
assignment is written x ← value.
Sets are written in curly braces, e.g., {1, 2, 3}. Since many algorithms frequently extend sets, we write x ← x ∪ set to extend the set stored in x; for example, x ← x ∪ {2, 3} extends the set stored in x with the elements 2 and 3.
Set comprehensions are a concise notation for set construction: {expr | conditions} produces a set by applying the expression expr to all values from a (usually implicit) universe that meet the predicate conditions. For example, {x² | x ∈ {2, 3, 4} ∧ x ≠ 3} produces the set {4, 16}.
Ordered lists are written between square brackets, e.g., [1, 2, 1, 3]. Lists can contain elements multiple times. Lists are sometimes manipulated in a functional (Haskell-like [135]) style using the function map and λ-abstraction: x ← map(λv . f(v), y) constructs a list x by applying a function f(v) to each element in the list y. For instance, map(λx . x², [4, 2, 6]) produces the list [16, 4, 36]. Tuples or pairs are enclosed in parentheses, e.g., (a, b, c).
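As an aside (not part of the thesis), the comprehension and map notation above corresponds directly to comprehension and higher-order function facilities in many languages, for instance Python:

```python
# The set comprehension {x^2 | x in {2, 3, 4} and x != 3} yields {4, 16}:
squares = {x**2 for x in {2, 3, 4} if x != 3}
print(sorted(squares))  # [4, 16]

# map with a lambda, as in map(lambda x . x^2, [4, 2, 6]):
result = list(map(lambda x: x**2, [4, 2, 6]))
print(result)  # [16, 4, 36]
```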
Throughout this thesis, storage sizes are given in IEC units [33]: 1 KiB = 1024 bytes, 1 MiB = 1024² bytes, and 1 GiB = 1024³ bytes.
Data types are defined in a Haskell-like notation. For instance, the definition
data Foo = Foo {
  x : String,
  ys : [String],
  zs : {String}
}
defines a data type Foo with three fields: a string x, a list of strings ys, and a set of strings zs. An example of the construction of a value of this type is as follows:
Foo {x = "test", ys = ["hello", "world"], zs = ∅}
In addition, types can have unnamed fields and multiple constructors. For example, the definition
data Tree = Leaf String | Branch Tree Tree
defines a binary tree type with strings stored in the leaves. An example value is
Branch (Leaf "a") (Branch (Leaf "b") (Leaf "c")).
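A rough analogue of these definitions in a mainstream language may help fix the notation; the following Python sketch (not part of the thesis) renders Foo as a record and Tree as a tagged union:

```python
# Python analogue of the Haskell-like data definitions in the text:
# Foo is a record with named fields; Tree is a tagged union of constructors.
from dataclasses import dataclass
from typing import Union

@dataclass
class Foo:
    x: str        # a string
    ys: list      # a list of strings
    zs: set       # a set of strings

@dataclass
class Leaf:
    value: str

@dataclass
class Branch:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Branch]

# Corresponds to Foo {x = "test", ys = ["hello", "world"], zs = empty set}
foo = Foo(x="test", ys=["hello", "world"], zs=set())
# Corresponds to Branch (Leaf "a") (Branch (Leaf "b") (Leaf "c"))
tree = Branch(Leaf("a"), Branch(Leaf("b"), Leaf("c")))
print(foo.x, isinstance(tree, Branch))
```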
2. An Overview of Nix
The previous chapter discussed the features that we require from a software deployment
system, as well as the shortcomings of current technology. The subject of this thesis is
the Nix deployment system, which addresses many of these problems. This chapter gives
a gentle introduction to Nix from a user perspective (where “user” refers to component
deployers as well as end-users). The succeeding parts of this thesis will present a more
formal exposition of Nix’s semantics (Part II) and applications (Part III).
2.1. The Nix store
Nix is a system for software deployment. The term component will be used to denote the
basic units of deployment. These are simply sets of files that implement some arbitrary
functionality through an interface. In fact, Nix does not really care what a component ac-
tually is. As far as Nix is concerned a component is just a set of files in a file system. That
is, we have a very weak notion of component, weaker even than the commonly used defi-
nition in [155]. (What we call a component typically corresponds to the ambiguous notion
of a package in package management systems. Nix’s notion of components is discussed
further in Section 3.1.)
Nix stores components in a component store, also called the Nix store. The store is
simply a designated directory in the file system, usually /nix/store. The entries in that
directory are components (and some other auxiliary files discussed later). Each component
is stored in isolation; no two components have the same file name in the store.
A subset of a typical Nix store is shown in Figure 2.1. It contains a number of applications—GNU Hello 2.1.1 (a toy program that prints "Hello World"), Subversion 1.1.4 (a version management system), and Subversion 1.2.0—along with some of their dependencies.
These components are not single files, but directory trees. For instance, Subversion con-
sists of a directory called bin containing a program called svn, and a directory lib containing
many more files belonging to the component. (For simplicity, many files and dependencies
have been omitted from the example.) The arrows denote runtime dependencies between
components, which will be described shortly.
The notable feature in Figure 2.1 is the names of the components in the Nix store,
such as /nix/store/bwacc7a5c5n3...-hello-2.1.1. The initial part of the file names, e.g.,
bwacc7a5c5n3..., is what gives Nix the ability to prevent undeclared dependencies and
component interference. It is a representation of the cryptographic hash of all inputs in-
volved in building the component.
Figure 2.1.: The Nix store

Figure 2.2.: Closure in the store

Cryptographic hash functions are hash functions that map an arbitrary-length input onto a fixed-length bit string. They have the property that they are collision-resistant: it is computationally infeasible to find two inputs that hash to the same value. Cryptographic hashes are discussed in more detail in Section 5.1. Nix uses 160-bit hashes represented in a base-32 notation, so each hash is 32 characters long. In this thesis, hashes are abridged
to ellipses (...) most of the time for reasons of legibility. The full path of a directory entry
in the Nix store is referred to as its store path. An example of a full store path is /nix/store/bwacc7a5c5n3...-hello-2.1.1. So the file bin/hello in that component has the full path /nix/store/bwacc7a5c5n3...-hello-2.1.1/bin/hello.
The hash is computed over all inputs, including the following:
• The sources of the components.
• The script that performed the build.
• Any arguments or environment variables [152] passed to the build script.
• All build time dependencies, typically including the compiler, linker, any libraries
used at build time, standard Unix tools such as cp and tar, the shell, and so on.
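The following sketch shows the flavour of such a scheme: hash every build input, then use a base-32 rendering of a 160-bit digest as the path prefix. It is an illustration only; real Nix serialises inputs quite differently and uses its own base-32 alphabet, and the helper name `store_path` is hypothetical.

```python
# Sketch of deriving a store path from all build inputs (illustrative only;
# Nix's actual serialisation and base-32 alphabet differ).
import base64
import hashlib

def store_path(name: str, inputs: dict) -> str:
    h = hashlib.sha1()                   # a 160-bit hash, as in the text
    for key in sorted(inputs):           # sources, build script, env vars, deps
        h.update(key.encode())
        h.update(inputs[key])
    digest32 = base64.b32encode(h.digest()).decode().lower()  # 32 characters
    return f"/nix/store/{digest32}-{name}"

p1 = store_path("hello-2.1.1", {"source": b"...", "builder": b"gcc ..."})
p2 = store_path("hello-2.1.1", {"source": b"...", "builder": b"gcc -O2 ..."})
print(p1 != p2)  # any change to any input yields a different store path
```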
Cryptographic hashes in store paths serve two main goals. They prevent interference
between components, and they allow complete identification of dependencies. The lack
of these two properties is the cause for the vast majority of correctness and flexibility
problems faced by conventional deployment systems, as we saw in Section 1.2.
Preventing interference Since the hash is computed over all inputs to the build process
of the component, any change, no matter how slight, will be reflected in the hash. This
includes changes to the dependencies; the hash is computed recursively. Thus, the hash
essentially provides a unique identifier for a configuration. If two component compositions
differ in any way, they will occupy different paths in the store (except for dependencies that
they have in common). Installation or uninstallation of a configuration therefore will not
interfere with any other configuration.
For instance, in the Nix store in Figure 2.1 there are two versions of Subversion. They
were built from different sources, and so their hashes differ. Additionally, Subversion
1.2.0 uses a newer version of the OpenSSL cryptographic library. This newer version of
OpenSSL likewise exists in a path different from the old OpenSSL. Thus, even though
installing a new Subversion entails installing a new OpenSSL, the old Subversion instance
is not affected, since it continues to use the old OpenSSL.
Recursive hash computation causes changes to a component to “propagate” through the
dependency graph. This is illustrated in Figure 2.3, which shows the hash components of
the store paths computed for the Mozilla Firefox component (a web browser) and some of
its build time dependencies, both before and after a change is made to one of those depen-
dencies. (An edge from a node A to a node B denotes that the build result of A is an input to
the build process of B.) Specifically, the GTK GUI library dependency is updated from ver-
sion 2.2.4 to 2.4.13. That is, its source bundle (gtk+-2.2.4.tar.bz2 and gtk+-2.4.13.tar.bz2,
respectively) changes. This change propagates through the dependency graph, causing dif-
ferent store paths to be computed for the GTK component and the Firefox component.
However, components that do not depend on GTK, such as Glibc (the GNU C Library),
are unaffected.
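The propagation behaviour can be sketched as follows (an illustration under assumed inputs, not the thesis's algorithm; the graph mirrors the Figure 2.3 example, with Firefox depending on GTK and Glibc, and GTK depending on Glibc):

```python
# Sketch of recursive hashing over a build-time dependency graph.
# Component "sources" and graph shape are invented for illustration.
import hashlib

def recursive_hash(component, sources, deps):
    h = hashlib.sha1(sources[component].encode())
    for d in sorted(deps.get(component, [])):
        h.update(recursive_hash(d, sources, deps).encode())  # recurse into deps
    return h.hexdigest()

deps = {"firefox": ["gtk", "glibc"], "gtk": ["glibc"]}
old = {"glibc": "glibc-2.3.5", "gtk": "gtk+-2.2.4", "firefox": "firefox-1.0.4"}
new = dict(old, gtk="gtk+-2.4.13")  # only the GTK source bundle changes

for c in ["glibc", "gtk", "firefox"]:
    changed = recursive_hash(c, old, deps) != recursive_hash(c, new, deps)
    print(c, "changed" if changed else "unchanged")
# Glibc is unaffected; the change propagates to GTK and Firefox.
```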
An important point here is that upgrading only happens by rebuilding the component in
question and all components that depend on it. We never perform a destructive upgrade.
Components never change after they have been built—they are marked as read-only in the
file system. Assuming that the build process for a component is deterministic, this means
that the hash identifies the contents of the components at all times, not only just after it has
been built. Conversely, the build-time inputs determine the contents of the component.
Therefore we call this a purely functional model. In purely functional programming
languages such as Haskell [135], as in mathematics, the result of a function call depends
exclusively on the definition of the function and on the arguments. In Nix, the contents of
a component depend exclusively on the build inputs. The advantage of a purely functional
model is that we obtain strong guarantees about components, such as non-interference.
Identifying dependencies The use of cryptographic hashes in component file names
also prevents the use of undeclared dependencies, which (as we saw in Section 1.2) is the
major cause of incomplete deployment. The reason that incomplete dependency informa-
tion occurs is that there is generally nothing that prevents a component from accessing
another component, either at build time or at runtime. For instance, a line in a build script
or Makefile such as
Figure 2.3.: Propagation of dependency changes through the store paths of the build-time
component dependency graph
gcc -o program main.c ui.c -lssl
which links a program consisting of two C modules against a library named ssl, causes
the linker to search in a set of standard locations for a library called ssl1. These standard
locations typically include “global” system directories such as /usr/lib on Unix systems. If
the library happens to exist in one of those directories, we have incurred a dependency.
However, there is nothing that forces us to include this dependency in the dependency
specifications of the deployment system (e.g., in the RPM spec file of Figure 1.2).
At runtime we have the same problem. Since components can perform arbitrary I/O
actions, they can load and use other components. For instance, if the library ssl used
above is a dynamic library, then program will contain an entry in its dynamic linkage
meta-information that causes the dynamic linker to search for the library when program is
started. The dynamic linker similarly searches in global locations such as /lib and /usr/lib.
Of course, there are countless mechanisms other than static or dynamic linking that es-
tablish a component composition. Some examples are including C header files, importing
Java classes at compile time, calling external programs found through the PATH environ-
ment variable, and loading help files at runtime.
The hashing scheme neatly prevents these problems by not storing components in global
locations, but in isolation from each other. For instance, assuming that all components in
the system are stored in the Nix store, the linker line
gcc -o program main.c ui.c -lssl
will simply fail to find libssl. Unless the path to an OpenSSL instance (e.g., /nix/store/-
5jq6jgkamxjj...-openssl-0.9.7d) was explicitly fed into the build process and added to the
linker’s search path, no such instance will be found by the linker.
1To be precise, libssl.a or libssl.so on Unix.
Figure 2.4.: Retained dependencies in the svn executable (hexadecimal dump, abridged; the image contains the store paths of the Glibc and OpenSSL dependencies)
Also, we go to some lengths to ensure that component builders are pure, that is, not
influenced by outside factors. For example, the builder is called with an empty set of
environment variables (such as the PATH environment variable, which is used by Unix
shells to locate programs) to prevent user settings such as search paths from reaching tools
invoked by the builder. Similarly, at runtime on Linux systems, we use a patched dynamic
linker that does not search in any default locations—so if a dynamic library is not explicitly
declared with its full path in an executable, the dynamic linker will not find it.
Thus, when the developer or deployer fails to specify a dependency explicitly (in the Nix
expression formalism, discussed below), the component will fail deterministically. That
is, it will not succeed if the dependency already happens to be available in the Nix store,
without having been specified as an input. By contrast, the deployment systems discussed
in Section 1.2 allow components to build or run successfully even if some dependencies
are not declared.
Retained dependencies A rather tricky aspect to dependency identification is the oc-
currence of retained dependencies. This is what happens when the build process of a
component stores a path to a dependency in the component. For instance, the linker invo-
cation above will store the path of the OpenSSL library in program, i.e., program will have
a “hard-coded” reference to /nix/store/5jq6jgkamxjj...-openssl-0.9.7d/lib/
Figure 2.4 shows a dump of parts of the Subversion executable stored in the file /nix/-
store/v2cc475f6nv1...-subversion-1.1.4/bin/svn. Offsets are on the left, a hexadecimal rep-
resentation in the middle, and an ASCII representation on the right. The build process for
Subversion has caused a reference to the aforementioned OpenSSL instance to be stored in
the program’s executable image. The path of OpenSSL has been stored in the RPATH field
of the header of the ELF executable, which specifies a list of directories to be searched
by the dynamic linker at runtime [160]. Even though our patched dynamic linker does
not search in /nix/store/5jq6jgkamxjj...-openssl-0.9.7d/lib by default, it will find the library
anyway through the executable’s RPATH.
This might appear to be bad news for our attempts to prevent undeclared dependencies.
Of course, we happen to know the internal structure of Unix executables, so for this specific
file format we can discover retained dependencies. But we do not know the format of every
file type, and we do not wish to commit ourselves to a single component composition
mechanism. E.g., Java and .NET can find retained dependencies by looking at class files
and assemblies, respectively, but only for their specific dynamic linkage mechanisms (and
not for dependencies loaded at runtime).
The hashing scheme comes to the rescue once more. The hash part of component paths is highly distinctive, e.g., 5jq6jgkamxjj.... Therefore we can discover retained dependencies generically, independent of specific file formats, by scanning for occurrences of hash
parts. For instance, the executable image in Figure 2.4 contains the highlighted string
5jq6jgkamxjj..., which is evidence that an execution of the svn program might need that
particular OpenSSL instance. Likewise, we can see that it has a retained dependency on
some Glibc instance (/nix/store/72by2iw5wd8i...). Thus, we automatically add these as runtime dependencies of the Subversion component. Using this scanning approach, we find
the runtime dependencies indicated in Figure 2.1.
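A minimal version of such a scanner can be sketched as follows (the hash parts are the abridged prefixes from the text, padded with placeholder characters to 32; the file contents are invented):

```python
# Sketch: discover retained dependencies by scanning a build output's raw
# bytes for the 32-character hash parts of known store paths.
def find_retained_deps(data: bytes, store_paths: dict) -> set:
    """store_paths maps 32-character hash parts to component names."""
    return {name for part, name in store_paths.items()
            if part.encode() in data}

# Hash parts padded with 'x' to 32 characters for illustration:
known = {"5jq6jgkamxjjxxxxxxxxxxxxxxxxxxxx": "openssl-0.9.7d",
         "72by2iw5wd8ixxxxxxxxxxxxxxxxxxxx": "glibc-2.3.5",
         "bwacc7a5c5n3xxxxxxxxxxxxxxxxxxxx": "hello-2.1.1"}

# A fake executable image with an RPATH-like reference to OpenSSL:
image = (b"\x7fELF..."
         b"/nix/store/5jq6jgkamxjjxxxxxxxxxxxxxxxxxxxx-openssl-0.9.7d/lib")
print(sorted(find_retained_deps(image, known)))
```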
This approach might seem a bit dodgy. After all, what happens when a file name is rep-
resented in a non-standard way, e.g., in UTF-16 [34, Section 2.5] or when the executable
is compressed? In Chapter 3 the dependency problem is cast in terms of memory man-
agement in programming languages, which justifies the scanning approach as an analogue
of conservative garbage collection. Whether this is a legitimate approach is an empirical
question, which is addressed in Section 7.1.5.
Note that strictly speaking it is not the use of cryptographic hashes per se, but globally
unique identifiers in file names that make this work. A sufficiently long pseudo-random
number also does the trick. However, the hashes are needed to prevent interference, while
at the same time preventing unnecessary duplication of identical components (which would
happen with random paths).
Closures Section 1.2 first stated the goal of complete deployment: safe deployment re-
quires that there are no missing dependencies. This means that we need to deploy closures
of components under the “depends-on” relation. That is, when we deploy (i.e., copy) a
component X to a client machine, and X depends on Y , then we also need to deploy Y to
the client machine.
The hash scanning approach gives us all runtime dependencies of a component, while
hashes themselves prevent undeclared build-time dependencies. Furthermore, these de-
pendencies are exact, not nominal (see page 8). Thus, Nix knows the entire dependency
graph, both at build time and runtime. With full knowledge of the dependency graph, Nix
can compute closures of components. Figure 2.2 shows the closure of the Subversion 1.1.4
instance in the Nix store, found by transitively following all dependency arrows.
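Computing a closure is a plain transitive traversal of the dependency graph. A sketch in Python, over a hypothetical graph loosely modelled on the Subversion example (the component names and edges are illustrative only):

```python
def closure(roots, depends_on):
    """Return the set of store paths reachable from `roots` by
    transitively following the depends-on relation."""
    result = set()
    todo = list(roots)
    while todo:
        path = todo.pop()
        if path not in result:
            result.add(path)
            todo.extend(depends_on.get(path, []))
    return result

# Hypothetical dependency graph:
depends_on = {
    "subversion-1.1.4": ["openssl-0.9.7d", "glibc-2.3.5"],
    "openssl-0.9.7d": ["glibc-2.3.5"],
    "glibc-2.3.5": [],
}
# Everything that must be copied to deploy Subversion:
to_deploy = closure(["subversion-1.1.4"], depends_on)
```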
In summary, the purely functional model, and its concrete implementation in the form
of the hashing approach used by the Nix store, prevents interference and enables complete
deployment. It makes deployment much more likely to be correct, and is therefore one
of the major results of this thesis. However, the purely functional model does provoke a
number of questions, such as:
• Hashes do not appear to be very user-friendly. Can we hide them from users in
everyday interaction?
• Can we deploy upgrades efficiently? That is, suppose that we want to upgrade Glibc
(a dependency of all other components in Figure 2.1). Can we prevent a costly
redownload of all dependent components?
As we will see in the remainder of this thesis, the answer to these questions is “yes”.
2.2. Nix expressions
Nix components are built from Nix expressions. The language of Nix expressions is a sim-
ple purely functional language used to describe components and the compositions thereof.
This section introduces the Nix expression language using a simple example. In com-
puter science’s finest tradition we will start with “Hello World”; to be precise, a Nix ex-
pression that builds the GNU Hello package. GNU Hello is a program that prints out the
text Hello World and so “allows nonprogrammers to use a classic computer science tool
which would otherwise be unavailable to them” [67]. It is representative of what one must
do to deploy a component using Nix. Generally, to deploy a component one performs the
following three steps:
• Write a Nix expression for the component (Figure 2.6), which describes all the in-
puts involved in building the component, such as dependencies (other components
required by the component), sources, and so on.
• Write a builder (Figure 2.7)—typically a shell script—that actually builds the com-
ponent from the inputs.
• Create a composition (Figure 2.8). The Nix expression written in the first step is a
function: in order to build a concrete component, it requires that the dependencies are
filled in. To compose the component with its dependencies, we must write another
Nix expression that calls the function with appropriate arguments.
The Nix Packages collection The GNU Hello example is taken from the Nix Packages
collection (Nixpkgs), a large set of Nix expressions for common and not-so-common soft-
ware components. Since Nixpkgs contains many components and also describes their com-
positions, many files are involved in building a component. It is useful to have a mental pic-
ture of how the different parts fit together. Figure 2.5 shows the directory structure of parts
of the Nix Packages collection, including some of the Nix expressions, builders, and mis-
cellaneous inputs contained therein. The function that builds Hello components is defined
in pkgs/applications/misc/hello/default.nix. Likewise, many other components are defined,
hierarchically organised by component purpose or topic. Builders are placed in the same
directory as the Nix expression that uses them, e.g., pkgs/applications/misc/hello/
for the Hello build script.
Compositions are defined in pkgs/system/all-packages-generic.nix, which imports the
various component-specific Nix expressions and puts them all together. The file has
generic in its name because it is itself a function that returns compositions for vari-
ous machine architecture and operating system platforms. In contrast, pkgs/system/all-
packages.nix applies this function to the current platform, and thus evaluates to a set of
concrete components that can be built and installed on the current platform.
2. An Overview of Nix
Figure 2.5.: Organisation of the Nix Packages collection
There is nothing in Nix that requires this organisation; it is merely a useful convention.
It is perfectly possible to place component expressions and builders in the same directory
by naming them appropriately, e.g., hello.nix, firefox.nix, and so on. It is also possible
to put all Nix expressions in a single file, e.g., in all-packages-generic.nix itself, though
that file would then quickly spiral out of control. In fact,
it is not even necessary to define components as functions and compose them separately,
as done in the 3-step procedure above; we could remove all variability and write Nix
expressions that describe exactly one build action, rather than a family of build actions.
Clearly, that is not a very general approach.
We now look at the parts that constitute the Hello component in detail.
Nix expression for the Hello component Figure 2.6 shows a Nix expression for GNU
Hello. It has a number of salient features:
This states that the expression is a function that expects to be called with three ar-
guments: stdenv, fetchurl, and perl. They are needed to build Hello, but we don’t
know how to build them here; that’s why they are function arguments. stdenv is a
component that is used by almost all Nix Packages components; it provides a “stan-
dard” environment consisting of the things one expects in a basic Unix environment:
a C/C++ compiler (GCC, to be precise), the Bash shell, fundamental Unix tools
{stdenv, fetchurl, perl}:

stdenv.mkDerivation {
  name = "hello-2.1.1";
  builder = ./;
  src = fetchurl {
    url =;
    md5 = "70c9ccf9fac07f762c24f2df2290784d";
  };
  inherit perl;
}

Figure 2.6.: pkgs/applications/misc/hello/default.nix: Nix expression for GNU Hello
such as cp, grep, tar, etc. fetchurl is a function that downloads files. perl is the Perl
interpreter.
Nix functions generally have the form {x, y, ..., z}: e where x, y, etc. are the names
of the expected arguments, and where e is the body (result) of the function. So here,
the entire remainder of the file is the body of the function; when given the required
arguments, the body describes how to build an instance of the Hello component.
The result of the function in this case is a derivation. This is Nix-speak for a com-
ponent build action, which derives the component from its inputs. We perform a
derivation by calling stdenv.mkDerivation. mkDerivation is a function provided by
stdenv that builds a component from a set of attributes2. An attribute set is just a
list of key/value pairs where each value is an arbitrary Nix expression. They take the
general form {name1 = expr1; ... namen = exprn;}.
The attributes given to stdenv.mkDerivation are the concrete inputs to the build action.
A few of the attributes to derivations are special. The attribute name specifies the
symbolic name and version of the component. It is appended to the cryptographic
hash in store paths (see, e.g., Figure 2.1). Nix doesn’t care very much about it most
of the time.
The attribute builder specifies the script that actually builds the component. This
attribute can sometimes be omitted, in which case stdenv.mkDerivation will fill in
a default builder (which essentially performs a “standard” Unix component build
sequence consisting of the commands configure; make; make install). Hello is suf-
ficiently simple that the default builder suffices; but in this case, for educational
purposes an actual builder is shown in Figure 2.7, discussed below.
The builder has to know what the sources of the component are. Here, the attribute
src is bound to the result of a call to the fetchurl function. Given a URL and an MD5
hash of the expected contents of the file at that URL, this function builds a derivation
2mkDerivation is actually a wrapper around the primitive operation derivation, shown on page 80.
source $stdenv/setup

PATH=$perl/bin:$PATH

tar xvfz $src
cd hello-*
./configure --prefix=$out
make
make install
Figure 2.7.: pkgs/applications/misc/hello/ Builder for GNU Hello
that downloads the file and checks its hash. So the sources are dependencies that like
all other dependencies are built before Hello itself is built.
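The essential property of such a fixed-output step is that the result is verified against the declared hash, so a build cannot silently come to depend on changing remote content. The check can be sketched in Python (the function name is mine, not Nix's):

```python
import hashlib

def check_md5(contents: bytes, expected_md5: str) -> bytes:
    """Verify downloaded contents against the MD5 hash declared in
    the Nix expression; fail the build on a mismatch."""
    actual = hashlib.md5(contents).hexdigest()
    if actual != expected_md5:
        raise ValueError(
            f"hash mismatch: expected {expected_md5}, got {actual}")
    return contents
```

A fetchurl-style derivation would first download the file and then perform exactly this comparison before placing the file in the store.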
Instead of src any other name could have been used, and in fact there can be any
number of sources (bound to different attributes). However, src is customary, and it
is also expected by the default builder (which we don’t use in this example).
Since the derivation requires Perl, we have to pass the value of the perl function
argument to the builder. All attributes in the set are actually passed as environment
variables to the builder, so declaring an attribute
perl = perl;
will do the trick: it binds an attribute perl to the function argument which also hap-
pens to be called perl. However, this looks a bit silly, so there is a shorter syntax.
The inherit keyword causes the specified attributes to be bound to whatever variables
with the same name happen to be in scope.
Builder for the Hello component Figure 2.7 shows the builder for GNU Hello. To build
a component, Nix executes the builder. The attributes in the attribute set of the derivation
are passed as environment variables. Attribute values that evaluate to derivations (such as
perl, which below in Figure 2.8 is bound to a derivation) are built recursively. The paths
of the resulting components are passed through the corresponding environment variables.
The special environment variable out holds the intended output path. This is where the
builder should store its result.
The build script for Hello is a typical Unix build script: it unpacks the sources, runs a
configure script that detects platform characteristics and generates Makefiles, runs make to
compile the program, and finally runs make install to copy it to its intended location in the
file system. However, there are some Nix-specific points:
When Nix runs a builder, it initially completely clears all environment variables (ex-
cept for the attributes declared in the derivation). For example, the PATH variable is
empty3. This is done to prevent undeclared inputs from being used in the build pro-
cess. If for example the PATH contained /usr/bin, then the builder might accidentally
use /usr/bin/gcc.
3Actually, it is initialised to /path-not-set to prevent Bash from setting it to a default value.
The first step is therefore to set up the environment so that the tools included in the
standard environment are available. This is done by calling the setup script of the
standard environment. The environment variable stdenv points to the location of the
standard environment being used. (It wasn’t specified explicitly as an attribute in the
Hello expression in Figure 2.6, but mkDerivation adds it automatically.)
Since Hello needs Perl, we have to make sure that Perl is in the PATH. The perl en-
vironment variable points to the location of the Perl component (since it was passed
in as an attribute to the derivation), so the shell expression $perl/bin evaluates to the
directory containing the Perl interpreter.
Now we have to unpack the sources. The src attribute was bound to the result of
fetching the Hello source distribution from the network, so the src environment vari-
able points to the location in the Nix store to which the distribution was downloaded.
After unpacking, we change to the resulting source directory.
GNU Hello is a typical Autoconf-based package. Autoconf [64, 172] is a tool that
allows a source-based package to automatically adapt itself to its environment by de-
tecting properties such as availability and paths of libraries and other dependencies,
quirks in the C compiler, and so on. Autoconf-based packages always have a script
called configure that performs these tests and generates files such as Makefiles, C
header files (e.g., config.h), etc. Thus, we first have to run Hello’s configure script
(which was generated by Autoconf). The store path where the component is to be
stored is passed in through the environment variable out. Here we tell configure to
produce Makefiles that will build and install a component in the intended output path
in the Nix store.
Finally we build Hello (make) and install it into the location specified by the out
variable (make install).
Composition of the Hello component Since the Nix expression for Hello in Figure 2.6
is a function, we cannot use it directly to build and install a Hello component. We need
to compose it first; that is, we have to call the function with the expected arguments. In
the Nix Packages collection this is done in the file pkgs/system/all-packages-generic.nix,
where all Nix expressions for components are imported and called with the appropriate
arguments. This is a large file, but the parts relevant to Hello are shown in Figure 2.8.
This file defines a set of attributes, all of which in this case evaluate to concrete
derivations (i.e., not functions). In fact, we define a mutually recursive set of at-
tributes using the keyword rec. That is, the attributes can refer to each other4. This
is precisely what we want, namely, “plug” the various components into each other.
Here we import the Nix expression for GNU Hello. The import operation just loads
and returns the specified Nix expression. In fact, we could just have put the contents
4While attributes can refer to values defined in the same scope, derivations cannot be mutual dependencies:
such derivations would be impossible to build (see also Lemma 3 on page 130).
rec {

  hello = (import ../applications/misc/hello) {
    inherit fetchurl stdenv perl;
  };

  perl = (import ../development/interpreters/perl) {
    inherit fetchurl stdenv;
  };

  fetchurl = (import ../build-support/fetchurl) {
    inherit stdenv; ...
  };

  stdenv = ...;

}

Figure 2.8.: pkgs/system/all-packages-generic.nix: Composition of GNU Hello and other
components
of Figure 2.6 in all-packages-generic.nix at this point; this is completely equivalent,
but it would make the file rather bulky.
Relative paths such as ../applications/misc/hello are relative to the directory of the Nix
expression that contains them, rather than the current directory or some global search
path. Since the expression in Figure 2.8 is stored in pkgs/system/all-packages-
generic.nix, this relative path resolves to pkgs/applications/misc/hello, which is the
directory of the Nix expression in Figure 2.6.
Note that we refer to ../applications/misc/hello, not ../applications/misc/hello/default.nix.
The latter is actually equivalent to the former in the case of directories;
when importing a path referring to a directory, Nix automatically appends /default.nix
to the path.
This is where the actual composition takes place. Here we call the function imported
from ../applications/misc/hello with an attribute set containing the arguments that the
function expects, namely fetchurl, stdenv, and perl. We use the inherit construct
again to copy the attributes defined in the surrounding scope (we could also have
written fetchurl = fetchurl;, etc.). Note that Nix function arguments are not positional;
Nix functions just expect an attribute set containing attributes corresponding to the
declared (formal) arguments.
The result of this function call is an actual derivation that can be built by Nix (since
when we fill in the arguments of the function, what we get is its body, which is the
call to stdenv.mkDerivation in Figure 2.6).
Likewise, concrete instances of Perl, fetchurl, and the standard environment are
constructed by calling their functions with the appropriate arguments.
The attribute fetchurl actually does not evaluate to a derivation, but to a function that
takes arguments url and md5, as seen in Figure 2.6. This is an example of partial
application.
Thus, the value bound to the hello attribute in Figure 2.8 represents a derivation of an
instance of Hello, as well as of all its dependencies.
Surprisingly, describing not just components but also compositions is not a common
feature of the component description formalisms of deployment systems. For instance,
RPM spec files (see page 6) describe how to build components, but not how to build the
dependencies—even though the dependencies can have a profound effect on the build re-
sult. The actual composition is left implicit. Nominal dependency specifications are used
to require the presence of certain dependencies, but as we have seen, nominal dependency
specifications are insufficient. Formalisms such as RPM spec files therefore woefully un-
derspecify components. Only a full set of RPM spec files closed under the dependency
relation fully describes a component build (disregarding, of course, the real possibility of
undeclared dependencies and other environmental influences).
A slightly more complex example Figure 2.9 shows a Nix expression that demonstrates
some additional language features. It contains a function that produces Subversion com-
ponents. Subversion [31, 137] is a version management system [159, 180]. Subversion
clients can access a central version management repository through various means: local
file system access, WebDAV (an extension to HTTP to support authoring and version-
ing [30]), and the specialised svnserve protocol. These various access methods lead to a
number of optional features, which in turn lead to optional dependencies5:
• Subversion has two storage back-ends for repositories: one that uses a built-in file
system based storage, and one that uses the external Berkeley DB embedded database
library. Use of the latter back-end induces a dependency on the Berkeley DB component.
• Subversion can build an Apache module that allows it to act as a WebDAV server for
remote clients. This feature requires the Apache HTTPD server component.
• Subversion can access remote WebDAV repositories over HTTP by default. If access
over HTTPS (i.e., HTTP encrypted over the Secure Sockets Layer [76]) is desired,
then it requires the OpenSSL cryptographic library component.
• Data sent over HTTP connections can be compressed using the Zlib algorithm,
which requires the Zlib library component.
In other words, Subversion has quite a bit of build-time variability. Figure 2.9 shows
how such variability can be expressed in the Nix expression language. For instance, the
optional support for SSL encryption is expressed by giving the function a Boolean param-
eter sslSupport 17 that specifies whether to build a Subversion instance with SSL support.
5There are actually even more dependencies than listed here, such as bindings for various languages. These
have been omitted to keep the example short.
{ bdbSupport ? false
, httpServer ? false
, sslSupport ? false
, compressionSupport ? false
, stdenv, fetchurl
, openssl ? null
, httpd ? null, db4 ? null, expat, zlib ? null # 18
}:

assert bdbSupport -> db4 != null;
assert httpServer -> httpd != null && httpd.expat == expat;
assert sslSupport -> openssl != null &&
       (httpServer -> httpd.openssl == openssl);
assert compressionSupport -> zlib != null;

stdenv.mkDerivation {
  name = "subversion-1.2.1";
  builder = ./;
  src = fetchurl {
    url =;
    md5 = "0b546195ca794c327c6830f2e88661f7";
  };

  openssl = if sslSupport then openssl else null;
  zlib = if compressionSupport then zlib else null;
  httpd = if httpServer then httpd else null;
  db4 = if bdbSupport then db4 else null;
  inherit expat bdbSupport httpServer sslSupport;
}
Figure 2.9.: pkgs/applications/version-management/subversion/default.nix: Nix expression
for Subversion
This is distinct from the passing of the actual OpenSSL dependency 18 in order to separate
logical features from the dependencies required to implement them, if any. Many features
require no dependencies, or require multiple dependencies. The list of function arguments
also shows the language feature of default arguments, which are arguments that may be
omitted when the function is called. If they are omitted, the default is used instead.
A call to the Subversion function (e.g., in all-packages-generic.nix) to build a minimal
Subversion component will look like this:
subversion = (import .../subversion) {
  inherit fetchurl stdenv expat;
};
On the other hand, the following call builds a full-featured Subversion component:
subversion = (import .../subversion) {
  inherit fetchurl stdenv openssl db4 expat zlib;
  httpd = apacheHttpd;
  localServer = true;
  httpServer = true;
  sslSupport = true;
  compressionSupport = true;
};
There are some other language aspects worth noting in Figure 2.9. Assertions enable
consistency checking between components and feature selections. The assertion in 19
verifies that if Berkeley DB support is requested, the db4 dependency must not be null.
The value null is a special constant that indicates an omitted component, which would be
wrong here. Note that the default for the db4 argument is in fact null, which is appropriate
for bdbSupport’s default of false. But if the function is called with bdbSupport set to true,
then the assertion protects the user against forgetting to specify the db4 parameter.
The next two assertions 20 express consistency requirements between components. The
first one states that if the Apache WebDAV module is to be built, then Apache’s expat
dependency (Expat is an XML parsing library) should be exactly equal to Subversion’s
expat dependency, where “exactly equal” means that they should be in the same store path.
This is because the dynamic libraries of Apache and Subversion will end up being linked
against each other. If both bring in different instances of Expat, a runtime link error will
ensue. Hence the requirement that both Expats are to be the same. Likewise, if SSL
and WebDAV support are enabled, Apache and Subversion should use the same OpenSSL
instance.
As in the Hello example, the dependencies must be passed to the build script. But we
only pass the optional dependencies if their corresponding features are enabled 21 . If they
are disabled, the value null is passed to the builder (null corresponds to an empty string in
the environment variables). This is to relieve the programmer from having to figure out
at the call site what specific dependencies should be passed for a given feature selection.
After all, that would break abstraction—such decisions should be made by the component’s
Nix expression, not by the caller. Thus, the caller can always pass all dependencies, and
the component’s Nix expression will figure out what dependencies to use. As we will see
in Section 4.1 (page 62), unused dependencies will not be evaluated, let alone built,
because the Nix expression language is lazy (values are only computed when they are
needed).
Figure 2.10 shows the builder for the Subversion component, just to give an idea of what
is involved in getting a non-trivial piece of software to work in Nix. This builder is at a
higher level than the one we have seen previously for Hello (Figure 2.7), as it uses the
generic builder, which is a shell function defined by the standard environment that takes
care of building most Unix components with little or no customisation. Most modern Unix
software has a standard build interface consisting of the following sequence of commands:
tar xvf ...sources.tar
./configure --prefix=...
make
make install
buildInputs="$openssl $zlib $db4 $httpd $expat"

source $stdenv/setup

configureFlags="--without-gdbm --disable-static"

if test "$bdbSupport"; then
    configureFlags="--with-berkeley-db=$db4 $configureFlags"
fi

if test "$sslSupport"; then
    configureFlags="--with-ssl --with-libs=$openssl $configureFlags"
fi

if test "$httpServer"; then
    configureFlags="--with-apxs=$httpd/bin/apxs --with-apr=$httpd ..."
    makeFlags="APACHE_LIBEXECDIR=$out/modules $makeFlags"
else
    configureFlags="--without-apxs $configureFlags"
fi

genericBuild
Figure 2.10.: pkgs/applications/version-management/subversion/ Builder
This is in essence what the generic builder does. For many components, a simple call 24 to
the generic builder suffices. The generic builder is discussed in more detail in Section 7.1.2.
So the effort to build Subversion is quite modest: it is a matter of calling Subversion’s
configure script with the appropriate flags to tell it where it can find the dependencies. For
instance, the configure flags to enable and locate Berkeley DB are set in 23 . Actually,
following the Autoconf philosophy of auto-detecting system capabilities, some of the op-
tional features are enabled automatically if the corresponding dependency is found. For
example, Zlib compression support is enabled automatically if configure finds Zlib in the
compiler and linker search path. The variable buildInputs contains a list of components that
should be added to such search paths; thus all components added in 22 are in the scope of
the compiler and linker.
This concludes the brief overview of the Nix expression language. Chapter 4 provides a
more in-depth discussion of the syntax and semantics of the language.
2.3. Package management
Installing Now that we have written Nix expressions for Hello, we want to build and
install the component. That is, we want to reach a state where the Hello component is in
some way “available” to the user. In a Unix system, this means that the application should
be in the user’s search path:
$ hello
Hello, world!
while in different settings it might mean that the application becomes available as an icon
on the desktop, or as a menu item in the Start Menu.
On Unix, operations such as installation, upgrading, and uninstallation are performed
through the command nix-env, which manipulates so-called user environments—the views
on “available” components (discussed below on page 36). For instance,
$ nix-env -f .../all-packages.nix -i hello
installing `hello-2.1.1'
evaluates the Nix expression in all-packages.nix, looks for a derivation with symbolic name
hello, builds the derivation, and makes the result available to the user. The building of the
derivation is a recursive process: Nix will first build the dependencies of Hello, which
entails building their dependencies, and so on. However, if a derivation has been built
previously, it will not be rebuilt.
Queries It is of course possible to query the set of installed components:
$ nix-env -q
This query prints the set of components in the current user environment. Likewise, it is
possible to query which components are available for installation in a Nix expression:
$ nix-env -f .../all-packages.nix -qa
It is usually not convenient to specify the Nix expression all the time. Therefore it is possi-
ble to specify a specific Nix expression as the default for package management operations:
$ nix-env -I .../all-packages.nix
All subsequent nix-env invocations will then use this file if no expression is explicitly
specified.
Upgrades and rollbacks When a new version of the set of Nix expressions that contains
Hello comes along, we can upgrade the installed Hello:
$ hello -v
hello - GNU hello 2.1.1
$ nix-env -f .../all-packages.nix -i hello
upgrading `hello-2.1.1' to `hello-2.1.2'
$ hello -v
hello - GNU hello 2.1.2
This is standard procedure for a package management system, completely analogous to,
e.g., rpm -i hello and rpm -U hello. However, if it turns out that the new version of Hello
does not meet our requirements, we can roll back to the previous version:
$ nix-env --rollback
$ hello -v
hello - GNU hello 2.1.1
The reason that this is possible is that the new version of Hello resides in a different path
in the Nix store, since at least some inputs of the build process of the new Hello must be
different (e.g., its sources and its name attribute). The old one is still there. There is no
destructive upgrading: in the purely functional model, components never change after they
have been built.
User environments User environments are Nix’s mechanism for allowing different users
to have different configurations, and to do atomic upgrades and rollbacks. Clearly, we want
to abstract over store paths; it would not do for users to have to type
$ /nix/store/dpmvp969yhdq...-subversion-1.1.3/bin/svn
to start a program.
Of course we could set up the PATH environment variable to include the bin direc-
tory of every component we want to use, but this is not very convenient since changing
PATH doesn’t take effect for already existing processes. The solution Nix uses is to cre-
ate directory trees of symlinks to activated components (similar to systems such as GNU
Stow [78])6. These are called user environments and they are components themselves
(though automatically generated by nix-env), so they too reside in the Nix store. In Fig-
ure 2.11 the user environment /nix/store/0c1p5z4kda11...-user-env contains a symlink to
just Subversion 1.1.2 (dashed arrows in the figure indicate symlinks). This is what we
would have obtained if we had performed
$ nix-env -i subversion
on a set of Nix expressions that contained Subversion 1.1.2.
This does not in itself solve the problem, of course; a user would not want to type
/nix/store/0c1p5z4kda11...-user-env/bin/svn either. This is why there are symlinks outside
of the store that point to the user environments in the store, e.g., the symlinks default-42-
link and default-43-link in the example. These are called generations since whenever one
performs a nix-env operation, a new user environment is generated based on the current
one. For instance, generation 43 was created from generation 42 when the user executed
6While this approach is rather Unix-specific, user environments can be implemented in many other ways, as is
discussed in Section 7.2.
Figure 2.11.: User environments
$ nix-env -i subversion firefox
on a set of Nix expressions that contained Firefox and a new version of Subversion.
Generations are grouped together into profiles so that different users don’t interfere with
each other (unless they share a profile). For example, the links default-42-link and default-
43-link are part of the profile called default. The file default itself is a symlink that points
to the current generation. When we perform a nix-env operation, a new user environment
and generation link are created based on the current one, and finally the default symlink is
made to point at the new generation.
Upgrades are atomic. This means that a user can never observe that an upgrade is “half-
way through”; the user either sees all the components of the old user environment, or all the
components of the new user environment. This is a major improvement over most deploy-
ment systems which work by destructive upgrading. Due to the purely functional model,
as we are building new components (including user environments, which are components
also), the components in the user’s current user environment are left entirely untouched.
They are not in any way changed. The last step in the upgrade—switching the current
generation symlink—can be performed in an atomic action on Unix.
Different users can have different profiles. The user’s current profile is pointed to by the
symlink ~/.nix-profile (where ~ denotes the user's home directory). Putting the directory
~/.nix-profile/bin in the user's PATH environment variable completes the user environment
picture (Figure 2.11), for if we now resolve the chain of symlinks from ~/.nix-profile/bin,
we eventually end up in the bin directory of the current user environment, which in turn
contains symlinks to the activated applications. A user can create or switch to a new profile:
$ nix-env --switch-profile /nix/var/nix/profiles/my-new-profile
which will simply flip the ~/.nix-profile symlink to the specified profile. Subsequent nix-env
operations will then create new generations in this profile.
Uninstalling and garbage collection A component can be removed from a user environment:
$ nix-env -e hello
uninstalling `hello-2.1.1'
$ hello -v
bash: hello: No such file or directory
However, nix-env -e does not actually remove the component from the Nix store. This is
because the component might still be in use by the user environment of another user, or be
a dependency of a component in some environment. More importantly, if the component
were removed, it would no longer be possible to perform a rollback.
To reclaim disk space, it is necessary to periodically run the garbage collector, which
deletes from the store any components not reachable from any generation of any user
environment. (Reachability is defined with respect to the runtime dependencies found by
the hash scanning approach discussed earlier, and in more detail in Section 3.4.) That is,
the generations are roots of the garbage collector.
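As an illustration (a sketch over a toy references graph, not Nix's actual implementation), the collector is an ordinary reachability computation with the generations as roots:

```python
def garbage_collect(store: dict, roots: set) -> set:
    """Return the set of store paths that may be deleted.

    `store` maps each store path to the set of paths it references;
    `roots` are the user-environment generations. Anything not
    reachable from a root is garbage.
    """
    live, stack = set(), list(roots)
    while stack:
        path = stack.pop()
        if path not in live:
            live.add(path)
            # Follow the references of this path.
            stack.extend(store.get(path, ()))
    return set(store) - live
```

For example, if generation 2 references hello-2.1.1, which references glibc, then an old hello build referenced by no generation is the only garbage.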
This means that the Hello component cannot be garbage collected until the old gener-
ations that included it are gone. The following command removes any generations of the
user’s profile except the current:
$ nix-env --remove-generations old
Note that this removes the ability to roll back to any configuration prior to the invocation of
this command. Thus, the user should only perform this operation when sure that rollback
is not necessary.
After we have removed the old generations, the Hello component has become garbage,
assuming no generations of other profiles reach it. Thus an execution of the garbage col-
lector will remove the component from the Nix store:
$ nix-store --gc
deleting `/nix/store/bwacc7a5c5n3...-hello-2.1.1'
(The command nix-store performs “low-level” Nix store operations.)
The advantage of garbage collection of components is that users never need to consider
whether some dependency might still be needed. A common phenomenon in many de-
ployment systems is that users are asked to confirm whether it is safe to remove some
component (e.g., “Should Windows delete foo.dll?”). Since Nix ensures reliable depen-
dency information, garbage collection can be safe and automatic. The act of uninstallation
is a logical action concerning only top-level components (i.e., applications), not dependencies,
which should remain hidden from users.
Figure 2.12.: Two-stage building of Nix expressions (a Nix expression outside the Nix store is translated to a store derivation in the Nix store, which nix-store --realise then builds into an output, also in the Nix store)
2.4. Store derivations
So how do we build components from Nix expressions? This could be expressed directly
in terms of Nix expressions, but there are several reasons why this is a bad idea. First, the
language of Nix expressions is fairly high-level, and as the primary interface for develop-
ers, subject to evolution; i.e., the language might change to accommodate new features.
However, this means that we would have to be able to deal with variability in the Nix
expression language itself: several versions of the language would need to be able to co-
exist in the store. Second, the richness of the language is nice for users but complicates
the sorts of operations that we want to perform (e.g., building and deployment). Third,
Nix expressions cannot easily be identified uniquely. Since Nix expressions can import
other expressions scattered all over the file system, it is not straightforward to generate an
identifier (such as a cryptographic hash) that uniquely identifies the expression. Finally, a
monolithic architecture makes it hard to use different component specification formalisms
on top of the Nix system (e.g., we could retarget Makefiles to use Nix as a back-end).
For these reasons Nix expressions are not built directly; rather, they are translated to
the more primitive language of store derivations, which encode single component build
actions. This is analogous to the way that compilers generally do the bulk of their work
on simpler intermediate representations of the code being compiled, rather than on a full-
blown language with all its complexities. Store derivations are placed in the Nix store, and
as such have a store path too. The advantage of this two-level build process is that the paths
of store derivations give us a unique identification of objects of source deployment, just as
paths of binary components uniquely identify objects of binary deployment.
Figure 2.12 outlines this two-stage build process: Nix expressions are first translated
to store derivations that live in the Nix store and that each describe a single build action
with all variability removed. These store derivations can then be built, which results in
derivation outputs that also live in the Nix store.
The exact details of the translation of Nix derivation expressions into store derivations are
not important here; the process is fully described in Section 5.4. What matters is that each
derivation is translated recursively into a store derivation. This translation is performed
automatically by nix-env, but it can also be done explicitly through the command nix-
instantiate. For instance, supposing that foo.nix is a Nix expression that selects the value
hello from all-packages-generic.nix in Figure 2.8, then applying nix-instantiate will yield
the following:
$ nix-instantiate foo.nix
/nix/store/1ja1w63wbk5q...-hello-2.1.1.drv
That is, the expression foo.nix is evaluated, and the resulting top-level derivation is translated
to a store derivation, which is placed in /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv. All
input derivations of hello are recursively translated and placed in the store as well.

output = "/nix/store/bwacc7a5c5n3...-hello-2.1.1"
inputDrvs = {
    "/nix/store/mjdfbi6dcyz7...-perl-5.8.6.drv"
}
inputSrcs = { "/nix/store/" }
system = "i686-linux"
builder = "/nix/store/3nca8lmpr8gg...-bash-3.0/bin/sh"
args = ["-e", "/nix/store/"]
envVars = {
    ("builder", "/nix/store/3nca8lmpr8gg...-bash-3.0/bin/sh"),
    ("name", "hello-2.1.1"),
    ("out", "/nix/store/bwacc7a5c5n3...-hello-2.1.1"),
    ("perl", "/nix/store/h87pfv8klr4p...-perl-5.8.6"),
    ("src", "/nix/store/h6gq0lmj9lkg...-hello-2.1.1.tar.gz"),
    ("stdenv", "/nix/store/hhxbaln5n11c...-stdenv-linux"),
    ("system", "i686-linux"),
    ("gtk", "/store/8yzprq56x5fa...-gtk+-2.6.6"),
}

Figure 2.13.: Store derivation for the hello attribute in Figure 2.8
The store derivation in /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv is shown in Figure 2.13.⁷
It lists all information necessary to build the component, with references to
the store derivations of dependencies:
the store derivations of dependencies:
• The output field specifies the path that will be built by this derivation, if and when it
is built.⁸ It is computed essentially by hashing the store derivation with the output
field cleared.
• The inputDrvs field contains the paths of all the input derivations. These have to be
built before we can build this derivation.
• The inputSrcs field contains the paths of all input sources in the Nix store. The
Hello Nix expression in Figure 2.8 referred to a local file; this file has been copied
by nix-instantiate to the Nix store, since everything used by the build process must
be inside the Nix store to prevent undeclared dependencies.
⁷This is a pretty-printed representation of the actual store derivation, which is encoded as an ATerm [166].
Figure 5.8 on page 105 shows the actual ATerm.
⁸This field exists in the extensional model described in Chapter 5, where the output path is computed in advance,
but not in the intensional model described in Chapter 6, where it is only known after the derivation has been
built.
• The system field specifies the hardware and operating system on which the
derivation is to be built. This field is part of the hash computation, so derivations
for different systems (e.g., i686-linux versus powerpc-darwin) result in different store
paths.
• The builder field is the program to be executed. The reader might wonder about the
apparent mismatch between the builder declared in Figure 2.6 and the one specified
here. This is because the function stdenv.mkDerivation massages the arguments it
receives a bit, substituting a shell binary as the actual builder, with the original shell
script passed to the shell binary as a command-line argument.
• The args field lists the command-line arguments to be passed to the builder.
• The envVars field lists the environment variables to be passed to the builder.
Note that inputDrvs and envVars specify different paths for corresponding dependencies.
For example, Hello's Perl dependency is built by the derivation stored in
/nix/store/mjdfbi6dcyz7...-perl-5.8.6.drv. The output path of this derivation is
/nix/store/h87pfv8klr4p...-perl-5.8.6.
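The computation of the output field described above can be sketched roughly as follows. The actual hashing scheme (detailed in Chapter 5) differs in its serialisation and hash truncation, so this is merely illustrative:

```python
import hashlib

def output_path(drv: dict, name: str) -> str:
    """Derive a store path from a hash over the derivation with its
    output field cleared, so the path is known before the build runs."""
    cleared = dict(drv, output="")
    # Serialise deterministically; Nix uses ATerms, we use sorted pairs.
    text = repr(sorted(cleared.items())).encode()
    digest = hashlib.sha256(text).hexdigest()[:12]
    return "/nix/store/%s-%s" % (digest, name)
```

Because the output field is cleared before hashing, the computed path does not depend on itself; but any change to an input, the builder, or the system field yields a different path.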
Building store derivations Store derivations can be built by calling the command nix-
store --realise, which "realises" the effect of a derivation in the file system. It builds the
derivation (if the output path of the derivation does not exist already) and prints the output
path:
$ nix-store --realise /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv
/nix/store/bwacc7a5c5n3...-hello-2.1.1
Nix users do not generally have to deal with store derivations. For instance, the nix-env
command, which is really just a fancy wrapper around nix-instantiate and nix-store --realise,
hides them entirely: the user interacts only with high-level Nix expressions. However, store
derivations are important when implementing deployment policies. Their relevance is that
they give us a way to uniquely identify a component both in source and binary form,
through the paths of the store derivation and the output, respectively. This can be used to
implement a variety of deployment policies, as we will see in Section 2.5.
Closures revisited A crucial operation for deployment is to query the closure of a store
path under the dependency relation, as discussed in Section 2.1. Nix maintains this infor-
mation for all paths in the Nix store. For instance, the closure of the Hello component can
be found as follows:
$ nix-store -qR /nix/store/bwacc7a5c5n3...-hello-2.1.1/bin/hello
This means that when we deploy this instance of Hello, we must copy all four of the paths
listed here.⁹ When we copy these paths, we have binary deployment. It should be pointed
out that /nix/store/h87pfv8klr4p...-perl-5.8.6 is not in the closure, since it is not referenced
from the Hello binary or any of the paths referenced by it. That is, Perl is exclusively a
build-time dependency.
Similarly, we can query the closure of the store derivation:
$ nix-store -qR /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv
... (98 more paths omitted) ...
Copying these paths gives us source deployment, since the store derivations describe
build actions from source.
It is also possible to combine the two:
$ nix-store -qR --include-outputs \
    /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv
... (155 more paths omitted) ...
The --include-outputs flag causes the closures of the outputs of derivations to be in-
cluded. It is worth noting that this set is more than the union of the two sets above, since
there are many output paths of derivations that are not in the closure of the Hello binary.
For instance, though the Hello binary does not reference Perl, the Perl binary is included
here because its derivation is included.
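This behaviour can be illustrated with a small sketch over a toy references graph (the names here are illustrative, not actual store paths):

```python
def closure(refs: dict, start: str) -> set:
    """Transitive closure of `start` under the references relation."""
    seen, stack = set(), [start]
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(refs.get(p, ()))
    return seen

def drv_closure_with_outputs(refs: dict, outputs: dict, drv: str) -> set:
    """Closure of a store derivation, plus the closures of the outputs
    of every derivation encountered (cf. --include-outputs)."""
    result = set()
    for d in closure(refs, drv):
        result.add(d)
        if d in outputs:
            result |= closure(refs, outputs[d])
    return result
```

With hello.drv depending on perl.drv, and the hello binary referencing only glibc at runtime, Perl's output is absent from the runtime closure of hello but present in the --include-outputs closure of hello.drv.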
2.5. Deployment models
What we have described above is not deployment—it is build management combined with
local package management. The calls to nix-env build components locally, and construct
user environments out of the build results. To perform actual software deployment, it is
necessary to get the Nix expressions to the client machines. Nix in itself does not provide
any specific way to do so. Rather, it provides the mechanisms to allow various deployment
policies. The commands nix-instantiate and nix-store provide the fundamental mechanisms.
(The command nix-env, implemented on top of these, already adds a bit of policy,
namely user environments.)
Here are some examples of deployment models implemented and used in practice to
distribute the Nix expressions of the Nix Packages collection:
• Manual download. A user can manually download releases of the Nix Packages
collection as tar archives, unpack them, and apply nix-env to install components.
⁹In case the reader is wondering: the quite unnecessary dependency on linux-headers is through glibc; see
Section 6.7. The dependency on gcc is entirely unnecessary and does not occur in the "production" version
of Hello, which uses the utility patchelf (page 179) to prevent this retained dependency.
The downside to this approach is that it is rather laborious; in particular, it is not
easy to stay up-to-date with new releases.
• Updating through a version management system. As a refinement to the previous
model, the set of Nix expressions can be distributed through a version management
system. For example, Nixpkgs can be obtained from its Subversion repository. After
the initial checkout into a local working copy, staying up-to-date is a simple matter
of running an update operation on the working copy. An added advantage is that it
becomes easy to maintain local modifications to the Nix expressions.
• Channels. A more sophisticated model is the notion of channels. A channel is
a URL that points to a tar archive of Nix expressions. A user can subscribe to a
channel:
$ nix-channel --add http://server/channels/nixpkgs-stable
Then, the command
$ nix-channel --update
downloads the Nix expressions from the subscribed channels and makes them the
default for nix-env operations. That is, an operation such as
$ nix-env -i firefox
will install a derivation named firefox from one of the subscribed channels. In par-
ticular, it is easy to keep the entire system up-to-date:
$ nix-channel --update
$ nix-env -u '*'
where nix-env -u '*' updates all installed components in the user environment to the
latest versions available in the subscribed channels.
• One-click installations. When a user wants to install a specific program, by far
the easiest method is to simply install it by clicking on it on some distribution web
page. Of course, this should also install all dependencies. An example of such
one-click installations is shown in Figure 2.14 (along with direct download of the
Nix expressions, and channel subscription information). Users can click on links on
release web pages to install specific components into their user environments. The
links lead to small files containing the information necessary to install the component
(e.g., Nix expressions), while the installation of the component is performed by a
simple tool associated with the MIME type [74] of those files.
Chapter 7 looks at the wide variety of deployment policies in much greater detail.
Figure 2.14.: Deployment policies: direct download, channels, and one-click installation
2.6. Transparent source/binary deployment
The Nix model as described above is a source deployment model. Nix expressions describe
how to build components from source.¹⁰ A source deployment model is convenient for
deployers because they do not need to create binary packages explicitly. It also gives users
the freedom to make local modifications and feature selections.
However, building all dependencies is rather slow. For the Hello component in Nix
Packages,¹¹ it entails building 63 derivations, including the builds of
hello-2.1.1, stdenv-linux, gcc-3.4.4, bash-3.0, gnutar-1.15.1, curl-7.14.0,
binutils-2.16.1, gnupatch-2.5.4, gnused-4.1.4, patchelf-0.1pre2286,
linux-headers-, zlib-1.2.2, bzip2-1.0.3, findutils-4.2.20, gnugrep-2.5.1a,
coreutils-5.2.1, perl-5.8.6, gcc-3.4.4, gnumake-3.80, gzip-1.3.3, gawk-3.1.4,
pcre-6.0, glibc-2.3.5, diffutils-2.8.1, gcc-3.4.4,
the fetchurl downloads of the sources of those packages, and some derivations for the boot-
strapping of stdenv (e.g., the statically linked compiler used to compile gcc-3.4.4); with
¹⁰As we will see in Section 7.1.4, it is possible to support binary-only packages by using a "trivial" builder
that just fetches a binary distribution of a component, unpacks it and copies it to $out. This is how the Nix
Packages collection supports components such as Adobe Reader.
¹¹Data based on nixpkgs-0.9pre3252 on i686-linux.
downloads of source archives totalling 130 megabytes; and with the build results occupy-
ing 357 megabytes of disk space (although most of it can be garbage collected).
Thus, source deployment is clearly awful for most end-users, who do not have the re-
sources or patience for a full build from source of the entire dependency graph.
However, Nix allows the best of both worlds—source deployment and binary deploy-
ment—through its transparent source/binary deployment model. The idea is that pre-built
derivation outputs are made available in some central repository accessible to the client
machines, such as a web server or installation CD-ROM. For instance, a component dis-
tributor or system administrator can pre-build components, then push (upload) them to a
server using PUT requests:
$ nix-push http://server/dist/ $(nix-instantiate foo.nix)
This will build the derivations in foo.nix, if necessary, and upload the outputs and all their dependencies
to the indicated site. For instance, the path /nix/store/mkmpxqr8d7f7...-firefox-1.0 will be
archived and compressed into an archive yq318j8lal09...-firefox.nar.bz2 and uploaded to
the server. Such a file is called a substitute, since a client machine can substitute it for a
build. The server provides a manifest of all available substitutes. An example entry in the
manifest is:¹²
{ StorePath: /nix/store/mkmpxqr8d7f7...-firefox-1.0
NarURL: http://server/dist/yq318j8lal09...-firefox.nar.bz2
Size: 11480169 }
which states that to create store path /nix/store/mkmpxqr8d7f7...-firefox-1.0, Nix can down-
load and unpack URL http://server/dist/yq318j8lal09...-firefox.nar.bz2.
To use these substitutes, client machines must register the fact that they are available
using the command nix-pull:
$ nix-pull http://server/dist
This does not actually download any of the substitutes; it merely registers their availability.
When the user subsequently installs a Nix expression and Nix, while building some store
path p, notices that it knows a substitute for p, it downloads and unpacks the substitute
instead of building from source.
Thus, from a user perspective, source deployment automatically optimises into binary
deployment. This is good: binary deployment can be considered a partial evaluation of
source deployment with respect to a specific platform, and such optimisations should be
transparent.
However, if the user has made modifications to parts of the Nix expression, it is possible
that some or all of the store paths will be different. In that case, the substitutes fail to
be applicable and Nix will build from source. Thus, binary deployment automatically
“degrades” back to source deployment.
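The fallback logic can be sketched as follows (a hypothetical helper, not Nix's implementation; the key point is that substitutes are registered per exact store path, which is what makes substitution safe):

```python
def realise(path: str, substitutes: dict, build_from_source) -> str:
    """Produce `path`: use a registered substitute when one exists for
    exactly this store path, otherwise fall back to building from source.

    Because store paths are content-addressed via hashes, a locally
    modified Nix expression yields a different path, for which no
    substitute is registered; the build then proceeds from source.
    """
    if path in substitutes:
        return "downloaded " + substitutes[path]
    return build_from_source(path)
```

A usage sketch: given a manifest mapping a Firefox store path to its .nar.bz2 URL, realising that exact path downloads the archive, while realising any other path invokes the source build.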
The transparent source/binary model therefore combines the flexibility of source de-
ployment systems such as Gentoo with the efficiency of binary deployment systems such
as RPM.
¹²This is somewhat simplified. The full details of nix-push manifests are in Section 7.3.
Of course, the deployment models described in Section 2.5 support transparent source/binary
deployment. For instance, a Nix channel contains not just a set of Nix expressions,
but also the URL of a manifest that clients can pull when they update from the channel.
Thus, subscribers to the channel automatically get binary deployment for those derivations
in the channel that have been pre-built.
Part II.
3. Deployment as Memory Management
The previous chapter introduced the Nix system. It obtains isolation and non-interference
between components using a Nix store that stores components under file names that contain
a unique cryptographic hash of the component. However, some of the underlying concepts
might seem a bit suspicious! For instance, storing components in an essentially flat, rigid
"address space" of components is very different from the way software is typically stored
in most operating systems.¹ And the hash scanning technique to identify dependencies
clearly cannot work in all circumstances.
The purpose of this chapter is to “derive” the basic design of the Nix store by relating
the main issues in software deployment to those in memory management in conventional
programming languages. As we shall see, conventional deployment tools treat the file sys-
tem as a chaotic, unstructured component store, similar to how an assembler programmer
would treat memory. In contrast, modern programming languages impose a certain disci-
pline on memory, such as rigidly defined object layouts and prohibitions against arbitrary
pointer formation, to enable features such as garbage collection and pointer safety. The
idea is that by establishing a mapping between notions in the two fields, solutions from
one field carry over to the other. In particular, the techniques used in conservative garbage
collection serve as a sort of apologia for the hash scanning approach used to find runtime
dependencies.
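As an illustrative sketch of that approach: scan the bytes of a built file for substrings that look like the 32-character hash parts of known store paths, just as a conservative collector scans words for things that look like pointers. The regular expression and helper name below are assumptions for illustration:

```python
import re

# Candidate hash parts: runs of 32 lowercase alphanumeric characters.
STORE_HASH = re.compile(rb"[0-9a-z]{32}")

def scan_for_references(contents: bytes, known_hashes: set) -> set:
    """Conservatively find which known store-path hashes occur in a file.

    Any byte sequence that matches a known hash counts as a reference,
    regardless of whether it sits in an ELF header, a script, or data.
    """
    found = set()
    for m in STORE_HASH.finditer(contents):
        h = m.group().decode()
        if h in known_hashes:
            found.add(h)
    return found
```

The scan is conservative in the same sense as conservative garbage collection: it may report a false reference if an unrelated 32-character string happens to equal a known hash, but it never misses a literal occurrence of one.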
3.1. What is a component?
It is useful to first say some things about what it is that we are deploying. Of course,
we deploy software (although in Chapter 9 on service deployment, we will also see some
things being deployed that cannot be called software proper), but what are the units of
deployment? That is, what do we call the smallest meaningful artifacts that are subject to
deployment? It is quite customary in some communities to call these packages, but this
term has the problem that it implies that the artifact is in packaged form. For instance, a
.RPM file is a package, but what do we call the corresponding installed object? In RPM
terminology, this is also a package, and so the term is ambiguous.
In the Component-Based Software Engineering (CBSE) community, the term compo-
nent is used for units of deployment. Like some other terms in computer science (such as
object), this term is overloaded almost to the point of uselessness. In fact, Szyperski [154,
Chapter 11] surveys fourteen different definitions. Some definitions are not meaningfully
distinguishable from the notion of a module, or are used to identify any code reuse mechanism.
Despite the multitude of definitions, I shall use the term anyway, because I believe that
the essential aspect that should be captured by a definition of components is deployment.
¹Indeed, no Linux distribution based on Nix will ever be compliant with the Linux Standards Base [82, 81]!
Otherwise, the notion of component does not essentially add anything not already provided
by concepts such as module, class, method, function, and other organisational structures
provided by programming languages. Components in CBSE have many concerns other
than deployment, such as interfaces, contracts, composition techniques, verification
of performance and other extra-functional requirements, and so on. However, a discussion
at CBSE-2005 revealed that packaging and deployment were the only aspects on
which there was a consensus that they should be provided by a component-based system.
In this thesis, I will use the following definition of “software component”:
• A software component is a software artifact that is subject to automatic composition.
It can require, and be required by, other components.
• A software component is a unit of deployment.
The first property is of course entailed by the etymology of the word component, except
for the stipulation that a composition can be established automatically. Thus, source code
fragments printed in a book, while composable through human intervention, are not com-
ponents. The second property implies that the purpose of the notion of a component is to
be the principal object of deployment.
This definition has some important implications. For instance, it does not say any-
thing about whether components are in “source” or “binary” form, since such a distinction
is rather technology-specific (e.g., what does it mean for interpreted languages such as
Python?). But since it has to be automatically composable, it must come with all the nec-
essary infrastructure to support automatic composition. For instance, mere source code is
not a component, but source code with the necessary meta-information—such as Make-
files, configuration and build scripts, deployment descriptors, and so on—is a component.
So a Python source file in itself is not a component. But it is when placed in an RPM pack-
age or combined with a Nix expression, since that meta-information makes the component
available for automatic composition.
A subtle point is what we mean by “software.” Of course, software is a specification that
can be executed automatically by a computer (possibly with the aid of other software, such
as an interpreter), but not every constituent part of a software system is executable; some
of it (such as data) is an adjunct to execution. This means that not all software components
need to contain executable code themselves, but are intended for ultimate composition with
an executable component that uses them. For instance, in a software system with an online
help feature, it might be useful to split off the online help into a separate component so that
it can be installed optionally.
The definition above can be considered the greatest common denominator of a number
of influential definitions. Szyperski [154, Section 4.1.5] gives the following definition:
“A software component is a unit of composition with contractually specified
interfaces and explicit context dependencies only. A software component can
be deployed independently and is subject to composition by third parties.”
The important aspects of this definition are the following:
• A component is a unit of independent deployment. Since it is a unit, it does not
have to support partial deployment—the component must be deployed in its entirety.
Since it is independently deployable, it should not cause interference with its target
environment, such as other components.
• Thus it must have explicit context dependencies only, since otherwise it would place
requirements (such as component dependencies) on its target environment that are
not formally specified by its deployment metadata.
• It has contractually specified interfaces, which enable it to be subject to third-party
composition, i.e., it is possible for other people’s components to use the component.
• It has no externally observable state. Different uses of the component over time
should not interfere with each other. Note that this does not mean that the code of
the component must have no side effects, i.e., should be written in a purely func-
tional language. It is perfectly fine for an execution involving the component, e.g.,
the classes and objects loaded from a component, to have side effects. But the com-
ponent should not modify its on-disk representation or other static resources in a way
that affects future uses of the component. This is exactly the case with Nix compo-
nents: they can be entirely impure dynamically, but their static representation—the
files on disk—never changes after they have been built.
These properties do not always hold for the things that one might want to deploy. For
instance, Mozilla Firefox 1.0 (unless specially prodded not to do so) wants to make modifi-
cations to some of its own files when started for the first time, violating the requirement of
having no externally observable state. This is bad because it makes security difficult (the
files in question should be writable by all potential users), and hinders proper identification
in the sense of configuration management (since the “same” component has different con-
tents over time). Likewise, many components are not intended for third-party composition,
but are rather specifically tied to certain other components and do not have a well-defined
interface. The reason why such components are separate, rather than combined into a sin-
gle component, is typically to allow optional functionality to be omitted, or for licensing
reasons (e.g., some clients not being entitled to access the full feature set of a system).
This thesis's definition differs from Szyperski's in a number of ways:
• It does not require explicit context dependencies; it merely states that components
can have dependencies. Many components have implicit dependencies—which is
what makes correct deployment so difficult. Of course, to ensure correct deploy-
ment, it is necessary to make implicit dependencies explicit.
• It does not require the possibility of third-party composition. As explained above,
many components are not in fact usefully reusable by third parties.
• It does not require contractually specified interfaces. It is enough for components to
be composable automatically, which requires some kind of formal interface (since
it can be used automatically), but this can be as weak as a symbol table in an ELF
library [160], or the existence of a named function in a Perl module.
Czarnecki and Eisenecker [40] have an even more minimalistic definition: components
are simply the “building blocks” of software systems that should be composable as easily
and flexibly as possible in order to maximise reuse and minimise duplication. However,
this definition has the problem that it makes a component nothing more than a general term
for reuse mechanisms, as stated above. This is intentional: the authors feel that component
is a “natural concept” for which it is hard to give a rigid definition. But a definition that is
overly broad is not very useful.
Meyer [119] defines a component as a “modular unit” that can be used by other compo-
nents, possesses an “official usage description” enabling its use by other components, and
is not “tied to any fixed set of clients”. The definition used here does not have the latter
two requirements (many components have poorly defined or missing usage descriptions,
or are specific to a particular client, as argued above).
So the definition of component used in this thesis is quite minimal. I do not mean to
imply that a stronger notion of component is not useful or appropriate in many cases. For
instance, component technologies that require strong contracts from components are very
valuable. But such kinds of “high level” composition aspects are not the subject of this
thesis. This thesis is about the “low level” aspect of storage of components: how can we
store components so that they do not interfere with each other, that their dependencies are
known, and so on? That is, it is about components in the file system.
Components in the file system In this thesis, I shall further restrict the notion of a
software component to things that have an independent representation in the file system,
i.e., a component consists of a set of files. Of course components do not have to exist as
discrete objects in the file system. They might be stored in a database (as suggested, e.g.,
by the Java Language Specification [79, Section 7.2.2]). But this is quite rare, so we will
not consider it.
Through the file system, components can require functionality from other components,
and they can provide it. Thus, components need to be plugged into each other, i.e., be
composed. There are many mechanisms by which a composition can be realised, such
as static linking, dynamic linking, inclusion of source files, and so on; and these are just
general classes of composition technologies. Concrete realisation mechanisms are legion:
Unix static libraries, ELF dynamic libraries, Java JAR files, C preprocessor directives,
.NET assemblies, Perl modules, configuration files, etc. These mechanisms also have
various binding times: some are typically used at build time, others at runtime, with the
more nebulous notions of packaging time and installation time thrown in for good measure.
But what they all have in common is that the composition must be established in the
file system. That is, since the components exist in the file system, to compose them,
the composition mechanism must be able to find them in the file system. This aspect of
composition—establishing the "physical" composition, so to speak—is often overlooked
in the component-based software engineering literature as well as in programming sys-
tems. This is unfortunate, since the difficulty in correctly and efficiently establishing and
managing the composition in the file system is the major source of deployment problems.
3.2. The file system as memory
In this section I recast the deployment problems identified in the introduction in terms of
concepts from the domain of memory management in programming languages. That we
can do this is not surprising: after all, file systems on the one hand and RAM on the other
hand are just two different levels in the storage hierarchy. Where programs manipulate
memory cells, deployment operations manipulate the file system. This analogy reveals that
the safeguards against abuse of memory applied by programming languages are absent in
deployment, and suggests that we can make deployment safe by imposing similar safeguards.
Components, as defined above, exist in the file system and can be accessed through
paths, sequences of file names that specify a traversal through the directory hierarchy,
such as /usr/bin/gcc. We can view a path as an address. Then a string representing a path
is a pointer, and accessing a file through a path is a pointer dereference. Thus, component
interference due to file overwriting can be viewed as an address collision problem: two
components occupy overlapping parts of the address space.
Furthermore, we can view components as representations of values or objects. Just as
objects in a programming language can have references to other objects, so components
can have references to other components. If dereferencing a pointer is not possible because
a file does not exist, we have the deployment equivalent of a dangling pointer.
A component A is dependent on a component B if the set of files that constitutes A
enables an execution involving A to dereference pointers to the files constituting B. It
merely enables a dereference of B (rather than necessarily requiring one),
since A itself might not be executed or even be executable; e.g., when A is a configuration
file containing a path to B, and this file is read by some other component C.
For example, suppose that we have an application that is dynamically linked against a
file /lib/libc.so.6, the GNU C library on Linux systems. This means that the executable image
of the application contains an instruction to the dynamic linker to load the file /lib/libc.so.6.
Thus, the application holds a pointer to that file, which is dereferenced when the application
is started. But if the file is missing, the deployed
application contains a dangling pointer. Such dangling pointers are the result of incomplete
deployment, and they must be prevented.
To prevent dangling pointers, we should consider the various ways through which a
component A can cause a dereference of a pointer to another component B. Since compo-
nents are analogous to objects, we can provide analogues to the ways in which a method
of a class can obtain and dereference a pointer to another object.
First, a pointer to B can be obtained and dereferenced by A at runtime. This is a form
of late binding, since the pointer is not passed in when it is built, but rather when it is
executed. This may happen through environment variables (such as the PATH program
search path on Windows and Unix), program arguments, function arguments (in the case
of a library), registry settings, user interaction, and so on. Conceptually, this is similar to
the following pseudo-Java:
class Foo {
    Foo() {}
    int run(Bar y) { return y.doIt(); }
}
We can consider the constructor method Foo() to be the build time of the objects of class
Foo, while the method run corresponds to the runtime. Thus any arguments passed to the
constructor are analogous to build-time dependencies, while arguments passed to run are
3. Deployment as Memory Management
analogous to runtime dependencies. In this case, an instance of Foo obtains a pointer to an
instance of Bar at runtime.
Second, a pointer to B can be obtained and dereferenced by A at build time. In this
case, a pointer is passed in at build-time and is completely “consumed”, meaning that B is
not needed subsequently. It can therefore no longer cause a dangling pointer. Examples
include pointers to static libraries or the compiler, which are not usually retained in the
result. This is comparable to a constructor that uses a pointer to another object to compute
some derived value but that does not store the pointer itself:
class Foo {
    int x;
    Foo(Bar y) { x = y.doIt(); }
    int run() { return x; }
}
Third, a pointer to B can be obtained by A at build time and dereferenced at runtime.
In this case, a pointer to B is passed to and saved in the build process that constructed A,
and is dereferenced during program execution. This is the notion of retained dependencies
described on page 23. It happens often with Unix-style dynamically linked libraries: the
build of an application stores the full path to the library in the RPATH of the application
binary (see page 23), which is used to locate the library at program startup. This is equiv-
alent to the constructor of an object storing a pointer that was passed in, which is later
dereferenced:
class Foo {
    Bar x;
    Foo(Bar y) { x = y; }
    int run() { return x.doIt(); }
}
Here, the execution of the constructor is similar to the construction of a component, and
the execution of method run() is similar to the use of a component.
Finally, and orthogonal to the previous methods, a pointer to B can be obtained using
pointer arithmetic. Since paths are represented as strings, any form of string manipulation
such as concatenation may be used to obtain new paths. If we have a pointer to /usr/bin,
then we may append the string gcc to obtain /usr/bin/gcc. Note that the names to be ap-
pended may be obtained by dereferencing directories, that is, reading their contents.
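As a concrete illustration (mine, not the thesis' tooling): in the file system, “pointer arithmetic” is ordinary string manipulation on paths.

```python
import os.path

# "Pointer arithmetic" on paths is plain string manipulation:
base = "/usr/bin"
derived = base + "/" + "gcc"            # concatenation yields a new pointer
assert derived == "/usr/bin/gcc"
assert os.path.join(base, "gcc") == derived

# New names can also be obtained by dereferencing a directory, i.e.
# reading its contents (left commented, since /usr/bin may not exist here):
# pointers = [os.path.join(base, name) for name in os.listdir(base)]
```

Nothing constrains which paths such manipulations may produce, which is precisely the problem discussed below.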
It follows from the above that it is hard to find the set of pointers that can be dereferenced
by a component. First, pointers may already be present in the source. Second, pointers
passed in at build-time may or may not be stored in the component. Obviously, we do not
want to distribute a compiler along with an application just because it was used to build
it—but other build-time components, such as dynamic libraries, should be. Finally, pointer
arithmetic can be used to obtain new pointers in uncontrolled ways. For correct software
deployment it is essential that no dangling pointers can occur. Thus, we need a method to
prevent these.
Figure 3.1 summarises the mapping of concepts between the domain of memory man-
agement in programming language implementation on the one hand, and deployment on
the other hand. (The third set of correspondences is first discussed in Section 3.4.)
Programming Language Domain               Deployment Domain

memory                                 ⇔  disk
objects (values)                       ⇔  components
addresses                              ⇔  path names
pointer dereference                    ⇔  I/O
pointer arithmetic                     ⇔  string operations
dangling pointer                       ⇔  reference to absent component
object graph                           ⇔  dependency graph

Kinds of dependencies

calling already constructed object
  with reference to other object       ⇔  runtime dependency through late binding
calling constructor with reference
  to other object, not stored          ⇔  build-time dependency
calling constructor with reference
  to other object, stored              ⇔  retained dependency

Pointer discipline vs. deployment styles

languages with total absence of
  pointer discipline (e.g., assembler) ⇔  typical Unix-style deployment
languages with enough pointer
  discipline to support conservative
  garbage collection (e.g., C, C++)    ⇔  Nix
languages with full pointer
  discipline (e.g., Java, Haskell)     ⇔  an as-yet unknown deployment style
                                          not enabled by contemporary
                                          operating systems

Figure 3.1.: Deployment / memory management analogies
3.3. Closures
As noted above, dangling pointers in components are a root cause of deployment failure.
Thus, to ensure successful software deployment, we must copy to the target system not just
the files that make up the component, but also all files to which it has pointers. Formally,
the set of files to be included in the distribution of a component is the closure of the set
of files in the component under the points-to relationship. Informally, a file that is part of
a component is characterised as a tuple (p, c) where p is the path where the file is stored,
and c is the contents required at that path. The contents of a path include not just the file
contents if p is a regular file, but also metadata such as access permissions. In addition,
the file may be a directory, in which case the contents include a mapping from directory
entry names to the contents of these directory entries. This will be made more precise in
Section 5.2.1.
Now the closure C′ of a set of files C is the smallest set C′ ⊇ C satisfying

    ∀(p, c) ∈ C′ : ∀p_ref ∈ references(c) : ∃(p′, c′) ∈ C′ : p_ref = p′
where references(c) denotes the set of pointers contained in the contents of c. That is, if a
path p is in the closure, then the paths to which it has pointers must be as well.
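This fixed point can be computed with a standard worklist algorithm. A minimal sketch in Python, where `references` is a hypothetical mapping from each path to the set of paths its contents point to:

```python
def closure(roots, references):
    """Smallest set containing `roots` that is closed under the points-to
    relation: if a path is in the set, everything it references is too."""
    result = set(roots)
    worklist = list(roots)
    while worklist:
        path = worklist.pop()
        for target in references.get(path, ()):
            if target not in result:
                result.add(target)
                worklist.append(target)
    return result

# Hypothetical dependency graph: an application referencing a library,
# which in turn references the C library; plus one unrelated component.
refs = {
    "/nix/store/aaa-app":    {"/nix/store/bbb-libfoo"},
    "/nix/store/bbb-libfoo": {"/nix/store/ccc-glibc"},
    "/nix/store/ddd-other":  set(),
}
c = closure({"/nix/store/aaa-app"}, refs)
assert c == {"/nix/store/aaa-app", "/nix/store/bbb-libfoo",
             "/nix/store/ccc-glibc"}
```

Distributing the set `c` rather than just the application guarantees, by construction, that no pointer in the deployed files dangles.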
By definition, a closure does not contain dangling pointers. A closure can therefore
be distributed correctly to and deployed on another system. Unfortunately, as mentioned
in Section 3.2, it is not possible in general to determine the set references(c) because of
retained dependencies and pointer arithmetic. In the next section, I propose a heuristic
approach to reliably determine this set.
3.4. A pointer discipline
In the previous section we saw that correct deployment without dangling pointers can be
achieved by deploying file system closures. Unfortunately, determining the complete set
of pointers is hard.
In the domain of programming languages, we see the same problem. Languages such as
C [106, 100] and C++ [153, 101] allow arbitrary pointer arithmetic (adding or subtracting
integers to pointers, or casting between integers and pointers). In addition, compilers for
these languages do not generally emit runtime information regarding record and stack lay-
outs that enable one to determine the full pointer graph. Among other things, this makes
it impossible to implement garbage collectors [104, 179] that can accurately distinguish
between garbage and live objects. Garbage collectors consist of two logical phases [179].
In the detection phase, the collector discovers which objects on the heap are “live,” i.e.,
reachable by following pointers from the stack, registers, and global variables. In the
reclamation phase, all objects that are not live (i.e., dead) are freed. To find live objects in
the detection phase, it must be clear what the pointers are. For instance, the collector must
know what words on the stack are pointers, and not, say, integers.
Other languages address this problem by imposing a pointer discipline: the layouts
of memory objects (including the positions of pointers) are known, and programs cannot
manipulate pointers arbitrarily. For example, Java does not permit casting between integers
and pointers, or direct pointer arithmetic. Along with runtime information of memory
layouts, this enables precise determination of the pointer graph.
For file systems the problem is that arbitrary pointer arithmetic is allowed and that we
do not know where pointers are stored in files. Certainly, if we restrict ourselves to compo-
nents consisting of certain kinds of files, such as Java class files, we can determine at least
part of the pointer graph statically, for instance by looking at the classes imported by a Java
class file. However, this information is still not sufficient due to the possibility of dynamic
class loading. Since the dependency information cannot be guaranteed to be complete and
because this technique is not generally applicable, this approach does not suffice.
The solution to proper dependency identification comes from conservative garbage col-
lection [15, 14]. This is a technique to provide garbage collection for languages that have
pointer arithmetic and no runtime memory layout information, such as C and C++. Conser-
vative garbage collection works by constructing a pointer graph during the detection phase
by assuming that anything that looks like a valid pointer, is a valid pointer. For instance,
if we see a bit pattern in memory corresponding to a pointer to some address p, and p is
in fact part of an allocated part of memory, the object containing p is considered live. Of
course, this assumption might be wrong: the bit pattern might simply encode an integer, a
floating point number, and so on. But conservative garbage collection errs on the side of
caution: a misclassification causes a possibly dead object to be considered live, preventing
it from being reclaimed. The worst that can happen as a result is that we might run out of
memory. Since misclassifications are rare, such cases are unlikely to occur2.
Since there is a correspondence between objects in memory and files in a file system, we
would like to borrow from conservative garbage collection techniques. That is, we want
to borrow the concept of conservatively scanning for pointers from the detection phase.
Specifically, we want to scan the contents of files (of which we do not know the layout),
and if we see a string of characters that looks like a file name, we assume that the file being
scanned has a pointer to the file indicated by the name. But there are two problems with
this idea. First, there might be many false positives: the fact that the strings usr, bin and
gcc occur in a component does not necessarily imply that the component is dependent
on /usr/bin/gcc. But more importantly, arbitrary pointer arithmetic makes it possible that
even files that are not referenced, might still be dereferenced at runtime through pointers
computed at runtime.
There is a reason that conservative garbage collection works in languages such as C—
namely, that they actually do have a minimal pointer discipline. For instance, pointer
arithmetic is not allowed to increment or decrement a pointer beyond the boundaries of
the object that it originally pointed to. Of course, the compiler has no way to verify the
validity of such arithmetic, but programmers are well aware of the fact that a C fragment
char * p = malloc(100);
char * q = p + 200; /* ouch! */
printf("%c", *q);
is quite illegal. Thus, correct C programs will work with a conservative garbage collector.
But even this minimal pointer discipline is absent in general component storage in the
file system! Tools can perform arbitrary pointer arithmetic on paths to produce pointers to
any file in the file system. Most tools do not, but even limited pointer arithmetic is a problem. For
instance, once the C compiler has a pointer to /usr/include, it will happily dereference that
directory and produce pointers to all the files in it. In a way, how we store components
in the file system in conventional deployment systems is reminiscent of programming in
assembler—there is perfect freedom, and because of that freedom, all bets are off and we
can make no statements about the structure of memory objects (see the bottom third of
Figure 3.1).
So we have to impose a discipline on the storage of components. There are two aspects
to this:
• Pointers have to be shaped in such a way that they are reliably recognisable. Short
strings such as bin offer too much opportunity for misdetection.
• Components have to be separated from each other. The constituent files of multiple
components should not be mixed. Otherwise a tool can “hop” from one component
to another through pointer arithmetic.
2But note that a system using conservative garbage collection can be vulnerable to denial-of-service attacks
if an attacker has an opportunity to feed it data (e.g., strings) that contains many bit patterns that might be
misidentified as pointers.
The Nix store introduced in Section 2.1 satisfies both requirements. Components are
stored in isolation from each other in paths such as /nix/store/bwacc7a5c5n3...-hello-2.1.1,
which include long, distinctive strings such as bwacc7a5c5n3.... The latter is suitable for
pointer scanning, contrary to, say, hello. Thus, if we find the ASCII string bwacc7a5c5n3...
in a different component, we can conclude that there is a dependency; and if we don’t find
the string, we can conclude that no such dependency exists between the two components.
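Conservative scanning for such hash parts amounts to a substring search over a file's raw bytes. A sketch (names and data are hypothetical, not the actual Nix scanner):

```python
def find_references(data, known_hashes):
    """Conservatively report every known hash part that occurs in `data`.
    Anything that looks like a pointer is treated as a pointer."""
    return {h for h in known_hashes if h.encode("ascii") in data}

known = {"bwacc7a5c5n3", "0c4p8nkjyhqx"}
# A binary fragment embedding a full store path, e.g. in an RPATH:
blob = b"\x7fELF..../nix/store/bwacc7a5c5n3-hello-2.1.1/lib/libhello.so\x00"
assert find_references(blob, known) == {"bwacc7a5c5n3"}
```

Because the hash strings are long and random-looking, a match almost certainly indicates a genuine reference rather than an accidental byte pattern.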
As noted in Section 3.2, the build of a component may retain a pointer to another com-
ponent, thus propagating a dependency to later points on the deployment timeline. In con-
ventional deployment systems, these dependencies have to be specified separately, with all
the risks inherent in manual specification. By analysing those files of a component that are
part of a distribution (in source or binary form) or of an installation, we can automatically
detect timeline dependencies, such as build-time and runtime dependencies.
Pointer hiding What are the risks of this approach, i.e., when does it fail to detect depen-
dencies? It has exactly the same limitation as conservative garbage collection in program-
ming languages, namely, the failure to detect pointers due to pointer hiding. Pointer hiding
occurs when a pointer is encoded in such a way that it is not recognised by the scanner (a
false negative). For example, in C we can do the following:
char * p = malloc(...);
unsigned int x = ~((unsigned int) p); /* invert all bits */
p = 0;
... /* run the collector */
p = (char *) ~x;
That is, we allocate a pointer, then cast it to an integer, flip all the bits in the integer, and
clear the original pointer. Later, we invert the integer again, and cast it back to a pointer,
thus recovering the original pointer. However, if the garbage collector runs in between, it
will not be able to detect that the referenced memory object is alive, since x does not look
like a pointer to that object. Likewise, the collector can be fooled by writing a pointer to
disk and reading it back later.
With components in the file system, there are similar risks3. If the collector searches for
ASCII representations of the hash parts of component names, then anything that causes
those hash parts to be not represented as contiguous ASCII strings will break the detec-
tion. Obviously, flipping the bit representation of the ASCII string will accomplish that,
but this is unlikely to happen in real software. More likely is that the file name will be
split in some way. Path names are commonly split at component boundaries: the path
/nix/store/bwacc7a5c5n3...-hello-2.1.1 might be represented as the list ["nix", "store",
"bwacc7a5c5n3...-hello-2.1.1"]. This is why we scan for just the hash part, not the whole store path. Tools
are unlikely to split inside path components since substrings of path components have no
general semantic meaning.
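This can be illustrated directly (a sketch, not the actual scanner): splitting a path at separators hides the full path, but the hash part survives intact inside one component.

```python
hash_part = "bwacc7a5c5n3"
path = "/nix/store/" + hash_part + "-hello-2.1.1"
parts = path.split("/")[1:]     # ['nix', 'store', 'bwacc7a5c5n3-hello-2.1.1']
serialized = " ".join(parts)    # the path no longer occurs contiguously

assert path not in serialized   # scanning for the whole store path fails
assert hash_part in serialized  # scanning for the hash part still works
```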
More threatening is the use of a representation other than ASCII. For instance, if paths
are represented in UTF-16 [34, Section 2.5], Nix’s scanner will not find them. Of course,
3In all fairness, the risks are potentially a bit bigger in the file system than in C. The C standard leaves the
semantics of the code example above implementation-defined. So most things that can cause pointer hiding
are not used by educated C programmers in general. On the plus side, the chance of a false positive is
practically non-existent.
the scanner can easily be adapted to support such encodings. A concern that is harder to
address is compressed executables. The best solution here is not to use such facilities if
they are available.
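The encoding concern can be demonstrated concretely (an illustration): in UTF-16 a NUL byte follows each ASCII character, so an ASCII substring search misses the reference.

```python
hash_part = "bwacc7a5c5n3"
ascii_bytes = hash_part.encode("ascii")
utf16_bytes = "/nix/store/bwacc7a5c5n3-hello-2.1.1".encode("utf-16-le")

# b'b\x00w\x00a\x00...' — the ASCII bytes are no longer contiguous:
assert ascii_bytes not in utf16_bytes
# whereas UTF-8 leaves ASCII text byte-for-byte intact:
assert ascii_bytes in hash_part.encode("utf-8")
```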
Ultimately, how well the scanning approach works is an empirical question. As we shall
see in Chapter 7, scanning works very well in practice, as evidenced by the large set of
components in the Nix Packages collection, which have not yielded a single example of
a hidden pointer. Admittedly, this result may be dependent on the characteristics of the
components being deployed; namely, they are Unix components, which do not typically
feature compressed executables, and store filenames almost invariably in plain ASCII.
3.5. Persistence
The technique of pointer scanning, as described in the previous section, solves the problem
of reliable identification of component dependencies. Here I describe a solution to the
problem of component interference, which occurs when two components occupy the same
addresses in the file system. When we deploy software by copying a file system closure to
another system, we run the risk of overwriting other software. The underlying problem is
similar to that in the notion of persistence in programming languages, which is essentially
the migration of data from one address space to another (such as a later invocation of a
process). We cannot simply dump the data of one address space and reload them at the
same addresses in the other address space, since those may already be occupied (or may
be invalid). This problem is solved by serialising the data, that is, by writing it to disk in a
format that abstracts over memory locations.
Unfortunately, it is hard to apply an analogue of serialisation to software deployment,
because it requires changing pointers, which may not be reliably possible in files4. For
instance, a tempting approach is to rename the files in a closure to addresses not existing
on the target system. To do this, we also have to change the corresponding pointers in the
files. However, patching files in such a manner might cause the affected components to
break, e.g., due to internal checksums on files being invalidated in the process.
Thus, we should choose addresses in such a way as to minimise the chance of address
collision. To make our scanning approach work, we already need long, recognisable ele-
ments in these file names, such as the base-32 representations of 160-bit values used in Nix.
Not every selection of values prevents collisions: e.g., using a combination of the name
and version number of a component is insufficient, since there frequently are incompatible
instances even for the same version of a component.
Another approach is to select random addresses whenever we build a component. This
works, but it is extremely inefficient; components that are functionally equal would obtain
different addresses on every build, and therefore might be stored many times on the same
system. That is, there is a complete lack of sharing: equal components are not stored at
the same address.
Hence, we observe a tension between the desire for sharing on the one hand, and the
avoidance of collisions on the other hand. The solution is another notion from memory
management. Maximal sharing [166] is the property of a storage system that two values
4But if we assume that we can rewrite pointers, then we can do something very similar to serialisation. The
implications of this assumption (which turns out to be quite reasonable) are explored in Chapter 6.
occupy the same address in memory, if and only if they are equal (under some notion of
equality). This minimises memory usage in most cases.
The notion of maximal sharing is also applicable to deployment. We define two compo-
nents to be equal if and only if the inputs to their builds are equal. The inputs of a build
include any file system addresses passed to it, and aspects like the processor and operating
system on which it is performed. We can then use a cryptographic hash [145] of these
inputs as the recognisable part of the file name of a component. Cryptographic hashes
are used because they have good collision resistance, making the chance of a collision
negligible. In essence, hashes thus act as unique “product codes” for components. This
is somewhat similar to global unique identifiers in COM [16], except that these apply to
interfaces, not implementations, and are not computed deterministically. In summary, this
approach solves the problem of component interference at local sites and between sites
by imposing a single global address space on components. The implementation of this
approach is discussed in Chapter 5.
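The input-hashing scheme can be sketched as follows. This is a simplified illustration under stated assumptions: hex output instead of Nix's base-32 representation, SHA-1 over a naive serialisation, and hypothetical input names.

```python
import hashlib

def store_name(name, inputs):
    """Derive a store path from a component's build inputs.
    Equal inputs yield equal addresses (sharing); different inputs
    yield different addresses (no interference)."""
    serialized = repr(sorted(inputs.items())).encode("utf-8")
    h = hashlib.sha1(serialized).hexdigest()   # SHA-1: a 160-bit hash
    return "/nix/store/%s-%s" % (h[:12], name)

a = store_name("hello-2.1.1", {"system": "i686-linux", "sslSupport": True})
b = store_name("hello-2.1.1", {"system": "i686-linux", "sslSupport": True})
c = store_name("hello-2.1.1", {"system": "i686-linux", "sslSupport": False})

assert a == b   # maximal sharing: equal inputs map to the same address
assert a != c   # different inputs cannot collide in the file system
```

Collision resistance of the hash is what lets this single global address space work: distinct builds get distinct names with negligible probability of collision.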
4. The Nix Expression Language
This chapter describes the Nix expression language, the formalism used to describe compo-
nents and compositions. It first motivates why a lazy functional language is an appropriate
choice. It then gives a full description of the syntax and semantics of the language. Fi-
nally, it discusses some of the more interesting aspects of the implementation of the Nix
expression evaluator.
4.1. Motivation
The Nix expression language is a simple, untyped, purely functional language. It is not a
general purpose programming language. Its only purpose is to describe components and
compositions. A component is created through the derivation primitive operation, which
accepts all information necessary to build the component. This includes its dependencies,
which are other derivations or sources.
Purely functional language A purely functional language gives us a clean component
description formalism. The notion that components can be produced by functions that
accept values for the variation points in the component, is much more flexible than, say, a
language that binds such variation points using global variables.
Consider the ATerm library [166], which is a small library for term manipulation. It has
several build-time options, in particular an option that specifies whether the domain feature
“maximal sharing” is enabled. This setting is vitally important, as it completely changes
the semantics of the library. Thus a build with maximal sharing cannot be substituted by
a build without maximal sharing, and vice versa. Now suppose that we have a component
consisting of two programs, foo and bar, that require the ATerm library with and without
maximal sharing respectively. In a purely functional language, this is easy to model. We
just make a function for the ATerm library:
aterm = {maximalSharing}: derivation {...};
We can now call this function twice with different values, in the derivations of foo and bar:

foo = derivation { ...
  buildInputs = [ (aterm {maximalSharing = true;}) ];
};

bar = derivation { ...
  buildInputs = [ (aterm {maximalSharing = false;}) ];
};
Imagine that we had a formalism that used global variables to bind variation points
such as maximalSharing. If the language were a Makefile-like declarative formalism, then
it would be quite hard to implement this example. Makefiles [56] are a formalism to
describe the construction of (small-grained) components. Variability in a component is
typically expressed by setting variables, e.g.,
MAX_SHARING = 1 # or 0
aterm: ...
foo: aterm
bar: aterm
It is not clear how to build foo and bar in both variants concurrently. In Make, it is typ-
ically very hard to build a component in multiple configurations that exist side-by-side.
If it is done at all, it is usually accomplished through ad hoc tricks such as calling Make
recursively with different flags, giving the build result a different name for each call.
If the formalism were imperative, then it would be possible. Such a style might look
like this:
maxSharing = true;
aterm(); foo();
maxSharing = false;
aterm(); bar();
But this has the same problems that imperative programming has in general. Since the
order in which statements are executed matters, it is hard to see exactly what goes into a
build function such as aterm(). Of course, in a trivial example like this, it is doable. But
once we get many functions, variables, and potential control-flow paths, it may become
very hard to comprehend what is happening. Note that this is the case even if variables are
not global. For instance, maxSharing could be a field of the aterm object. But even then
the execution order frustrates comprehension.
Interestingly, most true component-level formalisms do not specify compositions at all,
just components.
(With “component-level” I mean that they work at the level of large-
grained components, like RPM spec files do, rather than at the level of small-grained com-
ponents, like Makefiles.) An RPM spec file for instance specifies a component build action,
which can include variables that can be set when the rpmbuild tool is called, but there is no
way for one RPM spec file to declare that it needs the result of another RPM spec file with
certain variables set to certain values. In any case RPM will not recursively build those
dependencies, let alone with the desired variable values (since it has no way to prevent
collisions between the various builds).
Other essentially functional build formalisms are Amake and Odin, discussed in Sec-
tion 10.3.
Laziness A lazy language only evaluates values when they are needed. This is a very
useful property for a component description language. Here are some concrete examples
of why this is useful.
It is possible to define large sets of components concurrently (i.e., in a single file or ex-
pression), without having to build them all. For instance, the Nix expression pkgs/system-
4.1. Motivation
/all-packages-generic.nix in the Nix Packages collection returns a large attribute set of
rec {
  stdenv = derivation { ... };
  glibc = derivation { ... };
  gcc = derivation { ... };
  hello = derivation { ... };
  subversion = derivation { ... };
  firefox = derivation { ... };
}
and hundreds more. Since the language is lazy, the right-hand sides of the attributes will
not be evaluated until they are actually needed, if at all. For instance, if we install any
specific component from this set, say, nix-env -i hello, then only the hello value will be
evaluated (although it may in turn require other values to be evaluated, such as stdenv).
Laziness also allows greater efficiency if only parts of a complex data structure are
needed. Take the operation nix-env -qa, which prints the name attribute of all top-level
derivations in a Nix expression:
$ nix-env -f ./pkgs/system/i686-linux.nix -qa
This operation is quite fast: it takes around 0.6 seconds on an Athlon XP 2600 on an
expression containing 417 top-level derivations. But the derivation calls for those 417
derivations potentially need to do a lot of work, as we will see in Section 5.4: they have to
copy sources to the Nix store, compute cryptographic hashes, and so on. But that work is
actually attached to two attributes returned from derivation, namely drvPath and outPath,
which compute the derivation and output store paths. As long as those attributes are not
used, they are not computed. Above, this was the case, since nix-env -qa only uses the
name attribute. But if we force the computation of either of those two attributes:
$ nix-env -f ./pkgs/system/i686-linux.nix -qa --drv-path
acrobat-reader-7.0 /nix/store/1fdl...-acrobat-reader-7.0.drv
then the operation takes much longer: 2.7 seconds.
Laziness allows arguments to be passed to functions even when they may never be used. In the Subver-
sion example (Figure 2.9), the Subversion function receives dependencies such as openssl
and httpd that are passed on to the derivation only if the corresponding feature is enabled:
{ ..., sslSupport, openssl, ... }:

derivation {
  ...
  openssl = if sslSupport then openssl else null;
  ...
}
That is, the derivation’s openssl attribute is set to null if SSL support is disabled, regardless
of the value of the openssl function parameter. This allows us to unconditionally pass an
instance of openssl in the function call in all-packages-generic.nix:
subversion = import .../subversion {
  inherit openssl;
  sslSupport = false; # set to `true' if you want SSL support
};
Thanks to laziness, OpenSSL will only be built if SSL support is enabled, despite the fact
that it is always passed as an argument. This simplifies the call sites. Without laziness, the
caller would have the responsibility to ensure that no unnecessary dependencies are passed
to the derivation, thus breaking abstraction.
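In a strict language, the same effect requires explicit thunks at every call site. A rough Python analogue (hypothetical names; this is not Nix's implementation) in which the expensive dependency is forced only when the feature is enabled:

```python
class Thunk:
    """Delay a computation until its value is first demanded (forced)."""
    def __init__(self, compute):
        self.compute = compute
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.compute()
            self.forced = True
        return self.value

build_log = []

def build_openssl():
    build_log.append("openssl")          # record that a build happened
    return "openssl-0.9.7d"

openssl = Thunk(build_openssl)

def subversion(ssl_support, openssl):
    # openssl is always passed, but only forced if SSL support is enabled:
    return {"openssl": openssl.force() if ssl_support else None}

drv = subversion(False, openssl)
assert build_log == []                   # OpenSSL was never built
drv = subversion(True, openssl)
assert build_log == ["openssl"]          # built exactly once, on demand
```

A lazy language makes every binding behave like `Thunk` implicitly, which is why the Nix call sites can stay unconditional.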
4.2. Syntax
This section presents the syntax of the Nix expression language using the Syntax Definition
Formalism [175, 90], which has the following useful properties:
• It is scannerless: the lexical syntax and the high-level “context-free syntax” are
specified in the same formalism. Thus, no separate lexical scanner is necessary. The
result is a single context-free grammar.
• It supports generalised LR (GLR) parsing. Most context-free parsers (e.g., LL(k)
and LR(k) parsers, for fixed k) suffer from bounded look-ahead: they must choose
between different productions on the basis of a fixed number of tokens. This is
inconvenient for language implementors, since they must deal with the resulting
parse conflicts. For instance, the input fragment "{x" could be the start of an attribute set
(e.g., {x=123;}) or of a function definition (e.g., {x}: x)1. This requires the programmer
to introduce additional production rules to resolve the conflict. Since generalised LR
parsing has unbounded look-ahead, such grammar transformations are unnecessary.
Another advantage of GLR is that since it supports arbitrary context-free grammars
(including ambiguous ones), it allows grammars to be composed. This makes it easy
to extend programming languages [19].
• It is a declarative formalism that specifies a grammar separate from any particular
programming language (contrary to, say, Yacc [103]). This is the main reason why
I use SDF in this chapter: it allows the entire language to be described in a concise,
readable and directly usable formalism, with little meta-syntactic overhead.
1Actually, this is the only shift/reduce conflict in the language, so generalised LR parsing is not that important
here, except that it enables scannerless parsing.
The current Nix implementation unfortunately does not use the SDF parser (sglr) for
performance reasons (scannerless parsing in particular is expensive). Rather, it uses Bi-
son [66], which is the GNU implementation of Yacc [103]. It recently added support for
GLR parsing, but still requires a separate lexical scanner. The Flex lexical scanner gener-
ator [130], an implementation of Lex [113], is used for lexical analysis.
The following is a brief overview of SDF necessary to understand the Nix grammar. A
full specification of SDF can be found in [175]. An SDF grammar contains production
rules of the form
symbols → nt
where symbols is a sequence of terminals and non-terminals, and nt is the non-terminal pro-
duced by the production rule. This notation is somewhat untraditional: most context-free
grammar formalisms are written the other way around (nt ← symbols or nt → symbols).
The left-hand side can contain various types of syntactic sugar:
• Regular expressions to denote lexical elements.
• The usual EBNF constructs: repetition (S*), optionals (S?), and grouping ((S)).
SDF has two kinds of productions: those defining the lexical syntax, and those defining
the context-free syntax. Lexical syntax is used to define the tokens of the language. An
example of the first is the definition of the non-terminal for identifiers:
lexical syntax
[a-zA-Z\_][a-zA-Z0-9\_\']* -> Id
This defines identifiers using a regular expression: they start with a letter or underscore,
and are followed by zero or more letters, digits, underscores, and apostrophes (').
Layout—whitespace and comments that can occur anywhere between tokens—is also
defined using lexical syntax productions. For example, the production
[\ \t\n] -> LAYOUT
says that spaces, tabs and newlines are all layout. There is a built-in production that says
that sequences of layout are also layout. The non-terminal LAYOUT has a special status,
as it is automatically inserted between all symbols in the productions of the context-free
syntax. For example, the following context-free syntax2 rule defines a production for a simple
expression language:
context-free syntax
Expr "+" Expr -> Expr
This rule is (essentially) desugared to
Expr LAYOUT? "+" LAYOUT? Expr -> Expr
2The term “context-free syntax” in SDF is a bit confusing, as both its “lexical syntax” and “context-free syntax”
are combined to a single context-free grammar.
SDF allows priorities and associativities to be defined to disambiguate otherwise am-
biguous grammars. The following example defines left-associative productions for addi-
tion and multiplication in a simple expression language, and then defines that multiplica-
tion takes precedence over addition:
context-free syntax
Expr "+" Expr -> Expr {left}
Expr "*" Expr -> Expr {left}
context-free priorities
Expr "*" Expr -> Expr
> Expr "+" Expr -> Expr
Finally, there are typically all sorts of ambiguities between the lexical elements of a lan-
guage. For instance, the string if can be parsed as the identifier "if", a sequence of identifiers
i and f, and the keyword if (assuming that the language has a production containing such a
keyword). The SDF features of rejects and follow restrictions enforce the correct parsing.
This declaration says that if should not be parsed as an identifier:
lexical syntax
"if" -> Id {reject}
Technically, this defines a production such that whenever it matches part of the input, any
other Id production for those same characters is rejected. To prevent
a sequence of identifier characters from being interpreted as a sequence of identifiers, a
follow restriction can be used:
lexical restrictions
Id -/- [a-zA-Z0-9\_\']
This says that an Id cannot be followed by a character matching the given character class.
Thus, xy cannot be parsed as the identifiers x and y, since x is followed by a letter.
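The combined effect of rejects and follow restrictions can be imitated with longest-match ("maximal munch") tokenisation plus a keyword table. A small illustrative Python sketch, not part of any real SDF tooling:

```python
import re

# Longest-match tokenisation gives the effect of the follow restriction:
# "xy" is one identifier, not two. Keyword rejection is a separate check.
ID_RE = re.compile(r"[a-zA-Z_][a-zA-Z0-9_']*")
KEYWORDS = {"rec", "let", "if", "then", "else", "assert", "with", "inherit"}

def lex_word(s):
    m = ID_RE.match(s)
    word = m.group(0)          # maximal munch: consume as many id chars as possible
    kind = "keyword" if word in KEYWORDS else "identifier"
    return kind, word

assert lex_word("xy") == ("identifier", "xy")   # not two identifiers x, y
assert lex_word("if") == ("keyword", "if")      # "if" rejected as an identifier
assert lex_word("ifx") == ("identifier", "ifx") # follow restriction: not `if` + `x`
```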
4.2.1. Lexical syntax
We start with the lexical syntax of Nix expressions. It is divided into two parts: the defini-
tion of the layout, and of the actual information-carrying tokens of the language.
Figure 4.1 shows the SDF module that defines the layout of the language. (SDF gram-
mars are modular in order to allow the different aspects of a language to be defined sepa-
rately, and to make it easier to combine grammars.) Layout consists of whitespace (spaces,
tabs and newlines) and comments. It is worth noting that the language supports two styles
of comments:
(x: x) # Single-line comments like this
"foo" + /* Multi-line
comments like this */ "bar"
Figure 4.2 defines the remainder of the lexical syntax of the language. The token types
defined here are identifiers (which are exactly as described in the Id example above), and
various kinds of constants defined using regular expressions. These are:
• Natural numbers (Int).
module Nix-Layout
sorts HashComment Asterisk Comment
lexical syntax
[\ \t\n] -> LAYOUT
HashComment -> LAYOUT
Comment -> LAYOUT
"#" ~[\n]* -> HashComment
"/*" ( ~[\*] | Asterisk )* "*/" -> Comment
[\*] -> Asterisk
lexical restrictions
Asterisk -/- [\/]
HashComment -/- ~[\n]
context-free restrictions
LAYOUT? -/- [\ \t\n]
Figure 4.1.: Layout in Nix expressions
• Strings (Str) enclosed between double quotes. Characters are escaped by prefixing
them with a backslash.
• Paths (Path) are essentially sequences of characters with at least one forward slash in
them. To be precise, they are sequences of path components separated by slashes. A
path component consists of one or more letters, digits, and some other characters3.
The initial path component, i.e., the one before the first slash, may be omitted. If it
is omitted, we have an absolute path, e.g., /home/alice/.profile. If it is present, we
have a relative path, e.g., ../../foo/
• URIs (Uri) are a convenience for the specification of URIs in, e.g., calls to the func-
tion fetchurl. The advantage of having URIs as a language construct over simply us-
ing strings is that the user gets syntax checking, and can omit the enclosing quotes.
The regular expression for URIs used here follows [12, Appendix A].
The remainder of Figure 4.2 is concerned with rejects and follow restrictions. Note
that the keywords of the language (used in the context-free syntax below) must be given
twice, first to reject them as identifiers, and second to prevent them from being followed
by identifier characters (e.g., ifx should not be parsed as the keyword if followed by the
variable x). That we have to specify keywords twice is a current infelicity of SDF.
Also note that Booleans (true and false) are not specially defined here. Rather, true and
false are parsed as identifiers and interpreted as built-in nullary functions during execution,
as discussed in the semantics below.
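For illustration, the token regular expressions of Figure 4.2 can be transliterated into Python, anchored with fullmatch and tried in a fixed order. This is only a sketch: the real grammar resolves overlaps with priorities and follow restrictions, not ordering.

```python
import re

# Regexes transliterated from Figure 4.2 (anchored with fullmatch);
# tried in order, so Uri and Path take priority over Id here.
TOKENS = [
    ("Uri",  r"[a-zA-Z][a-zA-Z0-9+\-.]*:[a-zA-Z0-9%/?:@&=+$,\-_.!~*']+"),
    ("Path", r"[a-zA-Z0-9._\-+]*(/[a-zA-Z0-9._\-+]+)+"),
    ("Int",  r"[0-9]+"),
    ("Id",   r"[a-zA-Z_][a-zA-Z0-9_']*"),
]

def classify(s):
    for kind, rx in TOKENS:
        if re.fullmatch(rx, s):
            return kind
    return None

assert classify("/home/alice/.profile") == "Path"   # absolute path
assert classify("../../foo") == "Path"              # relative path
assert classify("123") == "Int"
assert classify("x'") == "Id"
```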
4.2.2. Context-free syntax
Figure 4.3 shows the context-free syntax of the language. A parse of a Nix expression
must match an Expr non-terminal 34 . Almost all productions here produce Exprs. In that
3So currently there is no way to define paths that contain characters not in this set.
module Nix-Lexicals
sorts Id Int Str Path Uri
lexical syntax
[a-zA-Z\_][a-zA-Z0-9\_\']* -> Id
"rec" | "let" | "if" | "then" | "else" |
"assert" | "with" | "inherit" -> Id {reject}
[0-9]+ -> Int
"\"" ~[\n\"]* "\"" -> Str
[a-zA-Z0-9\.\_\-\+]* ("/"[a-zA-Z0-9\.\_\-\+]+)+ -> Path
[a-zA-Z] [a-zA-Z0-9\+\-\.]* ":"
[a-zA-Z0-9\%\/\?\:\@\&\=\+\$\,\-\_\.\!\~\*\']+ -> Uri
lexical restrictions
Id -/- [a-zA-Z0-9\_\']
Int -/- [0-9]
Path -/- [a-zA-Z0-9\.\_\-\+\/]
Uri -/- [a-zA-Z0-9\%\/\?\:\@\&\=\+\$\,\-\_\.\!\~\*\']
"rec" "let" "if" "then" "else"
"assert" "with" "inherit" -/- [A-Za-z0-9\_\']
Figure 4.2.: Lexical syntax of Nix expressions
respect the Nix expression grammar is quite flat: for instance, there is no separate gram-
matical level for concepts such as modules, function definitions, and so on. Everything is
an expression.
The first production just injects the non-terminals for constants into Expr 35 . Of course,
expressions can be enclosed in parentheses to override the normal precedence rules 36 .
The language has two types of functions. The first 37 takes a single argument. For
instance, the (anonymous) identity function can be defined as follows:
x: x
Of course, this style of function is just a plain λ -abstraction from the λ -calculus [10], as we
will see in the discussion of the semantics below. Though this style only allows functions
with a single argument, since this is a functional language we can still define (in a sense)
functions with multiple arguments, e.g.,
x: y: x + y
which is a function taking an argument x that returns another function that accepts an
argument y.
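In Python the same curried style is a closure returning a closure (illustrative only):

```python
# Nix's `x: y: x + y` corresponds to a function returning a function:
def add(x):
    return lambda y: x + y

assert add("foo")("bar") == "foobar"   # application one argument at a time
inc = add(1)                           # partial application falls out for free
assert inc(41) == 42
```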
The second style of function definition 38 , which we have seen in Section 2.2, is more
important in this language. It takes an attribute set and binds the attributes defined therein
to local variables. Thus,
module Nix-Exprs
imports Nix-Lexicals
sorts Expr Formal Bind ExprList
context-free start-symbols Expr
context-free syntax
Id | Int | Str | Uri | Path -> Expr
"(" Expr ")" -> Expr
Id ":" Expr -> Expr
"{" {Formal ","}* "}" ":" Expr -> Expr
Id -> Formal
Id "?" Expr -> Formal
Expr Expr -> Expr
"assert" Expr ";" Expr -> Expr
"with" Expr ";" Expr -> Expr
"{" Bind* "}" -> Expr
"rec" "{" Bind* "}" -> Expr
"let" "{" Bind* "}" -> Expr
Expr "." Id -> Expr
Id "=" Expr ";" -> Bind
"inherit" ("(" Expr ")")? Id* ";" -> Bind
"[" ExprList "]" -> Expr
"" -> ExprList
Expr ExprList -> ExprList
"if" Expr "then" Expr "else" Expr -> Expr
Figure 4.3.: Context-free syntax of Nix expressions
{x, y}: x + y
declares a function that accepts an attribute set with attributes x and y (and nothing else),
and the expression
({x, y}: x + y) {y = "bar"; x = "foo";}
yields "foobar". The formal (expected) argument can have a default value, allowing it to be
omitted. So
({x, y ? "bar"}: x + y) {x = "foo";}
also evaluates to "foobar".
Following the tradition of most functional languages, function calls are written by juxta-
position, i.e., f x instead of, say, f(x). Whether the absence of an argument delimiter makes
function calls more or less readable is a matter of controversy. However, most calls in Nix
expressions pass argument sets, which are enclosed in curly braces and therefore delimited
anyway (i.e., f {attrs}).
Assertions have been discussed in Section 2.2 (page 33). With-expressions 40 allow all
attributes in an attribute set to be added to the lexical scope. For example,
with {y = "bar"; x = "foo";}; x + y
evaluates to "foobar"; the variables x and y defined in the attribute set are in scope in the
expression x + y. This construct is primarily useful in conjunction with import, allowing
values to be “included” from an attribute set defined in a file:
with (import ./definitions.nix); x + y
In essence, this enables a “conventional” module system: commonly used definitions can
be placed in a separate file, and imported into the lexical scope of an expression in another
file. But note that contrary to most module systems, with and import enable a “policy-free”
module system: for instance, imports can occur in any expression, not just at top level.
The various types of attribute set definitions 41 —plain, recursive, and lets—are discussed
in more detail below. All three contain lists of attributes enclosed in curly braces.
There are two kinds of attribute definitions 42 : plain name/value pairs, and inherits, which
copy the value of an attribute from the surrounding lexical scope or an optional expression.
Lists 43 are enclosed in square brackets, and the list elements are juxtaposed, not sepa-
rated by an explicit delimiter. For instance,
[1 2 3]
is a list of three elements. More importantly,
[a b c]
is also a list of three elements, not a call to function a with arguments b and c4.
Finally, the language has conditionals 44 . There is only an if...then...else construct and
no if...then, since the latter does not make sense in a functional setting as it leaves the
value of the expression undefined in case of a false condition. Assertions already serve this
Expr "==" Expr -> Expr {non-assoc}
Expr "!=" Expr -> Expr {non-assoc}
"!" Expr -> Expr
Expr "&&" Expr -> Expr {right}
Expr "||" Expr -> Expr {right}
Expr "->" Expr -> Expr {right}
Expr "//" Expr -> Expr {right}
Expr "~" Expr -> Expr {non-assoc}
Expr "?" Id -> Expr
Expr "+" Expr -> Expr {left}
Figure 4.4.: Context-free syntax of Nix expressions: Operators
purpose—they abort evaluation if the condition is false. Note that due to the absence of an
if...then, there is no “dangling else” problem.
Figure 4.4 continues the context-free syntax. It defines operators, along with their asso-
ciativities. The meaning of the operators is given below.
Figure 4.5 defines the relative precedences of all language constructs. Functions bind the
weakest, and thus the body of a function extends maximally to the right unless the function
is enclosed in parentheses. Assertions, with-expressions and conditions are the next level,
followed by the operators. Function application binds very strongly. These priorities cause
the following example expression:
{x}: assert true; f x != !g y
to be parsed as:
({x}: (assert (true); ((f x) != (!(g y)))))
4.3. Semantics
This section gives the formal semantics of the Nix expression language.
4.3.1. Basic values
The basic (data) values of the Nix expression language, in addition to natural numbers,
strings, paths, and URIs described above, are the following:
• Null values, denoted as the built-in nullary function null.
• Booleans, denoted as the built-in nullary functions true and false.
4The reason that there is an ExprList non-terminal is precisely this: we cannot write Expr* in the
production rule for lists, since then we would not be able to give it a priority relative to function calls.
context-free priorities
Expr "." Id -> Expr
> Expr ExprList -> ExprList
> Expr Expr -> Expr
> Expr "~" Expr -> Expr
> Expr "?" Id -> Expr
> Expr "+" Expr -> Expr
> "!" Expr -> Expr
> Expr "//" Expr -> Expr
> Expr "==" Expr -> Expr
> Expr "!=" Expr -> Expr
> Expr "&&" Expr -> Expr
> Expr "||" Expr -> Expr
> Expr "->" Expr -> Expr
> "if" Expr "then" Expr "else" Expr -> Expr
> "assert" Expr ";" Expr -> Expr
> "with" Expr ";" Expr -> Expr
> Id ":" Expr -> Expr
> "{" {Formal ","}* "}" ":" Expr -> Expr
Figure 4.5.: Context-free syntax of Nix expressions: Priorities
• Subpaths are a somewhat obscure language feature that allows files in derivations to
be referenced from other derivations. This is useful if a derivation (as is typical) pro-
duces a directory tree, and we are interested in a particular file in that tree. Suppose
that we have a variable perl that refers to a derivation that builds Perl. The output
of this derivation is a directory at some path p. Suppose now that we want to use
the Perl interpreter as the builder of some other derivation. However, the interpreter
is a program stored not at p but at p + "/bin/perl". With subpaths, we can write:
derivation { ...
builder = perl ~ /bin/perl;
The operator ~ is the constructor of subpath values. We need subpaths in order to
maintain the proper dependency relation between derivations. The above might have
been written as:
derivation { ...
builder = perl.outPath + "/bin/perl";
That is, we compute the path of the builder through ordinary string concatenation
(the outPath attribute contains the computed store path of the perl derivation). As
we shall see in Section 5.4, this derivation will not properly identify perl as one of
its dependencies. It will just have an opaque string value for its builder attribute5.
5Currently, direct references to outPath are not disallowed, so this unsafe example is actually possible. It can be
disallowed by restricting access to the outPath and drvPath attributes described in Section 5.4.
All path values are internally in canonical form, meaning that they are not relative to
the current directory (i.e., start with /), do not contain . or .. elements, do not contain
redundant separators (e.g., //), and do not end in a separator. Any relative paths in a Nix
expression are absolutised relative to the directory that contained the expression. For in-
stance, if the expression /foo/bar/expr.nix contains a path ../bla/, it is absolutised
to /foo/bla. Sometimes Nix expressions are not specified in a file but given on
the command line or passed through standard input, e.g., in nix-env and nix-instantiate.
Such paths are absolutised relative to the current directory. In the semantic rules below we
will make use of a function canonicalise(p, d) (whose definition is omitted on grounds of
dullness) that yields a canonical representation of path p, absolutised relative to directory
d if necessary.
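A possible implementation of canonicalise, written as an illustrative Python sketch (it assumes .. never climbs past the root; the thesis deliberately omits the real definition):

```python
# A sketch of canonicalise(p, d): absolutise p relative to d, then remove
# ".", "..", and redundant or trailing separators.
def canonicalise(p, d):
    if not p.startswith("/"):
        p = d + "/" + p           # absolutise relative to directory d
    out = []
    for comp in p.split("/"):
        if comp in ("", "."):
            continue              # redundant separator or current dir
        elif comp == "..":
            if out:
                out.pop()         # step up to the parent
        else:
            out.append(comp)
    return "/" + "/".join(out)

assert canonicalise("../bla/", "/foo/bar") == "/foo/bla"
assert canonicalise("/a//b/./c/../d/", "/") == "/a/b/d"
```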
4.3.2. Compound values
The language has two ways to form compound data structures: lists and attribute sets.
Lists Lists are enclosed in square brackets, as described above. List elements are lazy, so
they are only evaluated when needed. Currently, the language has very limited facilities for
list manipulation. There is a built-in function map that applies a function to all elements in
a list. Lists can be included in derivations, as we will see in Section 5.4. Other than that,
there are no operations on lists.
Attribute sets The most important data type in the language is the attribute set, which is
a set of name/value pairs, e.g.,
{ x = "foo"; y = 123; }
Attribute names are identifiers, and attribute values are arbitrary expressions. The order of
attributes is irrelevant, but any attribute name can occur only once in a set. Attributes can
be selected using the . operator:
{ x = "foo"; y = 123; }.y
This expression evaluates to 123.
Recursive attribute sets allow attribute values to refer to each other. They are constructed
using the rec keyword. Formally, each attribute in the set is added to the scope of the entire
attribute set. Hence,
rec { x = y; y = 123; }.x
evaluates to 123. If rec were omitted, the identifier y in the definition of the attribute x
would refer to some y bound in the surrounding scope. Recursive attribute sets introduce
the possibility of recursion, including non-termination:
rec { x = x; }.x
A let-expression, constructed using the let keyword, is syntactic sugar for a recursive
attribute set that automatically selects the special attribute body from the set. Thus,
let { body = a ++ b; a = "foo"; b = "bar"; }
evaluates to "foobar".
As we saw above, when defining an attribute set, attribute values can be inherited from
the surrounding lexical scope or from other attribute sets. The expression
x: { inherit x; y = 123; }
defines a function that returns an attribute set with two attributes: x which is inherited from
the function argument named x, and y which is declared normally. The inherit construct is
just syntactic sugar. The example above could also be written as
x: { x = x; y = 123; }
Note that the right-hand side of the attribute x = x refers to the function argument x, not to
the attribute x. Thus, x = x is not a recursive definition.
Likewise, attributes can be inherited from other attribute sets:
rec {
as1 = {x = 1; y = 2; z = 3;};
as2 = {inherit (as1) x y; z = 4;};
}
Here the set as2 copies attributes x and y from set as1. It desugars to
rec {
as1 = {x = 1; y = 2; z = 3;};
as2 = {x = as1.x; y = as1.y; z = 4;};
}
However, the situation is a bit more complicated for rec attribute sets, whose attributes
are mutually recursive, i.e., are in each other’s scope. The intended semantics of inherit is
still the same: values are inherited from the surrounding scope. But simply desugaring is
no longer enough. So if we desugar
x: rec { inherit x; y = 123; }
to
x: rec { x = x; y = 123; }
we have incorrectly created an infinite recursion: the attribute x now evaluates to itself,
rather than the value of the function argument x. For this reason rec is actually internally
stored as two sets of attributes: the recursive attributes, and the non-recursive attributes.
The latter are simply the inherited attributes. We denote this internally used representation
of rec sets as rec {as1/as2}, where as1 and as2 are the recursive and non-recursive attributes,
respectively.
subst(subs, x) = e    if (x ⇝ e) ∈ subs
subst(subs, x) = x    otherwise

subst(subs, {as}) = { map(λ(n = e) . n = subst(subs, e), as) }

subst(subs, rec {as1/as2}) = rec { map(λ(n = e) . n = subst(subs′, e), as1)
                                 / map(λ(n = e) . n = subst(subs′, e), as2) }
    where subs′ = {x ⇝ e | x ⇝ e ∈ subs ∧ x ∉ names(as1)}

subst(subs, let {as1/as2}) = analogous to the rec case

subst(subs, x: e) = x: subst(subs′, e)
    where subs′ = {x2 ⇝ e | x2 ⇝ e ∈ subs ∧ x ≠ x2}

subst(subs, {fs}: e) = {fs}: subst(subs′, e)
    where subs′ = {x ⇝ e | x ⇝ e ∈ subs ∧ x ∉ argNames(fs)}
Figure 4.6.: subst: Substitutions
4.3.3. Substitutions
Variable substitution is an important operation in the evaluation process6. The substitution
function subst(subs, e) performs a set of substitutions subs in the expression e. The set
subs consists of substitutions of the form x ⇝ e that replace a variable x with an expression e.
Here and in the semantic rules below, the variable e ranges over expressions, x over vari-
ables, p over paths, s over strings, and n over attribute names, with subscripts as appropri-
ate. The members of attribute sets are denoted collectively as as, and we write n = e ∈ as
to denote that the attribute set as has an attribute named n with value e. The arguments of
a multi-argument function are denoted as fs (for “formals”).
The auxiliary function names(as) yields the set of attribute names occurring in the left
hand side of a set of attributes as. Likewise, argNames(fs) yields the set of names of
formal arguments from a set of formal arguments (note that formal arguments can also
contain defaults, which are left out by this function).
The function subst replaces all free variables for which there is a substitution. A variable
is free in a subexpression if it is not bound by any of its enclosing expressions. Variables
are bound in functions and in recursive attribute sets. In recursive attribute sets, only the
recursive attributes (as1) bind variables; the non-recursive attributes (as2) do not. The
function subst is shown in Figure 4.6. For all cases not specifically mentioned here, the
substitution is recursively applied to all subexpressions. For instance, for function calls the
following substitution is applied:
subst(subs, e1 e2) = subst(subs, e1) subst(subs, e2)
6The term substitution in this section has no relation to Nix’s substitute mechanism.
It is assumed that the substitution terms (i.e., the expressions in subs) contain no free
variables, so subst does not have to perform renaming to prevent name capture; the style
of evaluation used below allows us to get away with this.
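The substitution function can be sketched in Python over a toy term representation (hypothetical tuple constructors, not the actual ATerm encoding used by Nix):

```python
# A sketch of subst over a toy term representation: ("var", x),
# ("lam", x, body), ("app", e1, e2). Substitution stops at binders,
# as in Figure 4.6; substituted terms are assumed closed, so no renaming.
def subst(subs, e):
    kind = e[0]
    if kind == "var":
        return subs.get(e[1], e)              # replace if a substitution exists
    if kind == "lam":
        x, body = e[1], e[2]
        inner = {y: t for y, t in subs.items() if y != x}   # x is bound here
        return ("lam", x, subst(inner, body))
    if kind == "app":
        return ("app", subst(subs, e[1]), subst(subs, e[2]))
    return e                                  # constants etc.

# (x: f x) with f := (y: y) -- the bound x is untouched
e = ("lam", "x", ("app", ("var", "f"), ("var", "x")))
r = subst({"f": ("lam", "y", ("var", "y"))}, e)
assert r == ("lam", "x", ("app", ("lam", "y", ("var", "y")), ("var", "x")))
```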
4.3.4. Evaluation rules
The operational semantics of the language is specified using semantic rules of the form
e1 → e2 that transform expression e1 into e2. Rules may only be applied to closed terms,
i.e., terms that have no free variables. Thus it is not allowed to arbitrarily apply rules to
subexpressions.
An expression e is in normal form if no rules are applicable. Not all normal forms are
acceptable evaluation results. For example, no rule applies to the following expressions:
123 x
assert false; 123
{x = 123;}.y
({x}: x) {y = 123;}
The predicate good(e) defines whether an expression is a valid evaluation result. It is true
if e is a basic or compound value or a function (lambda), and false otherwise. Since rules
are only allowed to be applied to an expression at top level (i.e., not to subexpressions),
a good normal form is in weak head normal form (WHNF) [133, Section 11.3.1]. Weak
head normal form differs from the notion of head normal form in that right-hand sides of
functions need not be normalised. A nice property of this style of evaluation is that there
can be no name capture [10], which simplifies the evaluation machinery.
An expression e1 is said to evaluate to e2, notation e1 →* e2, if there exists a sequence
of zero or more applications of semantic rules to e1 that transform it into e2 such that
good(e2) is true; i.e., the normal form must be good. In the implementation, if the normal
form of e1 is not good, its evaluation triggers a runtime error (e.g., “undefined variable” or
“assertion failed”).
Not all expressions have a normal form. For instance, the expression
(rec {x = x;}).x
does not terminate. But if evaluation does terminate, there must be a single normal form.
That is, evaluation is confluent [5]. The confluence property follows from the fact that at
most one rule applies to any expression. The implementation detects some types of infinite
recursion, as discussed below.
The semantic rules are stated below in the following general form:
RULE-NAME : e1 → e2    if condition
That is, we can conclude that e1 evaluates to e2 (e1 → e2), if the proposition condition holds.
If there are no conditions, the rule is simply written as RULE-NAME : e1 → e2.
Attribute sets The SELECT rule implements attribute selection. This rule governs suc-
cessful selection, i.e., it applies only if the given attribute name exists in the attribute set.
SELECT : e1.n → e    if e1 →* {as} ∧ (n = e) ∈ as
Note that there is no rule for failure. If attribute n is not in as, evaluation fails and a nice
error message is printed in the actual implementation.
A recursive attribute set is desugared to a normal attribute set by replacing all occur-
rences of references to the attributes with the recursive attribute set. For instance, if e =
rec {x = f x y; y = x;}, then e is desugared to
{ x = f (e.x) (e.y);
y = e.x;
}
or in full,
{ x = f ((rec {x = f x y; y = x;}).x)
((rec {x = f x y; y = x;}).y);
y = (rec {x = f x y; y = x;}).x;
}
This desugaring is implemented by the REC rule:
REC : rec {as1/as2} → {subst(subs, as1) ∪ as2}
    where subs = {n ⇝ (rec {as1/as2}).n | n ∈ names(as1 ∪ as2)}
Contrary to what the reader might expect, this substitution does not lead to a potential ex-
plosion in the size of expressions in our implementation, since we use ATerms that employ
maximal sharing to store equal subterms exactly once, as discussed in Section 4.4.
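The maximal-sharing idea can be illustrated with a few lines of Python hash-consing (an illustration only; the real implementation uses the ATerm library):

```python
# Hash-consing sketch: building a term twice yields the same object, so the
# substituted rec expression costs one copy of each distinct subterm, not an
# exponential blow-up.
_table = {}

def mk(*term):
    return _table.setdefault(term, term)   # reuse an existing identical term

a = mk("app", mk("var", "f"), mk("var", "x"))
b = mk("app", mk("var", "f"), mk("var", "x"))
assert a is b                              # shared: stored exactly once
```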
A let-expression is just syntactic sugar that automatically selects the attribute body from
a recursive set of attributes:
LET : let {as1/as2} → (rec {as1/as2}).body
Function calls Function calls to single-argument functions (i.e., lambdas) are just plain
β -reduction in the λ -calculus [10].
β : e1 e2 → subst({x ⇝ e2}, e3)    if e1 →* (x: e3)
As we shall see in Theorem 2 (page 85), the expression e2 contains no free variables.
Therefore, there is no danger of name capture in subst.
Multi-argument function calls, i.e., functions that accept and open an attribute set, are a
bit more complicated:
β′ : e1 e2 → subst(subs, e3)    if e1 →* ({fs}: e3) ∧ e2 →* {as} ∧ match
where
match = (∀n ∈ names(as) : n ∈ argNames(fs)) ∧ (∀n ∈ fs : n ∈ names(as))
subs = {n ⇝ arg(n) | n ∈ argNames(fs)}
arg(n) = e    if n = e ∈ as
arg(n) = e    if n ∉ names(as) ∧ (n ? e) ∈ fs
Note that a multi-argument function call is strict in its argument—the attribute set—but
not in the values of the attributes. The Boolean value match ascertains whether each actual
argument matches a formal argument, and whether each formal argument without a default
matches an actual argument (note that ∀n ∈ fs does not apply to the formal arguments with
defaults, which would be ∀n ? e ∈ fs). The function arg(n) determines the actual value to
use for formal argument n, taking it from as if it has an attribute n, and using the default in
fs otherwise.
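The match predicate and the arg function translate almost directly into Python. This is an illustrative model only: formals are represented as hypothetical (name, default) pairs, with None standing for "no default" (unlike Nix, where null is itself a value).

```python
# A direct transcription of match and arg(n) for multi-argument calls.
def call(formals, body, as_):
    names = {n for n, _ in formals}
    match = all(n in names for n in as_) and \
            all(n in as_ for n, d in formals if d is None)
    if not match:
        raise TypeError("argument mismatch")   # evaluation fails
    def arg(n, d):
        return as_[n] if n in as_ else d       # actual value, else the default
    return body({n: arg(n, d) for n, d in formals})

# ({x, y ? "bar"}: x + y) {x = "foo";}
formals = [("x", None), ("y", "bar")]
assert call(formals, lambda a: a["x"] + a["y"], {"x": "foo"}) == "foobar"
```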
Conditionals Conditional expressions first evaluate the condition expression. It must
evaluate to a Boolean. (Evaluation fails if it is not.) The conditional then evaluates to one
of its alternatives.
IF : if e1 then e2 else e3 → e2    if e1 →* true
IF : if e1 then e2 else e3 → e3    if e1 →* false
Assertions An assertion assert e1; e2 evaluates to e2 if e1 evaluates to true. Otherwise,
evaluation fails.
ASSERT : assert e1; e2 → e2    if e1 →* true
Withs With-expressions with e1; e2 substitute the attributes defined in the attribute set e1
into the resulting expression e2.
WITH : with e1; e2 → subst({n ⇝ e | n = e ∈ as}, e2)    if e1 →* {as}
Operators The equality operators are defined as follows.
EQ : e1 == e2 → true     if e1 →* e1′ ∧ e2 →* e2′ ∧ e1′ = e2′
EQ : e1 == e2 → false    if e1 →* e1′ ∧ e2 →* e2′ ∧ e1′ ≠ e2′
Equality between expressions (e1′ = e2′) is syntactic. Of course, inequality (e1 != e2) is
defined as the negation of equality.
The Boolean operators are straightforward, although it is worth noting that conjunction
and disjunction (“and” and “or”) are lazy, i.e., “short-circuited”.
!e1 → true     if e1 →* false
!e1 → false    if e1 →* true
e1 && e2 → false    if e1 →* false
e1 && e2 → false    if e1 →* true ∧ e2 →* false
e1 && e2 → true     if e1 →* true ∧ e2 →* true
e1 || e2 → true     if e1 →* true
e1 || e2 → true     if e1 →* false ∧ e2 →* true
e1 || e2 → false    if e1 →* false ∧ e2 →* false
e1 -> e2 → true     if e1 →* false
e1 -> e2 → true     if e1 →* true ∧ e2 →* true
e1 -> e2 → false    if e1 →* true ∧ e2 →* false
The update operator e1 // e2 yields the right-biased union of the attribute sets e1 and e2,
that is, the set consisting of the attributes in e1 and e2, with attributes in e2 overriding those
in e1 if there are name conflicts.
UPDATE : e1 // e2 → {rightUnion(as1, as2)}    if e1 →* {as1} ∧ e2 →* {as2}
rightUnion(as1, as2) = as2 ∪ {n = e | n = e ∈ as1 ∧ n ∉ names(as2)}
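In terms of Python dictionaries, rightUnion is simply a right-biased merge (illustration only):

```python
# rightUnion(as1, as2): attributes of as2 win on name conflicts.
def right_union(as1, as2):
    return {**as1, **as2}    # later entries override earlier ones

assert right_union({"x": 1, "y": 2}, {"y": 3, "z": 4}) == {"x": 1, "y": 3, "z": 4}
```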
Addition is defined between strings and between paths.
e1 + e2 → s1 +s s2    if e1 →* s1 ∧ e2 →* s2
e1 + e2 → canonicalise(p1 +s "/" +s p2)    if e1 →* p1 ∧ e2 →* p2
where +s denotes string concatenation.
Finally, there is an operator ? to check attribute set membership. This is used to guard
the use of attributes that may not be defined, e.g.,
builder = if args ? builder then args.builder else bash;
It is implemented as follows.
e ? n → true     if e →* {as} ∧ n ∈ names(as)
e ? n → false    if e →* {as} ∧ n ∉ names(as)
Primops Primitive operations, or primops, are functions that are built into the language.
They need not be defined or imported; they are in scope by default. But since they are
normal identifiers, they can be overridden by local definitions.
The most important primop, indeed the raison d’être for the Nix expression language,
is the primop derivation. It translates a set of attributes describing a build action to a store
derivation, which can then be built. The translation process is performed by the function
instantiate, given in Section 5.4. What is relevant here is that instantiate(as) takes a set of
attributes as, translates them to a store derivation, and returns an identical set of attributes,
but with three attributes added: the attribute type with string value "derivation", and the
attributes drvPath and outPath containing the store paths of the store derivation and its
output, respectively.
So intuitively, derivation just calls instantiate:
derivation e → {instantiate(as)}    if e →* {as}
However, as I argued at the beginning of this chapter (page 63), we want derivations to be
very lazy: the expensive call to instantiate should only be made if we actually need drvPath
and outPath. For this reason, derivation postpones the computation of those attributes by
using an indirection:
DERIVATION : derivation e → {rightUnion(as, as′)}    if e →* {as}
where
as′ = {type = "derivation", drvPath = e′.drvPath, outPath = e′.outPath}
e′ = derivation! {as}
The internal (not user-accessible) primop derivation! performs the actual instantiation:
DERIVATION! : derivation! {as} → {instantiate(as)}
Thus, when we evaluate attributes that result from a call to derivation, there is essentially
no runtime cost unless we evaluate the drvPath or outPath attributes.
The next primop is import. The unary operation import p reads the file p, parses it as a
Nix expression, applies certain checks, and evaluates and returns the expression. The valid-
ity checks are that the expression must be closed (i.e., not contain any unbound variables),
that no attribute set definition defines two attributes with the same name, and similarly
that no multi-argument function definition has two formal arguments with the same name.
The requirement that imported expressions are closed is rather important: it implies that
no imported expression can refer to the local scope of the caller. Also, any relative path
constants occurring in the expression are absolutised.
e → p
──────────────────────
import e → loadExpr(p)
If p denotes a directory, the path "/default.nix" is automatically appended. This is an
organisational convenience for people who want to store each component’s Nix expression
and builder in a separate directory (as in, e.g., Figure 2.5).
The function loadExpr takes care of loading, parsing and processing the file. The astute
reader will note that loadExpr depends on the contents of the file system, that is, it depends
on some mapping of paths to file system contents. This mapping is left implicit in the
evaluation rules. It may also appear to make the evaluation calculus impure, implying
that equational reasoning (an important property of purely functional languages) no longer
holds. For instance, it might appear that import p == import p does not always evaluate to
true. However, as we will see below, due to maximal laziness this is not in fact the case.
The call import p is only evaluated once for a given p.
The import primitive is overloaded to also work on derivations:
e → {as} ∧ ("type" = et) ∈ as ∧ et → "derivation"
∧ ("drvPath" = ep) ∈ as ∧ ep → p
─────────────────────────────────
import e → loadExpr(build(p))
The function build builds a store derivation. It is defined in Section 5.5 and returns the
output path of the derivation. Overloading import to work on derivations allows Nix ex-
pressions to generate and use other Nix expressions. We will use this feature in Chapter 10
to implement build management with Nix, as it allows the automatic generation of lists of
inputs from other inputs (e.g., the list of header files imported by a C source file).
Finally, the nullary primops true, false, and null are constant symbols: they represent
themselves, and have no evaluation rules. There is also an ever-growing set of utility
functions (added in an admittedly ad hoc fashion) such as map, baseNameOf, and so on;
but the semantics of those is not very relevant to a discussion of the foundations of the Nix
expression language.
4.4. Implementation
Maximal laziness Nix expression evaluation is implemented using the ATerm library
[166], which is a library for the efficient storage and runtime manipulation of terms. The
Nix expression parser yields an ATerm encoding of the term. For example, the expression
(x: x) 123
yields a corresponding ATerm (not reproduced here). The Pos node in such a term indicates position information, which is stored for certain language constructs (notably functions and attributes) to provide better error messages.
A very nice property of the ATerm library is its maximal sharing: if two terms are
syntactically equal, then they occupy the same location in memory. This means that a
shallow pointer equality test is sufficient to perform a deep syntactic equality test. Maximal
sharing is implemented through a hash table. Whenever a new term is created, the term is
looked up in the hash table. If the term already exists, the address of the term obtained from
eval(e) :
    if cache[e] ≠ ε :
        if cache[e] = blackhole :
            Abort; infinite recursion detected.
        return cache[e]
    else :
        cache[e] ← blackhole
        e′ ← eval′(e)
        cache[e] ← e′
        return e′
Figure 4.7.: Evaluation caching
the hash table is returned. Otherwise, the term is allocated, initialised, and added to the
hash table. A garbage collector takes care of freeing terms that are no longer referenced.
Maximal sharing is extremely useful in the implementation of a Nix expression inter-
preter since it allows easy caching of evaluation results, which speeds up expression eval-
uation by removing unnecessary evaluation of identical terms. The interpreter maintains a
hash lookup table cache : ATerm → ATerm that maps ATerms representing Nix expressions
to their normal form. Figure 4.7 shows pseudo-code for the caching evaluation function
eval, which “wraps” the evaluation rules defined in Section 4.3.4 in a caching layer. The
function eval′ simply implements those evaluation rules. It is assumed that eval′ calls back
into eval to evaluate subterms (i.e., every time a rule uses the relation → in a condition),
and that it aborts with an appropriate error message if e does not evaluate to a good normal
form. Thus we obtain the desired caching. The special value ε denotes that no mapping
exists in the cache for the expression7. Note that thanks to maximal sharing, the lookup
cache[e] is very cheap: it is a lookup of a pointer in a hash table.
The function eval also performs a trick known as blackholing [134, 110] that allows
detection of certain simple kinds of infinite recursion. When we evaluate an expression e,
we store in the cache a preliminary “fake” normal form blackhole. If, during the evaluation
of e, we need to evaluate e again, the cache will contain blackhole as the normal form for
e. Due to the determinism and purity of the language, this necessarily indicates an infinite
loop, since if we start evaluating e again, we will eventually encounter it another time, and
so on.
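The caching evaluator of Figure 4.7, including blackholing, can be sketched in Python. Terms are modelled here as hashable tuples, so that (like maximally shared ATerms) syntactically equal terms are equal cache keys; the toy term language is invented for this illustration.

```python
# Sketch of a caching evaluator with blackholing. BLACKHOLE is the
# preliminary "fake" normal form used to detect infinite recursion.
BLACKHOLE = object()

def make_eval(eval_prime):
    cache = {}
    def eval_(e):
        if e in cache:                    # cache[e] ≠ ε
            if cache[e] is BLACKHOLE:
                raise RuntimeError("infinite recursion detected")
            return cache[e]
        cache[e] = BLACKHOLE              # preliminary "fake" normal form
        result = eval_prime(e, eval_)     # eval' calls back into eval
        cache[e] = result
        return result
    return eval_

# A toy term language standing in for Nix expressions:
#   ("num", n), ("add", e1, e2), and ("loop",), which evaluates itself.
def eval_prime(e, ev):
    if e[0] == "num":
        return e[1]
    if e[0] == "add":
        return ev(e[1]) + ev(e[2])
    if e[0] == "loop":
        return ev(("loop",))
```

Evaluating ("add", ("num", 1), ("num", 2)) yields 3, while evaluating ("loop",) encounters its own blackholed cache entry and aborts, mirroring the infinite-recursion detection described above.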
Note that blackholing as implemented here differs from conventional blackholing, which
overwrites a value being evaluated with a black hole. This allows discovery of self-
referential values, e.g., x = ... x ...;. But it does not detect infinite recursions like this:
(rec {f = x: f x;}).f 10
since every recursive call to f creates a new value of x, and so blackholing will not catch
the infinite recursion. In contrast, our blackholing does detect it, since it is keyed on
maximally shared ATerms that represent syntactically equal expressions. The example
7In the ATerm library, this means that the table lookup returned a null pointer.
above is evaluated as follows:
(rec {f = x: f x;}).f 10
→ {f = x: (rec {f = x: f x;}).f x;}.f 10
→ (x: (rec {f = x: f x;}).f x) 10
→ (rec {f = x: f x;}).f 10
This final expression is equal to the first (which is blackholed at this time), and so an
infinite recursion is signalled.
The current evaluation cache never forgets the evaluation result of any term. This obvi-
ously does not scale very well, so one might want to clear or prune the cache eventually,
possibly using a least-recently used (LRU) eviction scheme. However, the size of current
Nix expressions (such as those produced during the evaluation of Nixpkgs) has not com-
pelled me to implement cache pruning yet. It should also be noted that due to maximal
sharing, cached values are stored quite efficiently.
The expression caching scheme described here makes the Nix expression evaluator max-
imally lazy. Languages such as Haskell are non-strict, meaning that values such as func-
tion arguments or let-bindings are evaluated only when necessary. A stronger property is
laziness, which means that these values are evaluated at most once. (Even though the se-
mantics of Haskell only requires non-strictness, in practice all implementations are lazy.)
Finally, maximal laziness means that syntactically identical terms are evaluated at most
once. This is not the case with mere laziness: a compiler is not obliged (nor generally able)
to arrange for the evaluation result of two terms occurring in different locations in the
program, or in different invocations of a function, to be shared. For instance, there is no
guarantee that a lazy Haskell implementation will evaluate the expression product [1..1000]
only once when evaluating variable z in the following program:
x = product [1..1000]
y = product [1..1000]
z = x + y
Maximal laziness simplifies the implementation of the Nix expression evaluator. For
instance, it is not necessary to implement sharing of β-redexes. Without sharing, it is
typically necessary to implement the rule β-REDUCE as follows:
e1 → x: e3
────────────────────────
e1 e2 → let x = e2 in e3
where we give let a special “destructive update” semantics so that the evaluation result of
x is written back into the right-hand side of the let-binding [51]. This is typically how
laziness is implemented in functional languages: x is a pointer that points to a piece of
memory containing code and environment pointers (the closure or thunk [134]), which
after evaluation is overwritten with the actual result.
Error messages If Nix expression evaluation fails, or we import a syntactically or oth-
erwise invalid Nix expression, a backtrace is printed of the evaluation stack to give the user
some clue as to the cause of the problem. For instance, if we try to install Subversion from
a Nix expression that calls the one in Figure 2.9 with compressionSupport set to true, but
with zlib set to null, we get the following error message:
$ nix-env -f ./system/populate-cache.nix -i subversion
error: while evaluating the attribute `subversion'
at `./system/all-packages-generic.nix', line 1062:
while evaluating the function
at `(...)/subversion-1.2.x/default.nix', line 1:
assertion failed
at `(...)/subversion-1.2.x/default.nix', line 19
This tells the user not only what assertion triggered the error, but also where the incorrect
call to the Subversion function was made (namely, in all-packages-generic.nix at line 1062).
Backtraces in lazy languages are tricky, as a failing value may be finally evaluated in
a piece of code far removed from where the value was “produced” [55]. Consider for
example the expression
let {
f = b: {x = assert b; 123;};
body = (f false).x;
}
Its evaluation will yield the following backtrace:
error: while evaluating the file `foo.nix':
while evaluating the attribute `body' at `foo.nix', line 3:
while evaluating the attribute `x' at `foo.nix', line 2:
assertion failed at `foo.nix', line 2
Note the absence of a reference to the function f. In a strict language, f would appear in the
backtrace. But also note that since attributes are annotated with position information, we
still get the position of the definition of the attribute x in the body of f.
Another problem with backtraces in lazy languages is that tail-call optimisation [151]
may remove useful stack frames. However, the current Nix expression evaluator does not
optimise tail-calls.
In practice, however, the stack traces printed by Nix are very helpful in everyday use,
as the example above shows. The trace may not show every pertinent code location, and
it may not show them in a meaningful order, but the locations that it does give usually
give enough of a clue to figure out the cause of the problem. A useful improvement would be
a kind of slicing for assertion failures (which are the most common type of error, apart
from missing attributes or function arguments). Currently, the interpreter simply prints that
an assertion has failed, i.e., that its condition expression evaluated to false. It would be more useful to
print an explanation as to why it is false. For example, if a guard expression p && x != null
evaluates to false, Nix should print whether p is false, or x != null is false. If p is false, it
should give an explanation as to how p obtained that value. In an interpreter (as opposed
to a compiled implementation), this is not hard to implement since the original expression
is still known, not just its normal form.
Optimising substitution There is an important opportunity for optimisation in the implementation of β-reduction: if we substitute a term, e.g., replacing free occurrences of
x in e1 by e2 in the β-reduction of (x : e1) e2, we never need to substitute under the replacements of x. Consider the expression (x : y : e1) e2 e3, where e2 is a large expression.
With normal substitution, we first replace all occurrences of x in e1 with e2. Then, we
replace all occurrences of y in the resulting term with e3. This substitution also descends
into the e2 replacements of x, even though those subterms are closed. Since e2 is large, this
is inefficient.
The optimisation is that we can mark replacement terms to indicate to the substitution
function that it need not descend into such subterms. This is possible because of the fol-
lowing theorems.
Lemma 1. Every term to which a semantic rule is applied during evaluation is closed.
Proof. The property follows by induction on the application of semantic rules. The base
case is the initial terms produced by the parser. Here the induction hypothesis holds trivially
since the parser checks that these terms are closed and aborts with an error otherwise.
For the inductive step, we must show two things.
First, we must show that each rule that takes a closed term produces a closed term. This
can be easily verified. For instance, β-REDUCE yields the body of the function, e3, with
its sole free variable x substituted by the argument. e3 cannot have additional free variables
because the function call e1 e2 is closed, and function calls do not bind variables. Likewise,
the expression yielded by SELECT must be closed because the attribute set in which it was
contained is closed, and attribute sets do not bind variables.
Second, we must show that subterms evaluated in the conditions of rules are closed. This
can also be easily verified. For instance, β-REDUCE recursively evaluates the expression
e1 in a function call e1 e2. Since by the induction hypothesis e1 e2 is closed, it follows that
e1 is also closed, as function calls do not bind variables.
Theorem 2. All substitution terms occurring during evaluation are closed.
Proof. This property follows by inspecting all calls to subst in the evaluation rules and
observing that the substitution terms (i.e., the expressions in subs) are always closed. For
instance, in the rule β-REDUCE, subs = {x ⇝ e2}. The term e2 is closed because by
Lemma 1 the call e1 e2 is closed, and function calls bind no variables. A similar argument
holds for the subst calls in REC, β-REDUCE′ and WITH.
Since substitution terms are always closed, we can adapt the variable case of the substi-
tution function subst in Figure 4.6 as follows:
subst(subs, x) = closed(e)    if (x ⇝ closed(e)) ∈ subs
                 closed(e)    if (x ⇝ e) ∈ subs
That is, replacement terms e are placed inside a wrapper closed(e). (The first case merely
prevents redundant wrapping, e.g., closed(closed(e)), which reduces the effectiveness of
caching and blackholing.) The wrapper denotes that e is a closed subterm under which
no substitution is necessary, since it has no free variables. To actually make use of this
optimisation, we also add a case to subst to stop at closed terms:
subst(subs, closed(e)) = closed(e)
Of course, during evaluation we must get rid of closed eventually. That’s easy:
CLOSED : closed(e) → e
as a closed term is semantically equivalent to the term that it wraps.
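The closed-term optimisation can be sketched in Python over a toy term representation, with tuples standing in for ATerms. The term constructors and helper names are invented for this illustration; the point is that substitution wraps replacement terms in a closed marker and never descends below one.

```python
# Terms: ("var", x) | ("app", e1, e2) | ("lam", x, e) | ("closed", e).
# Substitution assumes all replacement terms are closed (Theorem 2).

def closed(e):
    # Avoid redundant wrapping closed(closed(e)).
    return e if e[0] == "closed" else ("closed", e)

def subst(subs, e):
    if e[0] == "closed":
        return e                       # stop: no free variables inside
    if e[0] == "var":
        x = e[1]
        return closed(subs[x]) if x in subs else e
    if e[0] == "lam":
        x, body = e[1], e[2]
        inner = {k: v for k, v in subs.items() if k != x}  # x is shadowed
        return ("lam", x, subst(inner, body))
    if e[0] == "app":
        return ("app", subst(subs, e[1]), subst(subs, e[2]))
    return e
```

Substituting a large term for x wraps it once; a later substitution pass returns immediately at the wrapper instead of traversing the replacement again.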
5. The Extensional Model
In Chapter 2, we have seen that Nix offers many advantages over existing deployment
systems, such as reliable dependency information, side-by-side deployment of versions
and variants, atomic upgrades and rollbacks, and so on. And as explained in Chapter 3,
it obtains these properties by imposing a “pointer discipline” on file systems, thus treat-
ing the component deployment problem similarly to memory management in programming
languages.
The current and subsequent chapters formalise the operation of the Nix system. This
includes the translation of Nix expressions into store derivations, building store derivations,
the substitute mechanism, garbage collection, the invariants of the Nix store that must hold
to ensure correct deployment, and so on.
There are actually two different variants or models of Nix. The original model is the
extensional model, described in this chapter. It is the model with which we have had the
most experience. It implements all of the deployment properties described in Chapter 2.
The next chapter describes the intensional model which is more powerful in that it allows
the sharing of a Nix store between mutually untrusted users; however, it also places slightly
stricter requirements on components. Most of the aspects of the extensional model carry
over without modification to the intensional model.
This chapter discusses all aspects of the semantics of the extensional model: the basics of
cryptographic hashing (Section 5.1), the notion of File System Objects and the organisation
of the Nix store (Section 5.2), the operation of adding sources and other “atomic” values
to the store (Section 5.3), translating Nix expressions to store derivations (Section 5.4),
building store derivations (Section 5.5), and garbage collection (Section 5.6). It concludes
with an explanation of the notion of extensionality (Section 5.7), which sets up the next
chapter on the intensional model.
5.1. Cryptographic hashes
Since Nix heavily uses cryptographic hash functions in store paths and in other places,
this section introduces some properties and notations that will be used in the remainder of
Part II.
A hash function [36] is a function h : A → B that maps values from a possibly infinite
domain A onto a finite range B. For instance, h(x ∈ N) = x mod 27 is a hash function that
maps natural numbers to the numbers [0..26]. Given value x ∈ A and y = h(x) ∈ B, the
value x is called the preimage and y is called the hash of x. Since A is typically larger than
B, there must necessarily exist distinct values x1, x2 ∈ A such that h(x1) = h(x2). These are collisions
of the hash function h. A good hash function will have the property that collisions can be
expected to occur with low probability given the sets of values that will be fed into the hash
function in typical usage patterns.
A cryptographic hash function [145] is a hash function that maps arbitrary-length byte
sequences onto fixed-length byte sequences, with a number of special properties:
• Preimage resistance: given a value y ∈ B, it should be computationally infeasible to
find an x ∈ A such that y = h(x). Computational infeasibility means that ideally the
most efficient way to find x is a brute-force attack, that is, trying all possible values
in A. This means computing |A|/2 hashes on average.
• Collision resistance: it should be infeasible to find any x1, x2 ∈ A such that h(x1) =
h(x2). Due to the birthday paradox [145], a brute-force attack will find a collision
with probability 1/2 after computing approximately √|B| hashes.
• Second preimage resistance: given an x1 ∈ A, it should be infeasible to find another
x2 ∈ A such that h(x1) = h(x2). The difference with collision resistance is that here
x1 is given.
For our purposes collision resistance and second preimage resistance are what matters—
it should not be feasible to find collisions. This is since we do not ever want two different
components in the Nix store to be stored in the same store path. Preimage resistance—
infeasibility of inverting the hash—is not important. We do not care if it is possible to find
a component given a store path.
Prominent examples of cryptographic hash functions are MD5 [141] and SHA-1 [127].
Nix initially used MD5, which produces 128-bit hashes. However, MD5 has recently been
broken [178] in the sense that it is now possible to find collisions with a work effort of a
few hours on conventional hardware. That is, collision resistance has been broken, but this
technique does not find preimages or second preimages.
In the context of Nix, a collision attack such as the one against MD5 means that it
is possible to construct two components that hash to the same store path. Specifically, an
attacker could create two components, one benevolent and one malevolent (e.g., containing
a Trojan horse—a malicious component masquerading as legitimate software), with the
same MD5 hash. The attacker would then publish a Nix expression that uses fetchurl
to download the benevolent component from a server, and finally replace the benevolent
component with the malevolent one on the server. Since on download the hash continues
to match, the deception would not be noticed. Constructing such components is quite easy
(see, e.g., [156, Section 8.4.4], and [120, 105] for examples of the construction of harmless
and harmful executables with matching MD5 hashes). Thus collisions of the underlying
hash are a serious concern.
SHA-1 is a very widely used hash function. It produces 160-bit hashes, making it more
resistant to birthday attacks in theory, as a brute-force attack requires 2⁸⁰ hashes on average.
However, recent attacks [177, 146] have reduced the effort of finding a collision to
less than 2⁶³ hashes. This is still a substantial effort, but it creates enough doubt regarding
the future security of SHA-1 to require a transition away from it (“Attacks always get
better; they never get worse.” [147]).
There are a number of strengthened variants of SHA-1 that produce larger hashes. These
are SHA-224, SHA-256, SHA-384, and SHA-512 [127]. While there are strong doubts
about the future security of these hashes, they represent the best choice until a next-
generation hash function becomes available (e.g., through a similar process as the AES
selection [126]).
The Nix system uses SHA-256 hashes. However, MD5 and SHA-1 are available in
functions such as fetchurl for compatibility. The use of these hashes is still safe if they
apply to components from a trusted source, since it remains infeasible for a third-party
attacker to find second preimages. However, we should eventually migrate away from these hashes.
Notation Byte sequences are finite-length sequences of bytes. The size of byte sequence
c is denoted |c|. Given a byte sequence c of size |c|, we write c[i], 0 ≤ i < |c| to denote
the (i + 1)’th byte in the sequence. In examples, we write byte sequences initialised from
ASCII characters between quotes, e.g., "Hello" is a byte sequence of length 5.
Given a byte sequence c, we write h = hasht (c) to denote the byte sequence obtained
from the cryptographic hash function algorithm t, where t ∈ {md5, sha1, sha256}.
The function printHash16 returns a hexadecimal string representation of a byte sequence,
useful for printing hashes. It is defined as follows:
printHash16(c) = ∑_{0 ≤ i < |c|} digits16[c[i] div 16] + digits16[c[i] mod 16]
where + and ∑ denote string concatenation, and digits16 is the byte sequence of hexadeci-
mal ASCII characters, i.e., "0123456789abcdef". For example,
printHash16(hashmd5("Hello World")) = "b10a8db164e0754105b7a99be72e3fe5"
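The hexadecimal rendering is easily reproduced; the following Python sketch, using the standard hashlib module, mirrors the definition and the MD5 example above:

```python
import hashlib

digits16 = "0123456789abcdef"

def print_hash16(c: bytes) -> str:
    # Each byte b contributes digits16[b div 16] followed by digits16[b mod 16].
    return "".join(digits16[b // 16] + digits16[b % 16] for b in c)

print(print_hash16(hashlib.md5(b"Hello World").digest()))
# → b10a8db164e0754105b7a99be72e3fe5
```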
There is also a function printHash32 that returns a base-32 string representation of a byte
sequence. It produces representations of length ⌈|c| × 8 / lg 32⌉ = ⌈|c| × 8/5⌉. Since this is
shorter than the representations produced by printHash16 (which have length |c| × 2), it is
useful inter alia for the representation of hashes in store paths. It is defined as follows:

printHash32(c) = ∑_{⌈|c|×8/5⌉ > i ≥ 0} digits32[(∑_{0 ≤ j < |c|} c[j] × 256^j) div 32^i mod 32]
That is, interpreting c as a base-256 encoding of some number, with digits from least
significant to most significant (i.e., a little-endian number), we print the number in base-
32 with digits from most significant to least significant.
(Note that the outer summation
denotes string concatenation, while the inner summation denotes integer addition.) The set
of digits is
digits32 = "0123456789abcdfghijklmnpqrsvwxyz"
i.e., the alphanumerics excepting the letters e, o, u, and t. This is to reduce the possibility
that hash representations contain character sequences that are potentially offensive to some
users (a known possibility with alphanumeric representations of numbers [11]). Here is an
example of printHash32:
printHash32(hashsha1("Hello World")) = "s23c9fs0v32pf6bhmcph5rbqsyl5ak8a"
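The base-32 rendering can likewise be reproduced in Python (a sketch following the definition above, emitting base-32 digits from most to least significant):

```python
import hashlib
from math import ceil

digits32 = "0123456789abcdfghijklmnpqrsvwxyz"  # e, o, u, t omitted

def print_hash32(c: bytes) -> str:
    # Interpret c as a little-endian base-256 number...
    n = sum(c[j] * 256 ** j for j in range(len(c)))
    width = ceil(len(c) * 8 / 5)
    # ...and print it in base 32, most significant digit first.
    return "".join(digits32[(n // 32 ** i) % 32] for i in reversed(range(width)))

print(print_hash32(hashlib.sha1(b"Hello World").digest()))
# → s23c9fs0v32pf6bhmcph5rbqsyl5ak8a
```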
It is sometimes necessary to truncate a hash to a specific number of bytes n. For instance,
below we will truncate 32-byte SHA-256 hashes to 20 bytes (of course, this reduces the
security of the hash). This truncation is done by cyclically XORing the input with an
initially zero-filled byte array of length n. Formally, the result of truncate(n, c) is defined
by the equation
truncate(n, c)[i] = ⊕_{0 ≤ j < |c|, j mod n = i} c[j]
where ⊕ denotes the XOR operator.
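The cyclic XOR truncation is straightforward to express in Python (an illustrative sketch of the equation above):

```python
def truncate(n: int, c: bytes) -> bytes:
    """Truncate c to n bytes by cyclically XORing its bytes into an
    initially zero-filled array of length n."""
    out = bytearray(n)
    for j, byte in enumerate(c):
        out[j % n] ^= byte
    return bytes(out)

# Bytes 0 and 2 fold onto position 0: 0x01 XOR 0x03 = 0x02.
print(truncate(2, bytes([0x01, 0x02, 0x03])).hex())
# → 0202
```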
5.2. The Nix store
5.2.1. File system objects
The Nix store is a directory in the file system that contains file system objects (FSOs)1, such
as files and directories. In the context of the Nix store, FSOs typically represent software
components, user environments and store derivations. As we shall see below, Nix must
perform various operations on FSOs, such as computing cryptographic hashes over their
contents, scanning for strings, rewriting strings, and so on. Thus, it is useful to formalise
the notion of FSOs.
Operating systems have different kinds of objects that live in a file system. All modern
operating systems at the very least have hierarchical directories and files that consist of
unstructured sequences of bytes (as opposed to, say, files consisting of records). Beyond
that, there are many extensions:
• Symbolic links on Unix.
• Permissions, ranging from MS-DOS’s read-only attribute, through Unix’s access
rights, to complex Access Control Lists (Windows NT, various Unixes).
• Extended attributes (OS/2, Windows NT) that allow arbitrary name/value pairs to be
associated with files.
• Streams (Windows NT) or resource forks (Mac OS) that extend the notion of a file
from a single sequence of bytes to multiple named sequences of bytes.
• Forests of directory hierarchies (e.g., as formed by drive letters in Windows).
• Hard links that turn file systems into directed acyclic graphs instead of trees.
• Device nodes, FIFOs, named pipes, sockets, and other types of files with special semantics.
Fortunately, for our purposes—which is storing components in the Nix store—most of
these features are either not relevant (drive letters, device nodes, etc.), can be ignored (hard
links), or are never used in practice (streams). Some others, such as permissions, we will
ignore to make life simpler.
1In the Unix tradition, the term file refers to any file system object (i.e., an FSO), while the term regular file
refers to what most people call a file. This thesis uses the term FSO to avoid the ambiguity.
data FSO = Regular Executable ByteStream
| Directory [(FileName, FSO)]
| SymLink ByteStream
data Executable = Executable
| NonExecutable
Figure 5.1.: Abstract syntax for file system objects
What we absolutely cannot ignore for deployment of Unix software is regular files,
directories, and symbolic links. Also, while we can get away with disregarding most file
permissions, the executable bit must be maintained2. This leads to the grammar for FSOs
shown in Figure 5.1. (The data type for FSOs shown in this figure uses the Haskell-like
notation described in Section 1.7.) A ByteStream is a sequence of 0 or more bytes, and
a FileName is a sequence of 1 or more bytes not equal to 0. For example, the file system
object
Directory [
("foo", Regular NonExecutable "Hello World"),
("bar", Directory [
("xyzzy", SymLink "../foo")
])
]
defines a directory with two files: a non-executable file foo with contents Hello World, and a
subdirectory bar containing a symlink called xyzzy to foo in the parent directory. Note that
the top-level directory is anonymous. Also note that we do not keep any meta-information
about files other than the executable bit; notably, time stamps are discarded.
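The grammar of Figure 5.1 transliterates naturally into, e.g., Python dataclasses. This is an illustrative model invented for exposition; as noted in the text, the implementation of Nix does not use such a type directly.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Regular:
    executable: bool
    contents: bytes

@dataclass(frozen=True)
class SymLink:
    target: bytes

@dataclass(frozen=True)
class Directory:
    entries: Tuple  # pairs (name, FSO)

FSO = Union[Regular, SymLink, Directory]

# The example FSO from the text: a file foo and a directory bar
# containing a symlink xyzzy to ../foo.
example = Directory((
    (b"foo", Regular(False, b"Hello World")),
    (b"bar", Directory(((b"xyzzy", SymLink(b"../foo")),))),
))
```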
The type of FSOs defined in Figure 5.1 is not actually used in the implementation of Nix.
They are merely a device used in this chapter to formalise certain aspects of the operation
of Nix. In the pseudo-code algorithms in this chapter, I will use the imperative functions
fso ← readPath(p) to denote reading the FSO fso stored at path p, and writePath(p, fso)
to denote writing FSO fso to path p. For presentational simplicity, I ignore error handling.
To perform operations on FSOs such as computing cryptographic hashes, scanning for
references, and so on, it is useful to be able to serialise FSOs into byte sequences, which
can then be deserialised back into FSOs that are stored in the file system. Examples of
such serialisations are the ZIP and TAR file formats. However, for our purposes these
formats have several problems:
• They do not have a canonical serialisation, meaning that given an FSO, there can
be many different serialisations. For instance, TAR files can have variable amounts
of padding between archive members; and some archive formats leave the order
2Nix should also support setuid executables, which are programs that are executed under a different user ID
than the caller. However, it is not entirely clear how to support these in a clean way.
of directory entries undefined. This is bad because we use serialisation to compute
cryptographic hashes over FSOs, and therefore require the serialisation to be unique.
Otherwise, the hash value can depend on implementation details or environment
settings of the serialiser. The canonicity of archives will be particularly important in
Section 7.5, when we apply binary patches to archives.
• They store more information than we have in our notion of FSOs, such as time
stamps. This can cause FSOs that Nix should consider equal to hash to different
values on different machines, just because the dates differ.3
• As a practical consideration, the TAR format is the only truly universal format in the
Unix environment. It has many problems, such as an inability to deal with long file
names and files larger than 2³³ bytes. Current implementations such as GNU Tar
work around these limitations in various ways.
For these reasons, Nix has its very own archive format—the Nix Archive (NAR) format.
Figure 5.2 shows the serialisation function serialise(fso) that converts an FSO to a byte
sequence containing a NAR archive. The auxiliary function concatMap( f , xs) applies the
function f to every element of the list xs and concatenates the resulting byte sequences.
The function sortEntries sorts the list of entries in a directory by name, comparing the
byte sequences of the names lexicographically. The function serialise is typically used in
conjunction with readPath, e.g.,
c ← serialise(readPath(p)).
Of course, there is also a deserialisation operation deserialise(c) (typically used with
writePath) that converts a byte sequence c containing a NAR archive to an FSO. It is the
inverse of serialise and is not shown here. The following identity holds:
deserialise(serialise(fso)) = fso
Note that deserialise is a partial function, since most byte sequences are not valid NAR archives.
5.2.2. Store paths
A store object is a top-level file system object in the Nix store, that is, a direct child of
the Nix store directory. Indirect subdirectories of the Nix store are not store objects (but
are contained in FSOs that are store objects).
A store path is the full path of a store object. It has the following anatomy:

storeDir/hash-name
The first part is the path of the Nix store, denoted storeDir. The second is a 160-bit cryp-
tographic hash in base-32 representation. Finally, there is a symbolic name intended for
human consumption. An example of a store path is:
3This is no longer a big issue, since Nix nowadays canonicalises all FSOs added to the store by setting irrelevant
metadata to fixed values (see page 112). For instance, it sets the time stamps on all files to 0 (1 Jan 1970,
00:00:00 UTC). However, metadata such as the last-accessed time stamp might still cause non-canonicity.
serialise(fso) = str("nix-archive-1") + serialise′(fso)

serialise′(fso) = str("(") + serialise′′(fso) + str(")")

serialise′′(Regular exec contents) =
    str("type") + str("regular")
    + (str("executable") + str("")   if exec = Executable
       ε                             if exec = NonExecutable)
    + str("contents") + str(contents)

serialise′′(SymLink target) =
    str("type") + str("symlink")
    + str("target") + str(target)

serialise′′(Directory entries) =
    str("type") + str("directory")
    + concatMap(serialiseEntry, sortEntries(entries))

serialiseEntry((name, fso)) =
    str("entry") + str("(")
    + str("name") + str(name)
    + str("node") + serialise′(fso)
    + str(")")
str(s) = int(|s|) + pad(s)
int(n) = the 64-bit little endian representation of the number n
pad(s) = the byte sequence s, padded with 0s to a multiple of 8
Figure 5.2.: serialise: Serialising file system objects into NAR archive
On the other hand,
is not a store path (but it has a prefix that is a store path).
Usually, storeDir is /nix/store, and in this thesis we will tacitly assume that it is. For
binary deployment purposes (Section 5.5.3), it is important that Nix store locations on the
source and target machines match.
The symbolic name consists of one or more characters drawn from the set of letters (in
either case), digits, and the special characters in the set "+-._?=".
We will assume that store paths are always fully canonicalised (see page 73), i.e., they
are absolute, do not contain . or .. elements, and do not have redundant separators (/). The
store path above is in canonical form; a non-canonical example of the same path is:
No component of the store location storeDir is allowed to be a symbolic link. The
reason is that some builders canonicalise paths by resolving symlink components. These
paths may then be stored in derivation outputs as retained dependencies. Then, if storeDir
has different symlink components on different machines, builds on these machines might
not be interchangeable.
In the remainder, we denote the infinite set of syntactically correct store paths as Path.
Syntactic correctness depends on the value of storeDir—the location of the Nix store—but
we leave that implicit. We will use the convenience function hashPart(p) to extract the
hash part of a store path. Likewise, namePart(p) yields the symbolic name part of p.
How do we compute the hash part of store paths? The hash computation has some
important properties. First, both the Nix store path and the symbolic name are part of the
hash. This means that Nix stores in different locations yield different hash parts, and that
sources or outputs that are identical except for their symbolic names have different hash
parts. This is important because the hash scanning approach to dependency determination
described in Section 3.4 only looks at the hash parts of paths, not the symbolic name.
Second, different types of store objects—notably sources and outputs—have different
hashes. This prevents users from adding sources that “impersonate” outputs. In unshared
Nix stores, this is not a major issue, but it is important for security in a shared store that
uses the technique described in Section 6.2 to enable sharing of locally built outputs.
These properties are achieved in the function makePath, which computes store paths as
follows:
makePath(type, descr, name) =
    storeDir + "/" + printHash32(truncate(20, hashsha256(s))) + "-" + name
where s = type + ":sha256:" + descr + ":" + storeDir + ":" + name
(Examples will be shown in the remainder of this chapter.) Thus, the hash part of the
store path is a base-32 representation of a 160-bit truncation of a SHA-256 hash of the
variable elements that must be represented in the hash, namely, the location of the Nix store
storeDir, the type of the store object (e.g., "source"), a unique descriptor string describing
the store object (e.g., a cryptographic hash of the contents of the FSO in case of sources),
and the symbolic name.
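A sketch of makePath in Python follows. The alphabet below is the base-32 alphabet that Nix uses (digits and lower-case letters, omitting e, o, u, and t), but the rendering here is a simplified big-endian variant; the bit order of the actual implementation is a detail not specified in this chapter.

```python
import hashlib

# Base-32 alphabet: 0-9 plus lower-case letters without e, o, u, t.
B32 = "0123456789abcdfghijklmnpqrsvwxyz"

def print_hash32(h: bytes) -> str:
    """Render a byte string in base-32 (simplified big-endian variant)."""
    n = int.from_bytes(h, "big")
    width = (len(h) * 8 + 4) // 5   # ceil(bits / lg 32) digits
    return "".join(B32[(n >> (5 * i)) & 31] for i in reversed(range(width)))

def make_path(store_dir: str, type_: str, descr: str, name: str) -> str:
    """Sketch of makePath: hash the variable elements, truncate to 160 bits."""
    s = f"{type_}:sha256:{descr}:{store_dir}:{name}"
    h = hashlib.sha256(s.encode()).digest()[:20]   # truncate(20, ...)
    return f"{store_dir}/{print_hash32(h)}-{name}"
```

Since both store_dir and name enter the hashed string s, moving the store or renaming the object changes the hash part, exactly as the text requires.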
Why do we truncate the SHA-256 hash to 160 bits? Nix originally used base-16 (hex-
adecimal) representations of 128-bit MD5 hashes, giving a hash part of 128/ lg 16 = 32
characters. Since MD5 was recently broken (see Section 5.1), a decision was made to
switch to 160-bit SHA-1 hashes. The hexadecimal representation was at that time replaced
with a base-32 representation, which has the pleasant property of keeping the length at
160/ lg 32 = 32 characters. (Some builders strip off hash parts from store paths using a
hard-coded constant of 32 characters (clearly a bad practice), so these did not need to be
changed.) However, as attacks against SHA-1 were also published, SHA-256 was used
instead. Unfortunately, hash parts of ⌈256/ lg 32⌉ = 52 characters are a bit too long: to-
gether with the symbolic name, they exceed the maximum file name lengths of some file
systems. In particular, the Joliet extension to ISO-9660 has a maximum file name length
of 64 characters [38]. This would make it difficult to burn Nix stores on a CD-ROM, a
potentially useful distribution method. For this reason the SHA-256 hash is truncated to
160 bits. The assumption is that this is no less secure than 160-bit SHA-1, and possibly
more secure due to its different structure. However, given that most modern hashes share
the same basic design, the security of all of them might be ephemeral.
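The digit counts quoted above all follow from ⌈bits / lg base⌉; a quick mechanical check (the helper function is ours):

```python
import math

def hash_part_len(bits: int, base: int) -> int:
    """Number of digits needed to render `bits` bits in the given base."""
    return math.ceil(bits / math.log2(base))

assert hash_part_len(128, 16) == 32   # MD5 in hexadecimal
assert hash_part_len(160, 32) == 32   # SHA-1, or truncated SHA-256, in base-32
assert hash_part_len(256, 32) == 52   # full SHA-256 in base-32
```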
5.2.3. Path validity and the closure invariant
Nix maintains meta-information about store paths in a number of database tables. This
information includes the following:
• The set of valid paths, which are the paths that are “finished”, i.e., whose contents
will no longer change. For instance, the output path of a derivation isn’t marked
as valid until the builder that produces it finishes successfully. The contents of an
invalid path are undefined (except when the FSO is under construction, in which
case a lock is used to indicate this fact, as we shall see below).
• The references graph (i.e., the deployment analogue of the pointer graph in program-
ming languages). For each valid path, Nix maintains the set of references to other
valid paths discovered by scanning for hash parts.
• The substitutes that have been registered by tools such as nix-pull (Section 2.6).
• Traceability information: for each valid output path, Nix remembers the derivation
that built it.
The current Nix implementation stores this information in a transactional Berkeley DB
database. As we shall see, transactions are important to ensure that the store invariants
described below always hold, that interrupted operations can be restarted, and that multiple
operations can be safely executed in parallel.
The most important pieces of information in the database are the set of valid paths and
the references graph, which are defined here. The other database tables are defined below.
The type of a database table t is given as t : A → B, where A is the type of the key and B is
the type of the value corresponding to a key; t[x] denotes the value corresponding to key x.
The special value ε indicates that no entry occurs for a value in the given mapping. Setting
of table entries is denoted by t[x] ← y.
The mapping valid : Path → Hash serves two purposes. First, it lists the set of valid
paths—the paths that have been successfully built, added as a source, or obtained through
a substitute. It does not include paths that are currently being built, or that have been left
over from failed operations. Thus, valid[p] ≠ ε means that path p is valid. The following
invariant states that valid paths should exist.
Invariant 1 (Validity invariant). If a path is valid, it exists in the Nix store:
∀p ∈ Path : valid[p] ≠ ε → pathExists(p)
Second, the valid mapping stores a SHA-256 cryptographic hash of the contents of each
valid path. That is, when the path is made valid, the valid entry for the path is set as follows:
valid[p] ← "sha256:" + printHash32(hashsha256(serialise(readPath(p))))
(The name of the hash algorithm is incorporated to facilitate future upgrading of the hash
algorithm.) For instance, given a valid path /nix/store/bwacc7a5c5n3...-hello-2.1.1 with an
FSO whose serialisation has SHA-256 hash 085xkvh8plc0..., we have
valid["/nix/store/bwacc7a5c5n3...-hello-2.1.1"] = "sha256:085xkvh8plc0..."
The purpose of storing these content hashes is to detect accidental tampering with store
objects, which are supposed to be read-only. The command nix-store --verify --check-
contents checks that the contents of store paths are still consistent with their stored hashes.
That is, the following invariant must hold.
Invariant 2 (Stored hash invariant). The contents at valid store paths correspond to the
cryptographic hashes stored in the valid table:
∀p ∈ Path : valid[p] ≠ ε →
    valid[p] = "sha256:" + printHash32(hashsha256(serialise(readPath(p))))
The mapping references : Path → {Path} maintains the references graph, i.e., the set
of store paths referenced by each store object. For sources, the references are usually
empty. For store derivations, they are exactly the union of the store paths listed in its
inputSrcs and inputDrvs fields. For output paths, they are found through scanning using
the method outlined in Section 3.4. As we will see in Section 5.6.3, the references graph
is acyclic except for trivial cycles defined by self-references, i.e., when p ∈ references[p].
Self-references occur when a builder stores its output path in its output.
An important property of the Nix store is that at all times the following invariant holds.
Invariant 3 (Closure invariant). The set of valid paths is closed under the references
relation:
∀p ∈ Path : valid[p] ≠ ε → ∀p′ ∈ references[p] : valid[p′] ≠ ε
The closure of a valid path p is the set of all paths reachable by traversing references
from p:
closure(p) = {p} ∪ ⋃_{p′ ∈ references[p]} closure(p′)
As I explained in Section 2.1, the closure is vitally important for correct deployment, since
the closure of a component is the set of paths that might be accessed in an execution
involving the component. The closure is what must be copied in its entirety when we
deploy the component. Invariant 3 (closure) and Invariant 1 (validity) together give us
complete deployment: if a path p is valid, then all its potential runtime dependencies are
also valid; and all these paths exist in the file system.
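Computing the closure is a straightforward graph traversal. The sketch below assumes the references mapping is given as an in-memory dictionary from paths to sets of paths; the visited set makes it tolerate the self-references mentioned above.

```python
def closure(references: dict, p: str) -> set:
    """Set of all paths reachable from p by traversing references.
    Self-references (the only cycles, per Section 5.6.3) are handled
    by tracking already-visited paths."""
    seen = set()
    todo = [p]
    while todo:
        q = todo.pop()
        if q in seen:
            continue
        seen.add(q)
        todo.extend(references.get(q, set()))
    return seen
```

For instance, with references = {"app": {"glibc", "app"}, "glibc": {"glibc"}}, closure(references, "app") yields {"app", "glibc"}: the self-references do not cause the traversal to loop.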
The references graph is also maintained in the opposite direction. Nix maintains a map-
ping referers : Path → {Path} that defines the transpose of the graph defined by the refer-
ences mapping4. Thus, the following invariant must hold.
4The referers table should properly be spelled referrers. However, the Nix implementation unwittingly followed
the unfortunate tradition established by the HTTP standard, which has a misspelled Referer header field [57].
Invariant 4 (Integrity invariant). The graphs defined by the references and referers map-
pings are each other’s transposes:
∀p, p′ ∈ Path : p′ ∈ references[p] ↔ p ∈ referers[p′]
We can also compute the closure of a valid path under the referers relation (the referrer
closure):
closureT(p) = {p} ∪ ⋃_{p′ ∈ referers[p]} closureT(p′)
This is the set of all paths that can reach path p in the references graph. Note that while
closure(p) is constant for a valid path (i.e., can never change due to the immutable nature
of valid objects), closureT (p) is variable, since additional referrers can become valid (e.g.,
due to building), and existing referrers can become invalid (due to garbage collection).
The main application of referrers is to allow users to gain insight in the dependency
graph. The command nix-store --query --referers allows users to query what paths refer to
a given path. For instance,
$ nix-store -q --referers /nix/store/wa2xjf809y7k...-glibc-2.3.5
/nix/store/vfps6jpav2h6...-gnutar-1.15.1 [...]
shows the components in the store that use a particular instance of Glibc. Likewise, the
command nix-store --query --referers-closure prints all paths that directly or indirectly refer
to the given path.
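In a sketch, the referers mapping need not be maintained incrementally: it can be derived as the transpose of references (Invariant 4), after which the referrer closure is just the ordinary closure computed over the transposed graph. The function name below is ours.

```python
def transpose(references: dict) -> dict:
    """Compute the referers mapping as the transpose of references:
    p′ ∈ references[p] iff p ∈ referers[p′] (Invariant 4)."""
    referers = {p: set() for p in references}
    for p, deps in references.items():
        for q in deps:
            referers.setdefault(q, set()).add(p)
    return referers
```

For example, if both gnutar and hello reference glibc, then transpose(references)["glibc"] contains both of them, which is exactly what nix-store -q --referers reports.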
5.3. Atoms
A basic operation on the Nix store is to add atoms to the store. Atoms are store objects that
are not derived, that is, are not built by performing a store derivation. The foremost exam-
ples of atoms are sources referenced from derivations in Nix expressions (e.g.,
in Figure 2.6), and store derivations.
The operation addToStore(fso, refs, name) shown in Figure 5.3 adds an FSO fso to the
Nix store. The store path is computed from the cryptographic hash of the serialisation of
fso, and the symbolic name name. The references entry for the resulting path is set to refs.
The operation works as follows. It computes the cryptographic hash over the serialisa-
tion of the FSO, and uses that and name to construct the store path p 45 . If path p is not
valid 47 , the FSO is written to path p 48 . The path is marked as valid, with the references
of p set to refs 49 . The references are set by the helper function setReferences 50 , which
also updates the corresponding referrer mappings. The remainder of the code is concerned
with concurrency.
The hash part of path p is essentially a cryptographic hash of the atom being added.
Therefore there is a correspondence between the file name of the store object created
by addToStore, and its contents. As a consequence, knowing the contents of an FSO
(and its symbolic name) also tells one its store path. This property is known as content-
addressability. It does not hold for all store objects: derivation outputs are not content-
addressable, since their output paths are computed before they are built (as we will see in
addToStore(fso, refs, name) :
    c ← serialise(fso)
    h ← hashsha256(c)
    p ← makePath("source", printHash16(h), name) 45
    if valid[p] = ε : 46
        lock ← acquirePathLock(p)
        if valid[p] = ε : 47
            if pathExists(p) : deletePath(p)
            writePath(p, fso) 48
            in a database transaction : 49
                valid[p] ← h
                setReferences(p, refs)
        releasePathLock(p, lock)
    return p

setReferences(p, refs) : 50
    references[p] ← refs
    for each p′ ∈ refs :
        referers[p′] ← referers[p′] ∪ {p}
Figure 5.3.: addToStore: Adding atoms to the Nix store
Section 5.4). This is in fact the defining characteristic of the extensional model.
In the intensional model described in Chapter 6, content-addressability is extended to all
store objects.

Concurrency We have to take into account the remote possibility that parallel Nix
processes simultaneously try to add the same atom. If this happens, the operations should be
idempotent, and the contents of a path should never change after it has been marked as
valid. There should be no race conditions, e.g.:
1. Process A detects that the target path p is invalid, and starts writing the atom to p.
2. Process B detects that p is invalid, and wants to write the atom to the target path, but
there is an obstructing file at p. As it cannot make assumptions about invalid paths,
it deletes p and starts writing the atom to p.
3. While B is half-way through, A finishes writing and marks the path as valid. Note
that some of A’s writes have been deleted.
4. Before B finishes writing, A starts the builder of a derivation that has p as input. The
builder uses the incorrect contents of p.
There are several solutions to prevent races. The simplest one is to use a global mutex
lock on the Nix store for all operations that modify the store. These operations are the
acquirePathLock(p) :
    while true :
        fd ← open(p + ".lock", O_WRONLY|O_CREAT, 0666)
        fcntl(fd, F_SETLKW, ...F_WRLCK...) 51
        if size of file fd = 0 : 52
            return fd 53
        close(fd)

releasePathLock(p, fd) :
    deleteAndSignal(p + ".lock", fd)

deleteAndSignal(fn, fd) :
    unlink(fn)
    write(fd, "deleted") 54
    close(fd) 55
Figure 5.4.: acquirePathLock and releasePathLock: Locking on Unix
addition of atoms, the building of derivations (Section 5.5), and garbage collection (Sec-
tion 5.6). Every Nix process acquires the lock on startup, and releases it when finished.
If the lock is already held, processes block until it becomes available. This approach is
clearly unacceptable because of the unbounded latency it introduces for Nix operations.
A better solution is fine-grained locking, where a process acquires a mutex lock on the
specific store path that it wants to add. The locking operations lock ← acquirePathLock(p)
and releasePathLock(p, lock) essentially acquire and release a lock on a temporary file
p.lock, i.e., p with the extension .lock appended.
Figure 5.4 shows pseudo-code for an implementation of these operations on Unix. It
uses the POSIX standard system call fcntl, which locks byte ranges in files [152]. The
advantage of POSIX locks is that they are released automatically when the process holding
them dies, either normally or abnormally. In contrast, the use of file existence as a mutex
makes it harder to recover gracefully from abnormal termination and system crashes.
A mostly aesthetic complication is that we don’t want the lock files to hang around
indefinitely. They should be removed when they are no longer necessary, i.e., when p
has been registered as being valid. Deleting the lock file after we have released the lock
introduces a subtle race, however:
1. Process A creates the lock file p.lock and locks it.
2. Process B opens the lock file, attempts to acquire a lock, and blocks.
3. Process A sees that p is invalid, and begins to write to p. However, it is interrupted
before it can make p valid.
4. As part of its cleanup, process A releases the lock, closes and deletes the lock file,
and terminates. Note that B still has the file open; on Unix it is possible to delete
open files.
5. Process B acquires the lock.
6. Process B sees that p is invalid, and begins to write to it.
7. Process C creates the lock file p.lock and locks it. This is a different file than the one
currently opened by B—it has a different inode number. Thus C can acquire a lock.
8. Process C sees that p is invalid, and begins to write to it.
9. Now processes B and C are concurrently writing to p.
To prevent this race, we have to signal to process B that the lock file it has open has
become “stale”, and that it should restart. This is done by the function deleteAndSignal,
called by the deleting process: it writes an arbitrary string (the deletion token) to the deleted
file 54 . If another process subsequently acquires a lock 51 , but sees that the size of the file
is non-zero 52 , then it knows that its lock is stale.
On Windows, there is a much simpler solution: we just try to delete the file, which will
fail if it is currently opened by any other process. Thus, the file will be deleted by the last
process to release the lock, provided no new lockers have opened the lock file.
The dual checks for path validity are an optimisation ( 46 , 47 ). Strictly speaking, the
initial path validity check 46 can be omitted. However, the extra check prevents a lock
creation and acquisition in the common case where the path is already valid.
5.4. Translating Nix expressions to store derivations
Nix does most of its work on store derivations, introduced in Section 2.4. Store derivations
are the result of the evaluation of high-level Nix expressions. Nix expressions can express
variability, store derivations cannot—a store derivation encodes a single, specific, constant
build action.
The command nix-instantiate performs precisely this translation: it takes a Nix expres-
sion and evaluates it to normal form using the expression semantics presented in Section 4.3.4.
The normal form should be a call to derivation, or a nested structure of lists and
attribute sets that contain calls to derivation. In any case, these derivation Nix expressions
are subsequently translated to store derivations using the method described in this section.
The resulting store derivations can then be built, as described in the next section. The
command nix-store --realise does exactly that. The high-level package management com-
mand nix-env combines translation and building as a convenience to users.
The abstract syntax of store derivations is shown in Figure 5.5 in a Haskell-like [135]
syntax (see Section 1.7). The store derivation example shown in Figure 2.13 is a value
of this data type. The outputHash and outputHashAlgo fields implement so-called fixed-
output derivations and are discussed in Section 5.4.1. The other fields were explained in
Section 2.4.
So suppose that we have an expression e that evaluates to derivation e, and e evaluates
to an attribute set {as}. According to the DERIVATION and DERIVATION! rules (page 80),
e then (essentially) evaluates to the result of calling instantiate(as). instantiate is a function
that translates the argument attribute set of a call to derivation to a store derivation, writes
data StoreDrv = StoreDrv {
output : Path,
outputHash : String,
outputHashAlgo : String,
inputDrvs : [Path],
inputSrcs : [Path],
system : String,
builder : Path,
args : [String],
envVars : [(String, String)]
}
Figure 5.5.: Abstract syntax of store derivations
as a side-effect the store derivation to the Nix store5, and returns the original attribute set,
with the following three attributes added:
drvPath contains the path of the store derivation.
outPath contains the output path of the store derivation, that is, d.output.
type is set to the value "derivation". This allows instantiate to detect that attribute sets
occurring in its arguments are actually subderivations. It also allows nix-instantiate
and nix-env to see whether evaluation results are actually derivations.
The function instantiate is shown in Figure 5.6. Its main job is to construct a store
derivation 56 which is written to the Nix store using addToStore 65 . The hard part is in
filling in the fields of the store derivation from the attributes as. In particular, we need to
compute the following bits of information:
• We need an output path (the field output). The output path reflects all information
that goes into the derivation. Therefore we initially leave output empty 57 , then
compute a cryptographic hash over this initial derivation and use makePath to con-
struct the actual path 64 . The hash is computed using the function hashDrv, which
conceptually just computes a SHA-256 hash over a concrete syntax representation
of the store derivation. However, the notion of fixed-output derivations complicates
matters a bit. The actual operation of hashDrv is discussed below. The final output
path is also placed in the out environment variable, in order to communicate the path
to the builder.
• We need to compute the environment variable bindings envVars 63 . envVars is a set
of name/value tuples. Each attribute n = e in {as} is mapped to an environment
variable binding. This translation is done by the function processBinding shown in
Figure 5.7. It takes e and returns, among other things, a string representation of
5Aficionados of purely functional programming languages may be dismayed by the impurity of producing a
store derivation as a side-effect. However, the side-effect is not observable in the Nix expression evaluation,
and the idempotency of the operation ensures that equational reasoning still holds.
instantiate(as) :
name ← eval(
if name ends in ".drv" : abort
if name contains invalid characters : abort
d ← StoreDrv { 56
output = "" 57
outputHash = eval(as.outputHash) if attr. exists, "" otherwise
outputHashAlgo = 58
    ("r:" if eval(as.outputHashMode) = "recursive", "" otherwise) +
    (eval(as.outputHashAlgo) if attr. exists, "" otherwise)
inputDrvs = ⋃ {processBinding(e).drvs | n = e ∈ as} 59
inputSrcs = ⋃ {processBinding(e).srcs | n = e ∈ as} 60
system = eval(as.system) // Must evaluate to a string.
builder = concSp(processBinding(as.builder).res) 61
args = map(λ e . processBinding(e).res, as.args) 62
envVars = {(n, concSp(processBinding(e).res)) | n = e ∈ as} 63
∪ {("out", "")}
}
d.output ← makePath("output:out", hashDrv(d), name) 64
d.envVars["out"] ← d.output
p ← addToStore(printDrv(d), d.inputDrvs ∪ d.inputSrcs, name+".drv")
return as ∪ {drvPath = p, outPath = d.output, type = "derivation"}
Figure 5.6.: instantiate: Instantiation of store derivations from a Nix expression
the normal form of e. processBinding is discussed in more detail below. However, it
should be pointed out here that processBinding returns a list of strings, since attribute
values can be lists of values. For instance, it is common to communicate a list of
paths of components to a builder:
buildInputs = [libxml2 openssl zlib];
Since the value of an environment variable is a plain string, the list of
strings returned by processBinding is concatenated with spaces separating the el-
ements; e.g., "/nix/store/ngp50703j8qn...-libxml2-2.6.17/nix/store/8ggmblhvzciv...-
openssl-0.9.7f/nix/store/kiqnkwp5sqdg...-zlib-1.2.1" where ⊔ denotes a space.
• Likewise, processBinding is used to set the command-line arguments from the args
attribute 62 , and the builder from the builder attribute 61 . Since args is a list, we do
not need to concatenate the strings returned by processBinding.
• The system field is initialised from the system attribute.
• The set of input derivations inputDrvs (the build-time dependencies) is the set of
store paths of derivations that occur in the attributes 59 . These are discovered by
processBinding.
processBinding(e) :
    if e = true :
        return {drvs = ∅, srcs = ∅, res = ["1"]}
    if e = false :
        return {drvs = ∅, srcs = ∅, res = [""]}
    if e = a string s :
        return {drvs = ∅, srcs = ∅, res = [s]}
    if e = null :
        return {drvs = ∅, srcs = ∅, res = [""]}
    if e = a path p :
        p′ ← addToStore(readPath(p), ∅, baseName(p)) 66
        return {drvs = ∅, srcs = {p′}, res = [p′]}
    if e = an attribute set {as} with as.type = "derivation" :
        // Note that {as} is the result of a call to derivation.
        return {drvs = {eval(as.drvPath)}, srcs = ∅, res = [eval(as.outPath)]}
    if e = a list [ls] : 68
        ps ← map(processBinding, ls)
        return { drvs = ⋃_{p ∈ ps} p.drvs, srcs = ⋃_{p ∈ ps} p.srcs
               , res = concatMap(λ p . p.res, ps) }
Figure 5.7.: processBinding: Processing derivation attributes
• The set of input sources inputSrcs is the set of store paths of sources that occur in
the attributes 60 . These are also discovered by processBinding.
Processing the attributes Clearly, the core of the translation is the function process-
Binding, shown in Figure 5.7. It evaluates a Nix expression, inspects it, and returns three
pieces of information in a structure:
• A list of strings that represent the normal form of the expression (res).
• The set of store paths of derivations occurring in the expression (drvs).
• The set of store paths of sources occurring in the expression (srcs).
Simple values like strings, Booleans, and null map to singleton string lists. For instance,
true is translated to ["1"], and false is translated to [""]. The latter is so that we can write in
shell scripts:
if test -n "$var"; then ... fi
to test whether Boolean variable $var is set.
For lists, processBinding is applied recursively to the list elements 68 . The resulting
string lists are concatenated into a single list. Thus, nested lists are allowed; they are
flattened automatically.
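The treatment of simple values and nested lists can be sketched as follows; only the scalar and list cases of processBinding are covered here, with the path and derivation cases omitted. The function names are ours.

```python
def process_binding(e) -> list:
    """Sketch of processBinding restricted to its res component, for the
    scalar and list cases of Figure 5.7."""
    if e is True:
        return ["1"]
    if e is False or e is None:
        return [""]            # empty string, so `test -n` fails in shell
    if isinstance(e, str):
        return [e]
    if isinstance(e, list):
        out = []
        for x in e:
            out += process_binding(x)   # nested lists are flattened
        return out
    raise TypeError(e)

def env_value(e) -> str:
    """An environment variable value: the space-separated concatenation
    of the string list returned by process_binding."""
    return " ".join(process_binding(e))
```

For example, process_binding([True, ["a", False]]) yields ["1", "a", ""], showing both the Boolean encoding and the automatic flattening of nested lists.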
For paths, processBinding copies the referenced FSO to the Nix store using addToStore
and returns the resulting path in srcs and res 66 . Thus all referenced sources end up in the
Nix store. The file name of the source (returned by the function baseName) is used as the
symbolic name of the store path.
For attribute sets, processBinding checks whether the type attribute of the derivation (if
present) is equal to derivation. If so, the value denotes the result of a call to derivation. This
is where instantiate recurses into subderivations: instantiate calls processBinding, which
calls eval, which eventually calls instantiate for derivations occurring in the input attributes.
Let us look at an example. Suppose that we have a derivation attribute
args = ["-e" ./ python];
where python is a variable that evaluates to another derivation. Since this is a list, process-
Binding will recurse into the list elements. This yields the following results. For "-e", we get
{drvs = ∅, srcs = ∅, res = ["-e"]}.
For ./, the file in the current directory is added to the Nix store using
addToStore. Supposing that the file has SHA-256 hash 49b96daeab53... in hexadecimal,
addToStore will call makePath with the following arguments:
makePath("source", "49b96daeab53...", "").
makePath constructs the hash part by hashing and truncating the string described above,
yielding the hash part 043577cf3vlb...
The store path thus becomes /nix/store/, and
processBinding returns
{ drvs = ∅, srcs = {/nix/store/}
, res = ["/nix/store/"] }.
For python, if we suppose that it evaluates to an attribute set with drvPath = /nix/store/mi4p0ck4jqds...-python-2.4.1.drv and outPath = /nix/store/s3dc4m2zg9bb...-python-2.4.1,
processBinding returns
{ drvs = {/nix/store/mi4p0ck4jqds...-python-2.4.1.drv}, srcs = ∅
, res = ["/nix/store/s3dc4m2zg9bb...-python-2.4.1"] }.
Note that the path of the store derivation is placed in drvs (and will therefore eventually end
up in the inputDrvs of the referring derivation), since the derivation is needed to recursively
build this dependency when we build the current derivation. On the other hand, the path of
the output of the derivation is placed in res (and will end up in an environment variable or
command-line argument), since that is what the builder is interested in.
Figure 5.8.: ATerm representation of the Hello derivation
Finally, the results of the recursive calls are combined for the list, and we get
{ drvs = {/nix/store/mi4p0ck4jqds...-python-2.4.1.drv}
, srcs = {/nix/store/}
, res = ["-e" "/nix/store/"
    "/nix/store/s3dc4m2zg9bb...-python-2.4.1"] }.
Writing the derivation When the final output path of the derivation has been computed,
we write a string representation of the store derivation to the Nix store using addTo-
Store 65 . The function printDrv returns a byte sequence that represents the store derivation.
The contents of the byte sequence is a textual ATerm. ATerms [166] are a simple format
for the exchange of terms. For the Hello example, the ATerm representation is shown in
Figure 5.8.⁶
I will not formalise the translation to ATerms here, which is straightforward. However,
the reader may be wondering about the presence of the "out" strings in the ATerm encoding
of the inputDrvs field, as these do not occur in the abstract syntax, and about the fact that
output is a list here, also containing the mysterious "out" string. This is actually for future
compatibility with a Nix semantics that supports multiple output paths. Every derivation
would produce a set of labelled outputs, the default output having label out. Multiple
outputs allow more fine-grained deployment. In Section 6.7, I show a semantics that
supports multiple outputs.
Note that in the call to addToStore, we set the references of the new store object to the
union of the store derivations (inputDrvs) and sources (inputSrcs) used by the derivation.
Thus, the references graph is maintained for store derivations. This means that we can
query the closure of a derivation, which is the set of files necessary to do a build of the
6Whitespace has been added to improve legibility. Actual textual ATerms as produced by the function ATwrite-
ToString() in the ATerm API do not contain whitespace. In other words, ATwriteToString() produces a canonical
representation of an ATerm, and thus of the store derivation it represents.
derivation and all its dependencies. As we saw in Section 2.4, this is useful for source
deployment.
On the other hand, the path in the output field is not a reference of the store derivation,
since the output in general does not exist yet, and because we want garbage collection of
derivations and outputs to be independent.
5.4.1. Fixed-output derivations
The only remaining aspect of the translation is the computation of the output path by
hashDrv. As stated above, conceptually the output path is constructed from a SHA-256
hash over the ATerm representation of the initial store derivation (i.e., the one with output
set to the empty string 57 ). However, the matter is complicated by the notion of fixed-
output derivations.
Fixed-output derivations are derivations of which we know the output in advance. More
precisely, the cryptographic hash of the output path is known to the Nix expression writer.
The rationale for fixed-output derivations is derivations such as those produced by the
fetchurl function. This function downloads a file from a given URL. To ensure that the
downloaded file has not been modified, the caller must also specify a cryptographic hash
of the file. For example,
fetchurl {
  url = ...;
  md5 = "70c9ccf9fac07f762c24f2df2290784d";
}
It sometimes happens that the URL of the file changes, e.g., because servers are reorganised
or no longer available. We then must update the call to fetchurl, e.g.,
fetchurl {
  url = ...;
  md5 = "70c9ccf9fac07f762c24f2df2290784d";
}
If a fetchurl derivation followed the normal translation scheme, the output paths of the
derivation and all derivations depending on it would change. For instance, if we were to
change the URL of the Glibc source distribution—a component on which almost all other
components depend—massive rebuilds would ensue. This is unfortunate for a change which
we know cannot have a real effect as it propagates upwards through the dependency graph.
Fixed-output derivations solve this problem by allowing a derivation to state to Nix that
its output will hash to a specific value. When Nix builds the derivation (Section 5.5), it
will hash the output and check that the hash corresponds to the declared value. If there is
a hash mismatch, the build fails and the output is not registered as valid. For fixed-output
derivations, the computation of the output path only depends on the declared hash and hash
algorithm, not on any other attributes of the derivation.
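For illustration, this rule can be sketched in a few lines of Python. The fragment below is a simplified model, not the actual implementation: real Nix additionally truncates and base-32-encodes the hash to form the store path, and the exact string fed to the hash differs in detail.

```python
import hashlib

def fixed_output_hash(output_hash_algo, output_hash, output=""):
    # Only the declared hash algorithm and hash (plus the output
    # field) enter the computation; url, builder, etc. do not.
    s = "fixed:out:" + output_hash_algo + ":" + output_hash + ":" + output
    return hashlib.sha256(s.encode()).hexdigest()

# Two fetchurl-style derivations that differ only in their url
# therefore collapse to the same output path identity:
a = fixed_output_hash("md5", "70c9ccf9fac07f762c24f2df2290784d")
b = fixed_output_hash("md5", "70c9ccf9fac07f762c24f2df2290784d")
assert a == b
```

Changing the URL of such a derivation thus leaves the computed identity, and hence the output path, unchanged.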
Figure 5.9 shows the definition of the fetchurl function. The expression presented here
is somewhat simplified from the actual expression in Nixpkgs, which accepts SHA-1 and
SHA-256 hashes in addition to MD5 hashes. A fixed-output derivation is declared by
defining the following attributes:
5.4. Translating Nix expressions to store derivations
{stdenv, curl}: # The `curl' program is used for downloading.

{url, md5}:

stdenv.mkDerivation {
  name = baseNameOf (toString url);
  builder = ./;
  buildInputs = [curl];

  # This is a fixed-output derivation;
  # the output has to have MD5 hash `md5'.
  outputHashMode = "flat";
  outputHashAlgo = "md5";
  outputHash = md5;

  inherit url;
}
Figure 5.9.: pkgs/build-support/fetchurl/default.nix: fixed-output derivations in fetchurl
- The attribute outputHashAlgo specifies the hash algorithm: outputHashAlgo ∈ {"md5", "sha1", "sha256"}.

- The attribute outputHash specifies the hash in either hexadecimal or base-32 notation, as autodetected based on the length of the string.

- The attribute outputHashMode ∈ {"recursive", "flat"} states whether the hash is computed over the serialisation of the output path p, i.e., hash(serialise(readPath(p))), or over the contents of the single non-executable regular file at p, respectively. The latter (which is the default) produces the same hashes as standard Linux commands such as md5sum and sha1sum.
Recursive mode is useful for tools that download directory hierarchies instead of
single files. For instance, Nixpkgs contains a function fetchsvn that exports revisions
of directory hierarchies from Subversion repositories, e.g.,
fetchsvn {
  url = ...;
  rev = 3297;
  md5 = "2a074a3df23585c746cbcae0e93099c3";
}
For historical reasons, the outputHashMode attribute is encoded in the outputHashAlgo field 58 .
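The difference between the two hashing modes can be sketched as follows. This is an illustrative Python model: flat mode really is just the ordinary hash of the file contents, while the serialisation used here for recursive mode is a simplified stand-in for Nix's canonical serialisation (the real format differs).

```python
import hashlib

def hash_flat(content: bytes, algo: str) -> str:
    # Flat mode: hash the plain contents of a single regular file,
    # matching what md5sum / sha1sum would print for that file.
    return hashlib.new(algo, content).hexdigest()

def hash_recursive(fso, algo: str) -> str:
    # Recursive mode: hash a serialisation of an entire file system
    # object (here: bytes = regular file, dict = directory).
    def serialise(node):
        if isinstance(node, bytes):          # regular file
            return b"(file " + node + b")"
        parts = []                           # directory, sorted entries
        for name in sorted(node):
            parts.append(b"(entry " + name.encode() + b" "
                         + serialise(node[name]) + b")")
        return b"(dir " + b"".join(parts) + b")"
    return hashlib.new(algo, serialise(fso)).hexdigest()
```

Flat mode thus lets a fetchurl hash be checked against an upstream-published md5sum, while recursive mode covers directory hierarchies such as Subversion exports.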
These three attributes are used in the output path computation. This means that, for instance, the url attribute in a fetchurl derivation is disregarded. Thus, instantiation of the two fetchurl calls shown above produces two store derivations that have the same output path.
hashDrv(d) :
  if d.outputHash ≠ "" :
    return hashsha256("fixed:out:" + d.outputHashAlgo
      + ":" + d.outputHash
      + ":" + d.output)
  else :
    d′ ← d { // I.e., d′ is a modified copy of d
      inputDrvs = {hashDrv(parseDrv(readPath(p))) | p ∈ d.inputDrvs} } 69
    return hashsha256(printDrv(d′))
Figure 5.10.: hashDrv: Hashing derivations modulo fixed-output derivations
So how do we compute the hash part of the output path of a derivation? This is done
by the function hashDrv, shown in Figure 5.10. It distinguishes between two cases. If the derivation is a fixed-output derivation, then it computes a hash over just the outputHashAlgo, outputHash, and output fields.
If the derivation is not a fixed-output derivation, we replace each element in the deriva-
tion’s inputDrvs with the result of a call to hashDrv for that element.
(The derivation at
each store path in inputDrvs is converted from its on-disk ATerm representation back to a
StoreDrv by the function parseDrv.) In essence, hashDrv partitions store derivations into
equivalence classes, and for hashing purposes it replaces each store path in a derivation
graph with its equivalence class.
The recursion in Figure 5.10 is inefficient: it will call itself once for each path by which
a subderivation can be reached, i.e., O(V^k) times for a derivation graph with V derivations
and with out-degree of at most k. In the actual implementation, memoisation is used to
reduce this to O(V + E) complexity for a graph with E edges.
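The memoised recursion can be sketched as follows. This is a Python model in which a StoreDrv is a dictionary with an inputDrvs set, and repr stands in for printDrv; the point is only the memoisation structure, not the exact hash inputs.

```python
import hashlib

def hash_drv(drv_path, store, memo=None):
    # `store` maps a derivation's store path to its StoreDrv.
    # Memoisation ensures each derivation is hashed exactly once,
    # giving O(V + E) work instead of once-per-path.
    if memo is None:
        memo = {}
    def go(path):
        if path not in memo:
            d = store[path]
            # Replace each input derivation by its equivalence-class hash.
            hashed_inputs = sorted(go(p) for p in d["inputDrvs"])
            text = repr((d["name"], hashed_inputs))  # stand-in for printDrv
            memo[path] = hashlib.sha256(text.encode()).hexdigest()
        return memo[path]
    return go(drv_path)
```

A shared subderivation such as Glibc, reachable through many paths in the graph, is hashed once and its cached hash reused thereafter.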
5.5. Building store derivations
After we have translated a Nix expression to a graph of store derivations, we want to
perform the build actions described by them. The command nix-store --realise does this
explicitly, and nix-env and nix-build do it automatically after store derivation instantiation.
The postcondition of a successful build of a derivation is that the output path of the
derivation is valid. This means that a successful build of a derivation is performed only once (at least
until the user runs the garbage collector). If the output is already valid, the build is a trivial
operation. If it is not, the build described by the derivation is executed, and on successful
completion the output path is registered as being valid, preventing future rebuilds.
The build algorithm is also where binary deployment using substitutes is implemented.
If we want to build a derivation with output path p, p is not valid, and we have a substitute
for p, then we can execute the substitute to create p (which the substitute will generally
7The hash computed by hashDrv covers the derivation's attributes as well as the output field. This field is empty when hashDrv is called for "unfinished" derivations from instantiate 64 , but it contains an actual path when hashDrv calls itself recursively for "finished" derivations 69 . This ensures that the name attribute and the location of the store are taken into account.
do by downloading a pre-built binary and unpacking it to p). Substitution is discussed
in Section 5.5.3; here we just assume that there exists a function substitute(p) : Bool that
returns true if path p is valid or it has managed to make p valid by executing a substitute,
and false otherwise.
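The contract of substitute can be modelled concretely. In this Python sketch, valid, substitutes, and run are hypothetical stand-ins for the validity table, the set of registered substitutes, and the execution of a substitute program; they are not the actual Nix interfaces.

```python
def substitute(p, valid, substitutes, run):
    # Return True if p is already valid, or if some registered
    # substitute succeeds in making p valid; False otherwise.
    if p in valid:
        return True                 # trivial case: already valid
    for s in substitutes.get(p, []):
        if run(s):                  # e.g., download and unpack to p
            valid.add(p)            # register p as valid
            return True
    return False
```

The build algorithm only relies on this Boolean contract; how a substitute actually produces the path is immaterial here.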
This section shows how Nix builds store derivations. It first presents a “simple” build
algorithm that recursively builds store derivations. Next, it introduces the concept of build
hooks, which are a mechanism to support distributed multi-platform builds in a transparent and centralised manner. Finally, it shows a build algorithm that can perform multiple
derivations in parallel.
5.5.1. The simple build algorithm
The build algorithm is shown in Figure 5.11. The function build takes as its sole argument
the store path pdrv of the derivation to be built. It returns the output path of the derivation
if the build succeeds, and aborts if it does not. It consists of the following main steps:
First, we need to ensure that the derivation pdrv exists. If it does not, it might be
possible to obtain it through a substitute. It is a fatal error if pdrv cannot be made
valid. In any case, when pdrv is valid, it is read into d, a value of the type StoreDrv
shown in Figure 5.5.
While the primary purpose of substitutes is to speed up the construction of derivation outputs, they may also be used to distribute the store derivations themselves. In general, what we distribute to client machines are Nix expressions, and the translation of those will cause the store derivations to be created locally. However, it is also possible to forego Nix expressions entirely and work directly on store derivations.
For instance, nix-env allows one to install a store derivation directly:
$ nix-channel --update
$ nix-env -i /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv
If /nix/store/1ja1w63wbk5q...-hello-2.1.1.drv is not valid, it is possible that a sub-
stitute for it is available (e.g., obtained from a subscribed channel). The primary
advantage of skipping Nix expressions in this manner is that it makes one independent of the evolution of the Nix expression language, removing a possible source
of incompatibility between the distributor and the clients. This is in fact one of the
main advantages of the two-step build process.
We try to substitute the output path of the derivation. If it is already valid (as stated
above, substitute checks for this trivial case), or it can be produced through a substi-
tute, we’re done right away.
Since the output is not valid and there is no substitute, we have to perform the build
action. We need to build all input derivations in inputDrvs, or, more precisely, we
need to ensure that the output paths of the input derivations are valid. So we recursively build the input derivations.
Note that it is not necessary to ensure that the input sources in inputSrcs are valid.
This is because of the closure invariant: the inputSrcs are all in the references
build(pdrv) :
  if ¬ substitute(pdrv) : abort 70
  d ← parseDrv(readPath(pdrv))
  if substitute(d.output) : return d.output 71
  inputs ← ∅
  for each p ∈ d.inputDrvs :
    d′ ← parseDrv(readPath(p))
    build(p) 72
    inputs ← inputs ∪ closure(d′.output) 73
  for each p ∈ d.inputSrcs :
    inputs ← inputs ∪ closure(p) 74
  lock ← acquirePathLock(d.output) 75
  if valid[d.output] ≠ ε :
    releasePathLock(d.output, lock)
    return d.output
  if d.system ≠ thisSystem : abort 76
  if pathExists(d.output) : deletePath(d.output) 77
  ptmp ← createTempDir() 78
  Run d.builder in an environment d.envVars 79
    and with arguments d.args and in directory ptmp
  if the builder exits with exit code ≠ 0 : abort
  canonicalisePathContents(d.output) 80
  if d.outputHash ≠ "" : 81
    Extract mode m and hash algorithm t from d.outputHashAlgo
    if m indicates flat mode :
      Check that d.output is a non-executable regular file
        and that hasht(readFile(d.output)) matches d.outputHash
    else :
      Check that hasht(serialise(readPath(d.output))) matches d.outputHash
  In a database transaction : 82
    valid[d.output] ← hashsha256(serialise(readPath(d.output)))
    setReferences(d.output, scanForReferences(d.output, inputs))
    deriver[d.output] ← pdrv
  releasePathLock(d.output, lock)
  return d.output
Figure 5.11.: build: Building store derivations
entry for pdrv (cf. Figure 5.6), and thus validity of pdrv implies validity of each
p ∈ d.inputSrcs. Likewise, we know for sure that we can load the input derivations,
since these too are references of pdrv.
We acquire an exclusive lock on the output path. Once we have acquired the lock, we
know that there is no other Nix process building the output path (including through
substitutes, since as we will see, substitute also locks properly).
However, once we have acquired the lock, it is possible that another process got
there first, i.e., saw that the output was invalid, acquired the lock, and did the build.
This can be detected by checking once more whether the output path is valid. If it
is, we’re done.
Thanks to locking and the transactional semantics of registering valid paths (step
82 described below), an interrupted operation (such as building a derivation) can
always be restarted, requiring no further recovery. It also ensures that all operations
are idempotent, i.e., can be repeated any number of times. Operations can also be
run in parallel. In particular, it is safe to run multiple build actions in parallel. Since
locking is fine-grained, as opposed to having a global lock for the entire Nix store,
parallel execution can be used to speed up builds. If more than one CPU is available,
this gives better resource utilisation. However, even if there is only one CPU, we may
get a performance improvement if some of the builds are heavily I/O-bound [6].
It is only possible to build the derivation if the current machine and operating system
match the system field of the derivation. A Nix installation on an i686-linux machine
cannot build a derivation for a powerpc-darwin machine. The platform type of the
running Nix instance is denoted by thisSystem. However, Section 5.5.2 describes an
extension to the build algorithm that enables distributed multi-platform builds.
It is possible that the output path already exists, even though it is not valid. Typically,
this is because a previous build was interrupted before the builder finished or before
the output could be registered as valid. Such a residual output can interfere with the
build, and therefore must be deleted first.
Builds are executed in a temporary directory ptmp that is created here (typically, it is
a fresh subdirectory of /tmp). ptmp is deleted automatically after the builder finishes
(not shown here). The temporary directory provides a place where the builder can
write scratch data. For instance, the unpacking and compiling of the Hello sources
performed by the Hello builder in Figure 2.7 is done in the temporary directory.
The fact that the temporary directory is initially empty reduces the possibility of
interference due to files pre-existing in the builder’s initial current directory.
Here we actually perform the build by running the executable indicated by the builder
field of the derivation, with the command-line arguments args and the environment
variables envVars, and with the current directory set to ptmp. Note that envVars
specifies the complete environment; the builder does not inherit any environment
variables from the caller of the Nix process. This prevents interference from
environment variables such as PATH.
In the real implementation some additional variables are initialised to specific val-
ues. The variable TMPDIR is set to ptmp so that tools that need temporary space write
to that directory, reducing the possibility of interference, left-over files, and security
problems due to /tmp races. The variable HOME, which points to the user’s home
directory, is set to the meaningless value /homeless-shelter. Many programs look
up the home directory corresponding to the current user ID in /etc/passwd or similar
if HOME is not set. Setting HOME prevents such lookups and reduces the possibil-
ity of interference from tools that read configuration files from the home directory.
Likewise, PATH is set to the value /path-not-set, since the Unix shell initialises PATH
to an undesirable default such as /usr/local/bin:/bin:/usr/bin, which typically causes
interference. Finally, the variable NIX_STORE is set to storeDir. Some tools need
the location of the store for various purposes (see, e.g., Section 7.1.3).
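The construction of the builder's environment can be summarised in a short sketch. The special values are the ones described above; the function itself and its parameter names are hypothetical.

```python
def builder_environment(env_vars, tmp_dir, store_dir="/nix/store"):
    # Start from an *empty* environment: nothing is inherited from
    # the caller of the Nix process. Then add the derivation's own
    # envVars and the special variables. Values such as
    # /homeless-shelter are deliberately meaningless.
    env = {}
    env.update(env_vars)
    env["TMPDIR"] = tmp_dir            # scratch space for the build
    env["HOME"] = "/homeless-shelter"  # block home-directory lookups
    env["PATH"] = "/path-not-set"      # block the shell's default PATH
    env["NIX_STORE"] = store_dir       # some tools need the store location
    return env
```

Any variable not explicitly listed, such as LD_LIBRARY_PATH set in the user's shell, simply does not exist for the builder.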
If the build finishes successfully, the output is canonicalised by the function canon-
icalisePathContents. It sets all file system meta-information not represented in the
canonical serialisation to fixed values. In particular, it does the following:
- The “last modified” timestamps on files are set to 0 seconds in the Unix epoch,
meaning 1/1/1970 00:00:00 UTC.
- The permission on each file is set to octal 0444 or 0555, meaning that the file
is readable by everyone and writable by nobody, but possibly has execute permission. Removing write permission prevents the path from being accidentally
modified when it is used as an input to other derivations.
- Special permissions, such as set-user-ID and set-group-ID [152], are removed.
- A fatal error is flagged if a file is found that is not a regular file, a directory, or
a symlink.
The actual implementation of canonicalisePathContents(p) is not shown here, but its effect is equal to deserialising a canonical serialisation of the path:
writePath(p, readPath(p))
where writePath takes care to create files with canonical values for meta-information
not represented in the canonical serialisation.
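A minimal sketch of this canonicalisation in Python might look as follows. It covers the timestamp and permission rules listed above and rejects unsupported file types; symlinks are left untouched in this sketch, which simplifies the real behaviour.

```python
import os, stat

def canonicalise_path_contents(path):
    # Reset timestamps to the Unix epoch and permissions to
    # 0444/0555; flag anything that is not a regular file,
    # directory, or symlink.
    def canon(p):
        st = os.lstat(p)
        if stat.S_ISLNK(st.st_mode):
            return                    # symlinks: skipped in this sketch
        if not (stat.S_ISREG(st.st_mode) or stat.S_ISDIR(st.st_mode)):
            raise RuntimeError("unsupported file type: " + p)
        executable = bool(st.st_mode & 0o111) or stat.S_ISDIR(st.st_mode)
        os.chmod(p, 0o555 if executable else 0o444)
        os.utime(p, (0, 0))           # "last modified" = 1/1/1970 UTC
    # Bottom-up, so parent directory timestamps are fixed last.
    for root, dirs, files in os.walk(path, topdown=False):
        for name in files + dirs:
            canon(os.path.join(root, name))
    canon(path)
```

Processing children before their parents ensures that touching a file does not re-dirty the mtime of an already canonicalised directory.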
If d is a fixed-output derivation (Section 5.4.1), then we have to check that the output
produced by the builder matches the cryptographic hash declared in the derivation.
Recall that there are two modes of fixed-output derivation: flat mode, which requires the output to be a single non-executable regular file, and recursive mode, which applies to arbitrary
outputs. Flat mode hashes the plain file contents (read by the function readFile),
while recursive mode hashes the serialisation of the entire FSO.
Since the build has finished successfully, we must register the path as valid and set its
references mapping. This must be done in a database transaction to maintain the in-
variants. Thanks to the ACID properties of the underlying Berkeley DB database, it
is always safe to interrupt any Nix operation, and Nix can always recover gracefully
scanForReferences(p, paths) :
  c ← serialise(readPath(p))
  refs ← ∅
  for each p′ ∈ paths :
    if find(c, hashPart(p′)) ≠ ∅ :
      refs ← refs ∪ {p′}
  return refs
Figure 5.12.: scanForReferences: Scanning for store path references in build results
from non-corrupting crashes, including power failures. In the case of operating system or hardware crashes, this requires the file system containing the Nix store to be
data-journaled [142], or to otherwise guarantee that once data has been committed
to disk, no rollback to previous contents will occur.
Following the stored hash invariant (page 96), the valid entry for the output path is
set to the SHA-256 of the serialisation of the path contents.
The references entry is set to the set of store paths referenced from the output con-
tents. As described in Chapter 3, the references are found by scanning for the hash
parts of the input paths. The set of input paths is the union of the closures of the
input sources 74 and the outputs of the input derivations 73 . Note that it is quite
possible for the output to contain a reference to a path that is not in inputSrcs or
inputDrvs, but that is in the closure of those paths. Such a reference can be obtained
by the builder indirectly by dereferencing one of its immediate inputs. Given the
full set of input paths, the function scanForReferences returns the set of referenced paths.
The deriver mapping maintains a link from outputs to the derivations that built them.
It is described below.
Scanning for references The function scanForReferences, shown in Figure 5.12, determines the references in a path p to other store paths.
We only need to scan for store paths that are inputs, i.e., that are in the closure of inputSrcs and the outputs of inputDrvs. It is not necessary to scan for references to arbitrary
store paths, since the only way that a builder can introduce a reference to a path not in the
input set, is if it were to read the Nix store to obtain arbitrary store paths. While it is easy
to construct a contrived builder that does this, builders do not do so in practice. Indeed,
if this were an issue, we could block such behaviour by making the Nix store unreadable
but traversable (i.e., by clearing read permission but enabling execute permission on the
directory on Unix systems).
Thus scanForReferences also takes a set of potentially referenced paths. It then searches
for the hash part of each path in the serialisation of the contents of the FSO at p. To reiterate from Section 3.4, we only care about the hash part because that's the most easily
identifiable part of a store path, and least likely to be hidden in some way. The search for
the hash part is done using an ordinary string searching algorithm, denoted as find(s,t),
which yields the integer offsets of the occurrences of the byte sequence t in the byte se-
quence s (an empty set indicating no matches). If and only if there is a match, we conclude
that the path p has a reference to the store path in question.
For the string search, the Boyer-Moore algorithm [18] is particularly suited because the
hash parts consist of a small subset of the set of byte values, allowing the algorithm to
jump through the input quickly in many cases. For instance, if we encounter any byte not
in the set of base-32 characters (see digits32 in Section 5.1), we can immediately skip 32
characters. Especially for binary files this will happen quite often. Thus it is reasonable to
expect that the search achieves Boyer-Moore’s Θ(n/m) average running time in most cases
(where n is the length of the serialisation in bytes, and m is the length of the hash part).
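The scanning scheme can be sketched as follows. This Python fragment uses bytes.find as a stand-in for the Boyer-Moore search, and its hash_part helper assumes store paths of the form /nix/store/hash-name; the store paths in the example are hypothetical.

```python
def scan_for_references(contents: bytes, paths):
    # Search the serialised contents for the hash part of each
    # candidate store path; any occurrence counts as a reference.
    def hash_part(p):
        # /nix/store/<hash>-<name>  ->  <hash>
        return p.rsplit("/", 1)[-1].split("-", 1)[0].encode()
    refs = set()
    for p in paths:
        if contents.find(hash_part(p)) != -1:
            refs.add(p)
    return refs
```

Only the hash part is searched for, so a reference is found regardless of whether the builder stored the full path, a suffix of it, or embedded it in the middle of a binary.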
If it is desirable to support encodings of the hash part other than plain ASCII (and to date, it has not been necessary to do so), scanForReferences can be adapted appropriately.
For instance, to detect UTF-16 encodings of the hash part, we should just scan for the byte
sequence s[0] 0 s[1] 0 s[2] 0 . . . s[31] 0, where s = hashPart(p), p ∈ paths, and 0 denotes
a byte with value 0 8 .
Traceability For many applications it is important to query which derivation built a given
store path (if any). Section 11.1 gives the example of blacklisting, which allows users to
determine whether the build graph of a component contains certain “bad” sources. In
Software Configuration Management, this is known as traceability—the ability to trace
derived artifacts back to the source artifacts from which they were constructed.
To enable traceability, Nix maintains a database table deriver : Path → Path that maps output paths built by derivations to those derivations. The following invariant maintains the consistency of this mapping.
Invariant 5 (Traceability invariant). The deriver of a valid path p, if defined, must be a
valid store derivation with output path p.
∀p ∈ Path : (valid[p] ≠ ε ∧ deriver[p] ≠ ε) →
(valid[deriver[p]] ≠ ε ∧ parseDrv(readPath(deriver[p])).output = p)
However, this invariant is actually optional in the current implementation, because not
all users want traceability. The disadvantage of full traceability is that the entire derivation
graph of each output path has to be kept in the store for the lifetime of those paths.
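Invariant 5 can be restated as a checking function. The following Python sketch models the valid and deriver tables as a set and a dictionary, and parse_drv as a hypothetical function mapping a derivation path to its StoreDrv.

```python
def check_traceability(valid, deriver, parse_drv):
    # For every valid path p with a recorded deriver, the deriver
    # itself must be valid and its output field must equal p.
    for p in valid:
        d = deriver.get(p)
        if d is not None:
            assert d in valid, "deriver of %s is not valid" % p
            assert parse_drv(d)["output"] == p
    return True
```

Such a check only succeeds while the full derivation graph is retained, which is exactly the cost of traceability noted above.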
It is worth noting at this point that Nix in no way maintains a link between the build
graph (store derivations and sources) in the Nix store on the one hand, and the Nix expres-
sions from which they were instantiated on the other hand. If the latter are under version
management control, there is no way to trace store paths back to the artifacts in the version
management system from which they ultimately originated. This is most certainly a desirable feature, but it is not clear how such information can be maintained without coupling
Nix too closely to the version management system (as is the case in Vesta and ClearCase,
which integrate version management and build management, as discussed in Section 7.6).
8This assumes a little-endian encoding of UTF-16 (i.e., UTF-16LE) [34]. For the big-endian variant (UTF-
16BE), the 0s should prefix the hash characters.
let {
  pkgsLinux = ...;
  pkgsDarwin = ...;

  fooC = derivation {
    name = "fooC";
    system = "i686-linux";
    src = ./foo.tar.gz;
    builder = ...;
    inherit (pkgsLinux) stdenv ... tiger;
  };

  fooBin = derivation {
    name = "foo";
    system = "powerpc-darwin";
    src = fooC;
    builder = ...;
    inherit (pkgsDarwin) stdenv ...;
  };

  body = fooBin;
}
Figure 5.13.: Example of a distributed, two-platform build
5.5.2. Distributed builds
The build algorithm in Figure 5.11 builds derivations only for the current platform (thisSystem). Through a relatively simple extension called build hooks, Nix supports distributed
multi-platform builds.
Figure 5.13 shows a hypothetical example of a Nix expression that requires a distributed
multi-platform capable build system. The scenario is that we want to compile a package
foo, written in a hypothetical language called Tiger, for Mac OS X (powerpc-darwin). Unfortunately, the Tiger compiler does not run on that platform, but it does run on Linux
(i686-linux) and can compile to C code. Thus, we have two derivations. The first derivation