NixOS Planet

April 30, 2020

Sander van der Burg

Deploying container and application services with Disnix

As described in many previous blog posts, Disnix's purpose is to deploy service-oriented systems -- systems that can be decomposed into inter-connected service components, such as databases, web services, web applications and processes -- to networks of machines.

To use Disnix effectively, two requirements must be met:

  • A system must be decomposed into independently deployable services, and these services must be packaged with Nix.
  • Services may require other services that provide environments with essential facilities to run them. In Disnix terminology, these environments are called containers. For example, to host a MySQL database, Disnix requires a MySQL DBMS as a container; to run a Java web application archive, it needs a Java Servlet container, such as Apache Tomcat; and to run a daemon, it needs a process manager, such as systemd, launchd or supervisord.

Disnix was originally designed to only deploy the (functional) application components (called services in Disnix terminology) of which a service-oriented system consists, but it was not designed to handle the deployment of any underlying container services.

In my PhD thesis, I called Disnix's problem domain service deployment. Another problem domain that I identified was infrastructure deployment, which concerns the deployment of machine configurations, including container services.

Because these problem domains are separated, fully deploying a service-oriented system from scratch basically requires doing infrastructure deployment first, e.g. installing a collection of machines with system software and container services, such as MySQL and Apache Tomcat. Once that is done, we can use these machines as deployment targets for Disnix.

There are a variety of solutions available to automate infrastructure deployment. Most notably, NixOps can be used to automatically deploy networks of NixOS configurations, and (if desired) automatically instantiate virtual machines in a cloud/IaaS environment, such as Amazon EC2.

Although combining NixOps for infrastructure deployment with Disnix for service deployment works great in many scenarios, there are still a number of concerns that are not adequately addressed:

  • Infrastructure and service deployment are still two (somewhat) separated processes. Although I have developed an extension toolset (called DisnixOS) to combine Disnix with the deployment concepts of NixOS and NixOps, we still need to run two kinds of deployment procedures. Ideally, it would be nice to fully automate the entire deployment process with only one command.
  • Although NixOS (and NixOps, which extends NixOS' concepts to networks of machines and the cloud) do a great job in fully automating the deployment of machines, we can only reap their benefits if we can permit ourselves to use NixOS, which is a particular Linux distribution flavour -- sometimes you may need to deploy services to conventional Linux distributions, or to different kinds of operating systems (after all, one of the reasons to use service-oriented systems is to be able to use a diverse set of technologies).

    The Nix package manager also works on operating systems other than Linux, such as macOS, but there is no Nix-based deployment automation solution that can universally deploy infrastructure components to other operating systems (the only other infrastructure deployment solution that provides similar functionality to NixOS is the nix-darwin repository, which can only be used on macOS).
  • The NixOS module system does not facilitate the deployment of multiple instances of infrastructure components. Although this is probably a very uncommon use case, it is also possible to run two MySQL DBMS services on one machine and use both of them as Disnix deployment targets for databases.

In a Disnix-context, services have no specific meaning or shape and can basically represent anything -- a satellite tool providing a plugin system (called Dysnomia) takes care of most of their deployment steps, such as their activation and deactivation.

A couple of years ago, I demonstrated with a proof-of-concept implementation that we can use Disnix and Dysnomia's features to deploy infrastructure components. This deployment approach is also capable of deploying multiple instances of container services to one machine.

Recently, I revisited that idea and extended it, so that we can now deploy a service-oriented system, including most of its underlying container services, with a single command-line instruction.

About infrastructure deployment solutions


As described in the introduction, Disnix's purpose is service deployment and not infrastructure deployment. In the past, I have been using a variety of solutions to manage the underlying infrastructure of service-oriented systems:

  • In the very beginning, while working on my master thesis internship (in which I built the first prototype version of Disnix), there was not much automation at all -- for most of my testing activities I manually created VirtualBox virtual machines and manually installed NixOS on them, with all essential container services, such as Apache Tomcat and MySQL, because these were the container services that my target system required.

    Even after some decent Nix-based automated solutions appeared, I still ended up doing manual deployments for non-NixOS machines. For example, I still remember the steps I had to perform to prepare myself for the demo I gave at NixCon 2015, in which I configured a small heterogeneous network consisting of an Ubuntu, a NixOS, and a Windows machine. It took me many hours of preparation time to get the demo right.
  • Some time later, for a research paper about declarative deployment and testing, we developed a tool called nixos-deploy-network that deploys NixOS configurations in a network of machines and is driven by a networked NixOS configuration file.
  • Around the same time, I also developed a similar tool, called disnixos-deploy-network, that uses Disnix's deployment mechanisms to remotely deploy a network of NixOS configurations. It was primarily developed to show that Disnix's plugin system, Dysnomia, could also treat entire NixOS configurations as services.
  • When NixOps appeared (initially it was called Charon), I also created facilities in the DisnixOS toolset to integrate with it -- for example, DisnixOS can automatically convert a NixOps configuration into a Disnix infrastructure model.
  • And finally, I created a proof-of-concept implementation that shows that Disnix can also treat every container service as a Disnix service and deploy it.

The idea behind the last approach is that we deploy two systems in sequential order with Disnix -- the former consisting of the container services and the latter of the application services.

For example, if we want to deploy a system that consists of a number of Java web applications and MySQL databases, such as the infamous Disnix StaffTracker example application (Java version), then we must first deploy a system with Disnix that provides the containers: the MySQL DBMS and Apache Tomcat:

$ disnix-env -s services-containers.nix \
  -i infrastructure-bare.nix \
  -d distribution-containers.nix \
  --profile containers

As described in earlier blog posts about Disnix, deployments are driven by three configuration files -- the services model captures all distributable components of which the system consists (called services in a Disnix-context), the infrastructure model captures all target machines in the network and their relevant properties, and the distribution model specifies the mappings of services in the services model to the target machines (and container services already available on the machines in the network).
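
To give an impression of what these models look like, the following is a minimal sketch of a bare infrastructure model for the two target machines used in this article (the exact set of properties depends on your network):

{
  test1.properties.hostname = "test1";
  test2.properties.hostname = "test2";
}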

All the container services provided in the services model shown above refer to systemd services that, in addition to running Apache Tomcat and MySQL, also do the following:

  • They bundle a Dysnomia plugin that can be used to manage the life-cycles of Java web applications and MySQL databases.
  • They bundle a Dysnomia container configuration file capturing all relevant container configuration properties, such as the MySQL TCP port the daemon listens to, and the Tomcat web application deployment directory.

For example, the Nix expression that configures Apache Tomcat has roughly the following structure:


{stdenv, dysnomia, httpPort, catalinaBaseDir, instanceSuffix ? ""}:

stdenv.mkDerivation {
name = "simpleAppservingTomcat";
...
postInstall = ''
# Add Dysnomia container configuration file for a Tomcat web application
mkdir -p $out/etc/dysnomia/containers
cat > $out/etc/dysnomia/containers/tomcat-webapplication${instanceSuffix} <<EOF
tomcatPort=${toString httpPort}
catalinaBaseDir=${catalinaBaseDir}
EOF

# Copy the Dysnomia module that manages an Apache Tomcat web application
mkdir -p $out/libexec/dysnomia
ln -s ${dysnomia}/libexec/dysnomia/tomcat-webapplication $out/libexec/dysnomia
'';
}

First, the Nix expression will build and configure Apache Tomcat (this is left out of the example to keep it short). After Apache Tomcat has been built and configured, the Nix expression generates the container configuration file and copies the tomcat-webapplication Dysnomia module from the Dysnomia toolset.

The disnix-env command-line instruction shown earlier deploys the container services to the target machines in the network, using a bare infrastructure model that provides no container services except the init system (which is systemd on NixOS). The --profile parameter specifies a Disnix profile, to tell the tool that we are deploying a different kind of system than the default.

If the command above succeeds, then we have all required container services at our disposal. The deployment architecture of the resulting system may look as follows:


In the above diagram, the light grey colored boxes correspond to machines in a network, the dark grey boxes to container environments, and white ovals to services.

As you may observe, we have deployed three services -- to the test1 machine we have deployed an Apache Tomcat service (that itself is managed by systemd), and to the test2 machine we have deployed both Apache Tomcat and the MySQL server (both their lifecycles are managed with systemd).

We can run the following command to generate a new infrastructure model that provides the properties of these newly deployed container services:

$ disnix-capture-infra infrastructure-bare.nix > infrastructure.nix

The captured infrastructure model provides all relevant configuration properties of the MySQL and Apache Tomcat containers that we have just deployed, because these services expose their configuration properties via the container configuration files shown earlier.
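
For example, the captured infrastructure model could roughly take the following shape (a sketch: the tomcat-webapplication properties correspond to the container configuration file shown earlier, whereas the mysql-database properties are hypothetical):

{
  test1 = {
    properties.hostname = "test1";
    containers = {
      tomcat-webapplication = {
        tomcatPort = "8080";
        catalinaBaseDir = "/var/tomcat/webapps";
      };
    };
  };

  test2 = {
    properties.hostname = "test2";
    containers = {
      tomcat-webapplication = {
        tomcatPort = "8080";
        catalinaBaseDir = "/var/tomcat/webapps";
      };
      mysql-database = {
        mysqlPort = "3306";
      };
    };
  };
}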

By using the retrieved infrastructure model and running the following command, we can deploy our web application and database components:

$ disnix-env -s services.nix \
  -i infrastructure.nix \
  -d distribution.nix \
  --profile services

In the above command-line invocation, the services model contains all application components, and the distribution model maps these application components to the corresponding target machines and their containers.
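
For instance, a fragment of such a distribution model could look as follows (a sketch using a few of the StaffTracker components that appear later in this post; the actual model contains more mappings):

{infrastructure}:

{
  GeolocationService = [ infrastructure.test1 ];
  RoomService = [ infrastructure.test2 ];
  staff = [ infrastructure.test2 ];
  zipcodes = [ infrastructure.test2 ];
}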

As with the previous disnix-env command invocation, we provide a --profile parameter to tell Disnix that we are deploying a different system. If we used the same profile parameter as in the previous example, Disnix would undeploy the container services and try to upgrade the system with only the application services, which would obviously fail.

If the above command succeeds, then we have successfully deployed both the container and application services that our example system requires, resulting in a fully functional and activated system with a deployment architecture that may have the following structure:


As you may observe by looking at the diagram above, we have deployed a system that consists of a number of MySQL databases, Java web services and Java web applications.

The diagram uses the same notational conventions used in the previous diagram. The arrows denote inter-dependency relationships, telling Disnix that one service depends on another, and that dependency should be deployed first.

Exposing services as containers


The Disnix service container deployment approach that I just described works, but it is not an integrated solution -- it has a limitation that is comparable to the infrastructure and services deployment separation that I have explained earlier. It requires you to run two deployments: one for the containers and one for the services.

In the blog post that I wrote a couple of years ago, I also explained that in order to fully automate the entire process with a single command, this might eventually lead to "a layered deployment approach" -- the idea was to combine several system deployment processes into one. For example, you might want to deploy a service manager in the first layer, the container services for application components in the second, and in the third the application components themselves.

I also argued that it is probably not worth spending a lot of effort in automating multiple deployment layers -- for nearly all systems that I deployed, there were only two "layers" that I needed to keep track of: the infrastructure layer providing container services, and a service layer providing the application services. NixOps sufficed as a solution to automate the infrastructure parts for most of my use cases, except for deployment to non-NixOS machines, and for deploying multiple instances of container services, which is a very uncommon use case.

However, I got inspired to revisit this problem after completing the work described in my previous blog post, in which I created a process manager-agnostic service management framework that works with a variety of process managers on a variety of operating systems.

Combining this framework with Disnix makes it possible to easily deploy container services (most of them are daemons) to non-NixOS machines, including non-Linux machines such as macOS and FreeBSD, from the same declarative specifications.

Moreover, this framework also provides facilities to easily deploy multiple instances of the same service to the same machine.

Revisiting this problem also made me think about the "layered approach" again, and after some thinking I dropped the idea. The problems with using layers are:

  • We need to develop another tool that integrates the deployment processes of all layers into one. In addition to the fact that we need to implement more automation, this introduces many additional technical challenges -- for example, if we want to deploy three layers and the deployment of the second fails, how are we going to do a rollback?
  • A layered approach is somewhat "imperative" -- each layer deploys services that include Dysnomia modules and Dysnomia container configuration files. The Disnix service on each target machine performs a lookup in the Nix profile that contains all packages of the containers layer to find the required Dysnomia modules and container configuration files.

    Essentially, Dysnomia modules and container configurations are stored in a global namespace. This means the order in which the deployment of the layers is executed is important and that each layer can imperatively modify the behaviour of each Dysnomia module.
  • Because we need to deploy the system on a layer-by-layer basis, we cannot, for example, deploy services that reside in different layers but have no dependencies on each other in parallel, making the deployment process slower than it should be.

After some thinking, I came up with a much simpler approach -- I have introduced a new concept to the Disnix services model that makes it possible to annotate a service with a specification of the container services that it provides. This information can be used by application services that need to be deployed to such a container service.

For example, we can annotate the Apache Tomcat service in the Disnix services model as follows:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/services-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };
in
rec {
  simpleAppservingTomcat = rec {
    name = "simpleAppservingTomcat";
    pkg = constructors.simpleAppservingTomcat {
      inherit httpPort;
      commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    };
    httpPort = 8080;
    catalinaBaseDir = "/var/tomcat/webapps";
    type = "systemd-unit";
    providesContainers = {
      tomcat-webapplication = {
        httpPort = 8080;
        catalinaBaseDir = "/var/tomcat/webapps";
      };
    };
  };

  GeolocationService = {
    name = "GeolocationService";
    pkg = customPkgs.GeolocationService;
    dependsOn = {};
    type = "tomcat-webapplication";
  };

  ...
}

In the above example, the simpleAppservingTomcat service refers to an Apache Tomcat server that serves Java web applications for one particular virtual host. The providesContainers property tells Disnix that the service is a container provider, providing a container named tomcat-webapplication with the following properties:

  • For HTTP traffic, Apache Tomcat should listen on TCP port 8080
  • The Java web application archives (WAR files) should be deployed to the Catalina Servlet container. By copying the WAR files to the /var/tomcat/webapps directory, they should be automatically hot-deployed.

The other service in the services model (GeolocationService) is a Java web application that should be deployed to an Apache Tomcat container service.

If in a Disnix distribution model, we map the Apache Tomcat service (simpleAppservingTomcat) and the Java web application (GeolocationService) to the same machine:

{infrastructure}:

{
  simpleAppservingTomcat = [ infrastructure.test1 ];
  GeolocationService = [ infrastructure.test1 ];
}

Disnix will automatically search for a suitable container service provider for each service.

In the above scenario, Disnix knows that simpleAppservingTomcat provides a tomcat-webapplication container. The GeolocationService uses the type tomcat-webapplication, indicating that it needs to be deployed to an Apache Tomcat servlet container.

Because these services have been deployed to the same machine, Disnix will make sure that Apache Tomcat gets activated before the GeolocationService, and uses the Dysnomia module that is bundled with simpleAppservingTomcat to handle the deployment of the Java web application.

Furthermore, the properties that simpleAppservingTomcat exposes in the providesContainers attribute set are automatically propagated as container parameters to the GeolocationService Nix expression, so that it knows where the WAR file should be copied to in order to automatically hot-deploy the service.

If Disnix does not detect a service that provides a required container deployed to the same machine, then it will fall back to its original behaviour -- it automatically propagates the properties of a container in the infrastructure model, and assumes that the container service has already been deployed by an infrastructure deployment solution.

Simplifications


The notation used for the simpleAppservingTomcat service (shown earlier) refers to an attribute set, which also makes it possible to specify multiple container instances. However, it is far more common that we only need one single container instance.

Moreover, there is some redundancy -- we need to specify certain properties in two places, because some properties belong to the service itself, but are also container properties that we want to propagate to the services that require them.

We can also use a shorter notation to expose only one single container:

simpleAppservingTomcat = rec {
  name = "simpleAppservingTomcat";
  pkg = constructors.simpleAppservingTomcat {
    inherit httpPort;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
  };
  httpPort = 8080;
  catalinaBaseDir = "/var/tomcat/webapps";
  type = "systemd-unit";
  providesContainer = "tomcat-webapplication";
};

In the above example, we have rewritten the service configuration of simpleAppservingTomcat to use the providesContainer attribute referring to a string. This shorter notation will automatically expose all non-reserved service properties as container properties.

For our example above, this means that it will automatically expose httpPort and catalinaBaseDir and ignore the remaining properties -- these remaining properties have a specific purpose for the Disnix deployment system.

Although the notation above simplifies things considerably, the above example still contains a bit of redundancy -- some of the container properties that we want to expose to application services also need to be propagated to the constructor function, requiring us to specify the same properties twice.

We can eliminate this redundancy by encapsulating the creation of the service properties attribute set in a constructor function. With a constructor function, we can simply write:

simpleAppservingTomcat = constructors.simpleAppservingTomcat {
  httpPort = 8080;
  commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
  type = "systemd-unit";
};
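
A minimal sketch of what such a constructor function could look like, under the assumption that a lower-level function (here called tomcatConstructor) builds the actual Apache Tomcat package; the names are hypothetical and the real constructor is more elaborate:

# Hypothetical constructor that specifies the shared properties only once
{ tomcatConstructor }:

{ httpPort, commonLibs ? [], type }:

rec {
  name = "simpleAppservingTomcat";
  pkg = tomcatConstructor {
    inherit httpPort commonLibs;
  };
  inherit httpPort type;
  catalinaBaseDir = "/var/tomcat/webapps";
  providesContainer = "tomcat-webapplication";
}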

Example: deploying container and application services as one system


By applying the techniques described in the previous section to the StaffTracker example (e.g. distributing a simpleAppservingTomcat and mysql to the same machines that host Java web applications and MySQL databases), we can deploy the StaffTracker system including all its required container services with a single command-line instruction:

$ disnix-env -s services-with-containers.nix \
  -i infrastructure-bare.nix \
  -d distribution-with-containers.nix

The corresponding deployment architecture visualization may look as follows:


As you may notice, the above diagram looks very similar to the previously shown deployment architecture diagram of the services layer.

What has been added are the container services -- the ovals with the double borders denote services that are also container providers. The labels describe both the name of the service and the containers that it provides (behind the arrow ->).

Furthermore, all the services that are hosted inside a particular container environment (e.g. tomcat-webapplication) have a local inter-dependency on the corresponding container provider service (e.g. simpleAppservingTomcat), causing Disnix to activate Apache Tomcat before the web applications that are hosted inside it.

Another thing you might notice is that we have not completely eliminated the dependency on an infrastructure deployment solution -- the MySQL DBMS and Apache Tomcat services are deployed with the systemd-unit type, requiring the presence of systemd on the target system. Systemd should be provided as part of the target Linux distribution, and cannot be managed by Disnix because it runs as PID 1.

Example: deploying multiple container service instances and application services


One of my motivating reasons to use Disnix as a deployment solution for container services is to be able to deploy multiple instances of them to the same machine. This can also be done in a combined container and application services deployment approach.

To allow, for example, two instances of Apache Tomcat to co-exist on one machine, we must configure them in such a way that their resources do not conflict:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/service-containers-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };
in
rec {
  simpleAppservingTomcat-primary = constructors.simpleAppservingTomcat {
    instanceSuffix = "-primary";
    httpPort = 8080;
    httpsPort = 8443;
    serverPort = 8005;
    ajpPort = 8009;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  simpleAppservingTomcat-secondary = constructors.simpleAppservingTomcat {
    instanceSuffix = "-secondary";
    httpPort = 8081;
    httpsPort = 8444;
    serverPort = 8006;
    ajpPort = 8010;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  ...
}

The above partial services model defines two Apache Tomcat instances that have been configured to listen to different TCP ports (for example, the primary Tomcat instance listens for HTTP traffic on port 8080, whereas the secondary instance listens on port 8081) and to serve web applications from different deployment directories. Because their properties do not conflict, they can co-exist on the same machine.

With the following distribution model, we can deploy multiple container providers to the same machine and distribute application services to them:

{infrastructure}:

{
  # Container providers

  mysql-primary = [ infrastructure.test1 ];
  mysql-secondary = [ infrastructure.test1 ];
  simpleAppservingTomcat-primary = [ infrastructure.test2 ];
  simpleAppservingTomcat-secondary = [ infrastructure.test2 ];

  # Application components

  GeolocationService = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-primary";
      }
    ];
  };
  RoomService = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-secondary";
      }
    ];
  };
  StaffTracker = {
    targets = [
      { target = infrastructure.test2;
        container = "tomcat-webapplication-secondary";
      }
    ];
  };
  staff = {
    targets = [
      { target = infrastructure.test1;
        container = "mysql-database-secondary";
      }
    ];
  };
  zipcodes = {
    targets = [
      { target = infrastructure.test1;
        container = "mysql-database-primary";
      }
    ];
  };
  ...
}

In the first four lines of the distribution model shown above, we distribute the container providers. As you may notice, we distribute two MySQL instances that should co-exist on machine test1 and two Apache Tomcat instances that should co-exist on machine test2.

In the remainder of the distribution model, we map Java web applications and MySQL databases to these container providers. As explained in the previous blog post about deploying multiple container service instances, if no container is specified in the distribution model, Disnix will automatically map the service to the container that has the same name as the service's type.

In the above example, we have two instances of each container service with a different name. As a result, we need to use the more verbose notation for distribution mappings to instruct Disnix to which container provider we want to deploy the service.

Deploying the system with the following command-line instruction:

$ disnix-env -s services-with-multicontainers.nix \
  -i infrastructure-bare.nix \
  -d distribution-with-multicontainers.nix

results in a running system that may have the following deployment architecture:


As you may notice, we have MySQL databases and Java web applications distributed over multiple container providers residing on the same machine. All services belong to the same system, deployed by a single Disnix command.

A more extreme example: multiple process managers


By exposing services as container providers in Disnix, my original requirements were met. Because the facilities are very flexible, I also discovered that there is much more I could do.

For example, on more primitive systems that do not have systemd, I could also extend the services and distribution models in such a way that I can deploy supervisord as a process manager first (as a sysvinit-script that does not require any process manager service), then use supervisord to manage MySQL and Apache Tomcat, and then use the Dysnomia plugin system to deploy the databases and Java web applications to these container services managed by supervisord:


As you may notice, the deployment architecture above looks similar to the first combined deployment example, with supervisord added as an extra container provider service.
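
To make this concrete, a hedged sketch of the relevant services model fragment is shown below (the constructor invocations and the supervisord-program type are assumptions for illustration; the actual expressions may differ):

{
  supervisord = {
    name = "supervisord";
    pkg = constructors.supervisord {};       # hypothetical constructor
    type = "sysvinit-script";                # supervisord itself needs no process manager
    providesContainer = "supervisord-program";
  };

  simpleAppservingTomcat = constructors.simpleAppservingTomcat {
    httpPort = 8080;
    type = "supervisord-program";            # managed by the supervisord service above
  };

  ...
}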

More efficient reuse: expose any kind of service as container provider


In addition to managed processes (which the MySQL DBMS and Apache Tomcat services are), any kind of Disnix service can act as a container provider.

An example of such a non-process managed container provider could be Apache Axis2. In the StaffTracker example, all data access is provided by web services. These web services are implemented as Java web applications (WAR files) embedding an Apache Axis2 container that embeds an Axis2 Application Archive (AAR file) providing the web service implementation.

Every web application that is a web service includes its own implementation of Apache Axis2.

It is also possible to deploy a single Axis2 web application to Apache Tomcat, and treat each Axis2 Application Archive as a separate deployment unit, using the axis2-webservice identifier as a container provider for any service of the type axis2-webservice:

{ pkgs, system, distribution, invDistribution
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, cacheDir ? "${stateDir}/cache"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "systemd"
}:

let
  constructors = import ../../../nix-processmgmt/examples/service-containers-agnostic/constructors.nix {
    inherit pkgs stateDir runtimeDir logDir cacheDir tmpDir forceDisableUserChange processManager;
  };

  customPkgs = import ../top-level/all-packages.nix {
    inherit system pkgs stateDir;
  };
in
rec {
  ### Container providers

  simpleAppservingTomcat = constructors.simpleAppservingTomcat {
    httpPort = 8080;
    commonLibs = [ "${pkgs.mysql_jdbc}/share/java/mysql-connector-java.jar" ];
    type = "systemd-unit";
  };

  axis2 = customPkgs.axis2 {};

  ### Web services

  HelloService = {
    name = "HelloService";
    pkg = customPkgs.HelloService;
    dependsOn = {};
    type = "axis2-webservice";
  };

  HelloWorldService = {
    name = "HelloWorldService";
    pkg = customPkgs.HelloWorldService;
    dependsOn = {
      inherit HelloService;
    };
    type = "axis2-webservice";
  };

  ...
}

In the above partial services model, we have defined two container providers:

  • simpleAppservingTomcat that provides a Servlet container in which Java web applications (WAR files) can be hosted.
  • The axis2 service is a Java web application that acts as a container provider for Axis2 web services.

The remaining services are Axis2 web services that can be embedded inside the shared Axis2 container.

If we deploy the above example system, e.g.:

$ disnix-env -s services-optimised.nix \
  -i infrastructure-bare.nix \
  -d distribution-optimised.nix

then this may result in the following deployment architecture:


As may be observed when looking at the above architecture diagram, the web services deployed to the test2 machine use a shared Axis2 container that is embedded as a Java web application inside Apache Tomcat.

The above system has a far better degree of reuse, because it does not use redundant copies of Apache Axis2 for each web service.

Although it is possible to have a deployment architecture with a shared Axis2 container, this shared approach is not always desirable. For example, database connections managed by Apache Tomcat are shared between all web services embedded in the Axis2 container, which may not be acceptable from a security point of view.

Moreover, an unstable web service embedded in an Axis2 container might also tear the whole container down, causing the other web services to crash as well. Still, the deployment system does not make it difficult to use a shared approach when it is desired.

Conclusion


With this new feature addition to Disnix, which makes it possible to expose services as container providers, we can deploy both container services and application services as one integrated system.

Furthermore, it also makes it possible to:

  • Deploy multiple instances of container services and deploy services to them.
  • For process-based container services, we can combine this with the process manager-agnostic framework described in the previous blog post, so that we can use them with any process manager on any operating system that the framework supports.

The fact that Disnix can now also deploy containers does not mean that it no longer relies on external infrastructure deployment solutions. For example, you still need target machines at your disposal that have Nix and Disnix installed and are remotely connectable, e.g. through SSH. For this, you still require an external infrastructure deployment solution, such as NixOps.

Furthermore, not all container services can be managed by Disnix. For example, systemd, which runs as a system's PID 1, cannot be installed by Disnix. Instead, it must already be provided by the target system's Linux distribution (in NixOS' case, it is Nix that deploys it, but it is not managed by Disnix).

And there may also be other reasons why you may still want to use separated deployment processes for container and service deployment. For example, you may want to deploy to container services that cannot be managed by Nix/Disnix, or you may work in an organization in which two different teams take care of the infrastructure and the services.

Availability


The new features described in this blog post are part of the current development versions of Dysnomia and Disnix that can be obtained from my GitHub page. These features will become generally available in the next release.

Moreover, I have extended all my public Disnix examples with container deployment support (including the Java-based StaffTracker and composition examples shown in this blog post). These changes currently reside in the servicesascontainers Git branches.

The nix-processmgmt repository contains shared constructor functions for all kinds of system services, e.g. MySQL, Apache HTTP server, PostgreSQL and Apache Tomcat. These functions can be reused amongst all kinds of Disnix projects.

by Sander van der Burg (noreply@blogger.com) at April 30, 2020 08:39 PM

April 23, 2020

Craige McWhirter

Building Daedalus Flight on NixOS

NixOS Daedalus Gears by Craige McWhirter

Daedalus Flight was recently released and this is how you can build and run this version of Daedalus on NixOS.

If you want to speed the build process up, you can add the IOHK Nix cache to your own NixOS configuration:

iohk.nix:

nix.binaryCaches = [
  "https://cache.nixos.org"
  "https://hydra.iohk.io"
];
nix.binaryCachePublicKeys = [
  "hydra.iohk.io:f/Ea+s+dFdN+3Y/G+FDgSq+a5NEWhJGzdjvKNGv0/EQ="
];
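
If you save this snippet as a module (wrapped in { ... }) next to your configuration.nix, one way to wire it in is a simple import; a minimal sketch:

# configuration.nix
{ config, pkgs, ... }:

{
  imports = [
    ./iohk.nix   # the cache settings shown above, wrapped in a module
  ];

  # ... the rest of your configuration ...
}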

If you haven't already, you can clone the Daedalus repo and specifically the 1.0.0 tagged commit:

$ git clone --branch 1.0.0 https://github.com/input-output-hk/daedalus.git

Once you've cloned the repo and checked you're on the 1.0.0 tagged commit, you can build Daedalus flight with the following command:

$ nix build -f . daedalus --argstr cluster mainnet_flight

Once the build completes, you're ready to launch Daedalus Flight:

$ ./result/bin/daedalus

To verify that you have in fact built Daedalus Flight, first head to the Daedalus menu, then About Daedalus. You should see a title such as "DAEDALUS 1.0.0". The second check is to press [Ctrl]+d to access Daedalus Diagnostics; your Daedalus state directory should have mainnet_flight at the end of the path.

If you've got these, give yourself a pat on the back and grab yourself a refreshing bevvy while you wait for blocks to sync.

Daedalus FC1 screenshot

by Craige McWhirter at April 23, 2020 11:28 PM

April 18, 2020

nixbuild.net

Binary Cache Support

Up until now, nixbuild.net has not supported directly fetching build dependencies from binary caches like cache.nixos.org or Cachix. All build dependencies have instead been uploaded from the user’s local machine to nixbuild.net the first time they’ve been needed.

Today, this bottleneck has been removed, since nixbuild.net now can fetch build dependencies directly from binary caches, without taxing users’ upload bandwidth.

By default, the official Nix binary cache (cache.nixos.org) is added to all nixbuild.net accounts, but a nixbuild.net user can freely decide which caches should be queried for build dependencies (including Cachix caches).

An additional benefit of the new support for binary caches is that users that trust the same binary caches automatically share build dependencies from those caches. This means that if one user’s build has triggered a download from, for example, cache.nixos.org, the next user that comes along and needs the same build dependency doesn’t have to spend time on downloading that dependency.

For more information on how to use binary caches with nixbuild.net, see the documentation.

by nixbuild.net (support@nixbuild.net) at April 18, 2020 12:00 AM

April 13, 2020

Graham Christensen

Erase your darlings

I erase my systems at every boot.

Over time, a system collects state on its root partition. This state lives in assorted directories like /etc and /var, and represents every under-documented or out-of-order step in bringing up the services.

“Right, run myapp-init.”

These small, inconsequential “oh, oops” steps are the pieces that get lost and don’t appear in your runbooks.

“Just download ca-certificates to … to fix …”

Each of these quick fixes leaves you doomed to repeat history in three years when you’re finally doing that dreaded RHEL 7 to RHEL 8 upgrade.

“Oh, touch /etc/ipsec.secrets or the l2tp tunnel won’t work.”

Immutable infrastructure gets us so close

Immutable infrastructure is a wonderfully effective method of eliminating so many of these forgotten steps. Leaning in to the pain by deleting and replacing your servers on a weekly or monthly basis means you are constantly testing and exercising your automation and runbooks.

The nugget here is the regular and indiscriminate removal of system state. Destroying the whole server doesn’t leave you much room to forget the little tweaks you made along the way.

These techniques work great when you meet two requirements:

  • you can provision and destroy servers with an API call
  • the servers aren’t inherently stateful

Long running servers

There are lots of cases in which immutable infrastructure doesn’t work, and the dirty secret is those servers need good tools the most.

Long-running servers cause long outages. Their runbooks are outdated and incomplete. They accrete tweaks and turn into an ossified, brittle snowflake — except its arms are load-bearing.

Let’s bring the ideas of immutable infrastructure to these systems too. Whether this system is embedded in a stadium’s jumbotron, in a datacenter, or under your desk, we can keep the state under control.

FHS isn’t enough

The hard part about applying immutable techniques to long running servers is knowing exactly where your application state ends and the operating system, software, and configuration begin.

This is hard because legacy operating systems and the Filesystem Hierarchy Standard poorly separate these areas of concern. For example, /var/lib is for state information, but how much of this do you actually care about tracking? What did you configure in /etc on purpose?

The answer is probably not a lot.

You may not care, but all of this accumulation of junk is a tarpit. Everything becomes harder: replicating production, testing changes, undoing mistakes.

New computer smell

Getting a new computer is this moment of cleanliness. The keycaps don’t have oils on them, the screen is perfect, and the hard drive is fresh and unspoiled — for about an hour or so.

Let’s get back to that.

How is this possible?

NixOS can boot with only two directories: /boot, and /nix.

/nix contains read-only system configurations, which are specified by your configuration.nix and are built and tracked as system generations. These never change. Once the files are created in /nix, the only way to change the config’s contents is to build a new system configuration with the contents you want.

Any configuration or files created on the drive outside of /nix are state and cruft. We can lose everything outside of /nix and /boot and have a healthy system. My technique is to explicitly opt in and choose which state is important, and only keep that.

How this is possible comes down to the boot sequence.

For NixOS, the bootloader follows the same basic steps as a standard Linux distribution: the kernel starts with an initial ramdisk, and the initial ramdisk mounts the system disks.

And here is where the similarities end.

NixOS’s early startup

NixOS configures the bootloader to pass some extra information: a specific system configuration. This is the secret to NixOS’s bootloader rollbacks, and also the key to erasing our disk on each boot. The parameter is named systemConfig.

On every startup the very early boot stage knows what the system’s configuration should be: the entire system configuration is stored in the read-only /nix/store, and the directory passed through systemConfig has a reference to the config. Early boot then manipulates /etc and /run to match the chosen setup. Usually this involves swapping out a few symlinks.

If /etc simply doesn’t exist, however, early boot creates /etc and moves on like it were any other boot. It also creates /var, /dev, /home, and any other core directories that must be present.

Simply speaking, an empty / is not surprising to NixOS. In fact, the NixOS netboot, EC2, and installation media all start out this way.

Opting out

Before we can opt in to saving data, we must opt out of saving data by default. I do this by setting up my filesystem in a way that lets me easily and safely erase the unwanted data, while preserving the data I do want to keep.

My preferred method for this is using a ZFS dataset and rolling it back to a blank snapshot before it is mounted. A partition of any other filesystem would work just as well too, running mkfs at boot, or something similar. If you have a lot of RAM, you could skip the erase step and make / a tmpfs.
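
As a sketch of that last tmpfs variant (not the setup used in the rest of this post), the root filesystem could be declared like this, making a separate erase step unnecessary:

{
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "size=2G" "mode=755" ];  # size is an arbitrary example
  };
}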

Opting out with ZFS

When installing NixOS, I partition my disk with two partitions, one for the boot partition, and another for a ZFS pool. Then I create and mount a few datasets.

My root dataset:

# zfs create -p -o mountpoint=legacy rpool/local/root

Before I even mount it, I create a snapshot while it is totally blank:

# zfs snapshot rpool/local/root@blank

And then mount it:

# mount -t zfs rpool/local/root /mnt

Then I mount the partition I created for the /boot:

# mkdir /mnt/boot
# mount /dev/the-boot-partition /mnt/boot

Create and mount a dataset for /nix:

# zfs create -p -o mountpoint=legacy rpool/local/nix
# mkdir /mnt/nix
# mount -t zfs rpool/local/nix /mnt/nix

And a dataset for /home:

# zfs create -p -o mountpoint=legacy rpool/safe/home
# mkdir /mnt/home
# mount -t zfs rpool/safe/home /mnt/home

And finally, a dataset explicitly for state I want to persist between boots:

# zfs create -p -o mountpoint=legacy rpool/safe/persist
# mkdir /mnt/persist
# mount -t zfs rpool/safe/persist /mnt/persist

Note: in my systems, datasets under rpool/local are never backed up, and datasets under rpool/safe are.

And now safely erasing the root dataset on each boot is very easy: after devices are made available, roll back to the blank snapshot:

{
  boot.initrd.postDeviceCommands = lib.mkAfter ''
    zfs rollback -r rpool/local/root@blank
  '';
}

I then finish the installation as normal. If all goes well, your next boot will start with an empty root partition but otherwise be configured exactly as you specified.
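
For reference, a sketch of the corresponding mount declarations in hardware-configuration.nix for this layout (the generated file may differ slightly, and the /boot entry assumes a vfat EFI system partition):

{
  fileSystems."/" = {
    device = "rpool/local/root";
    fsType = "zfs";
  };

  fileSystems."/nix" = {
    device = "rpool/local/nix";
    fsType = "zfs";
  };

  fileSystems."/home" = {
    device = "rpool/safe/home";
    fsType = "zfs";
  };

  fileSystems."/persist" = {
    device = "rpool/safe/persist";
    fsType = "zfs";
  };

  fileSystems."/boot" = {
    device = "/dev/the-boot-partition";
    fsType = "vfat";
  };
}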

Opting in

Now that I’m keeping no state, it is time to specify what I do want to keep. My choices here are different based on the role of the system: a laptop has different state than a server.

Here are some different pieces of state and how I preserve them. These examples largely use reconfiguration or symlinks, but using ZFS datasets and mount points would work too.

Wireguard private keys

Create a directory under /persist for the key:

# mkdir -p /persist/etc/wireguard/

And use Nix’s wireguard module to generate the key there:

{
  networking.wireguard.interfaces.wg0 = {
    generatePrivateKeyFile = true;
    privateKeyFile = "/persist/etc/wireguard/wg0";
  };
}

NetworkManager connections

Create a directory under /persist, mirroring the /etc structure:

# mkdir -p /persist/etc/NetworkManager/system-connections

And use Nix’s etc module to set up the symlink:

{
  etc."NetworkManager/system-connections" = {
    source = "/persist/etc/NetworkManager/system-connections/";
  };
}

Bluetooth devices

Create a directory under /persist, mirroring the /var structure:

# mkdir -p /persist/var/lib/bluetooth

And then use systemd’s tmpfiles.d rules to create a symlink from /var/lib/bluetooth to my persisted directory:

{
  systemd.tmpfiles.rules = [
    "L /var/lib/bluetooth - - - - /persist/var/lib/bluetooth"
  ];
}

SSH host keys

Create a directory under /persist, mirroring the /etc structure:

# mkdir -p /persist/etc/ssh

And use Nix’s openssh module to create and use the keys in that directory:

{
  services.openssh = {
    enable = true;
    hostKeys = [
      {
        path = "/persist/ssh/ssh_host_ed25519_key";
        type = "ed25519";
      }
      {
        path = "/persist/ssh/ssh_host_rsa_key";
        type = "rsa";
        bits = 4096;
      }
    ];
  };
}

ACME certificates

Create a directory under /persist, mirroring the /var structure:

# mkdir -p /persist/var/lib/acme

And then use systemd’s tmpfiles.d rules to create a symlink from /var/lib/acme to my persisted directory:

{
  systemd.tmpfiles.rules = [
    "L /var/lib/acme - - - - /persist/var/lib/acme"
  ];
}
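
As mentioned above, a dedicated ZFS dataset with its own mount point works just as well as a symlink; a sketch for a hypothetical service's state directory:

{
  fileSystems."/var/lib/postgresql" = {
    device = "rpool/safe/postgres";   # hypothetical dataset under the backed-up hierarchy
    fsType = "zfs";
  };
}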

Answering the question “what am I about to lose?”

I found this process a bit scary for the first few weeks: was I losing important data each reboot? No, I wasn’t.

If you’re worried and want to know what state you’ll lose on the next boot, you can list the files on your root filesystem and see if you’re missing something important:

# tree -x /
├── bin
│   └── sh -> /nix/store/97zzcs494vn5k2yw-dash-0.5.10.2/bin/dash
├── boot
├── dev
├── etc
│   ├── asound.conf -> /etc/static/asound.conf
... snip ...

ZFS can give you a similar answer:

# zfs diff rpool/local/root@blank
M	/
+	/nix
+	/etc
+	/root
+	/var/lib/is-nix-channel-up-to-date
+	/etc/pki/fwupd
+	/etc/pki/fwupd-metadata
... snip ...

Your stateless future

You may bump into new state you meant to be preserving. When I’m adding new services, I think about the state they are writing and whether I care about it or not. If I care, I find a way to redirect their state to /persist.

Take care to reboot these machines on a somewhat regular basis. It will keep things agile, proving your system state is tracked correctly.

This technique has given me the “new computer smell” on every boot without the datacenter full of hardware, and even on systems that do carry important state. I have deployed this strategy to systems in the large and small: build farm servers, database servers, my NAS and home server, my raspberry pi garage door opener, and laptops.

NixOS enables powerful new deployment models in so many ways, allowing for systems of all shapes and sizes to be managed properly and consistently. I think this model of ephemeral roots is yet another example of this flexibility and power. I would like to see this partitioning scheme become a reference architecture and take us out of this eternal tarpit of legacy.

April 13, 2020 12:00 AM

April 11, 2020

Graham Christensen

ZFS Datasets for NixOS

The outdated and historical nature of the Filesystem Hierarchy Standard means traditional Linux distributions have to go to great lengths to separate “user data” from “system data.”

NixOS’s filesystem architecture does cleanly separate user data from system data, and has a much easier job to do.

Traditional Linuxes

Because FHS mixes these two concerns across the entire hierarchy, splitting these concerns requires identifying every point across dozens of directories where the data is the system’s or the user’s. When adding ZFS to the mix, the installers typically have to create over a dozen datasets to accomplish this.

For example, Ubuntu’s upcoming ZFS support creates 16 datasets:

rpool/
├── ROOT
│   └── ubuntu_lwmk7c
│       ├── log
│       ├── mail
│       ├── snap
│       ├── spool
│       ├── srv
│       ├── usr
│       │   └── local
│       ├── var
│       │   ├── games
│       │   └── lib
│       │       ├── AccountServices
│       │       ├── apt
│       │       ├── dpkg
│       │       └── NetworkManager
│       └── www
└── USERDATA

Going through the great pains of separating this data comes with significant advantages: a recursive snapshot at any point in the tree will create an atomic, point-in-time snapshot of every dataset below.

This means in order to create a consistent snapshot of the system data, an administrator would only need to take a recursive snapshot at ROOT. The same is true for user data: take a recursive snapshot of USERDATA and all user data is saved.

NixOS

Because Nix stores all of its build products in /nix/store, NixOS doesn’t mingle these two concerns. NixOS’s runtime system, installed packages, and rollback targets are all stored in /nix.

User data is not.

This removes the entire complicated tree of datasets to facilitate FHS, and leaves us with only a few needed datasets.

Datasets

Design for the atomic, recursive snapshots when laying out the datasets.

In particular, I don’t back up the /nix directory. This entire directory can always be rebuilt later from the system’s configuration.nix, and isn’t worth the space.

One way to model this might be splitting up the data into three top-level datasets:

tank/
├── local
│   └── nix
├── system
│   └── root
└── user
    └── home

In tank/local, I would store datasets that should almost never be snapshotted or backed up. tank/system would store data that I would want periodic snapshots for. Most importantly, tank/user would contain data I want regular snapshots and backups for, with a long retention policy.

From here, you could add a ZFS dataset per user:

tank/
├── local
│   └── nix
├── system
│   └── root
└── user
    └── home
        ├── grahamc
        └── gustav

Or a separate dataset for /var:

tank/
├── local
│   └── nix
├── system
│   ├── var
│   └── root
└── user

Importantly, this gives you three buckets for independent and regular snapshots.

The important part is having /nix under its own top-level dataset. This makes it a “cousin” to the data you do want backup coverage on, making it easier to take deep, recursive snapshots atomically.

Properties

  • Enable compression with compression=on. Specifying on instead of lz4 or another specific algorithm will always pick the best available compression algorithm.
  • The dataset containing journald’s logs (where /var lives) should have xattr=sa and acltype=posixacl set to allow regular users to read their journal.
  • Nix doesn’t use atime, so atime=off on the /nix dataset is fine.
  • NixOS requires (as of 2020-04-11) mountpoint=legacy for all datasets. NixOS does not yet have tooling to require implicitly created ZFS mounts to settle before booting, and mountpoint=legacy plus explicit mount points in hardware-configuration.nix (sketched after this list) will ensure all your datasets are mounted at the right time.
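
A minimal sketch of such explicit mount point declarations for the three-bucket layout above:

{
  fileSystems."/" = {
    device = "tank/system/root";
    fsType = "zfs";
  };

  fileSystems."/nix" = {
    device = "tank/local/nix";
    fsType = "zfs";
  };

  fileSystems."/home" = {
    device = "tank/user/home";
    fsType = "zfs";
  };
}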

I don’t know how to pick ashift, and usually just allow ZFS to guess on my behalf.

Partitioning

I only create two partitions:

  1. /boot formatted vfat for EFI, or ext4 for BIOS
  2. The ZFS dataset partition.

There are spooky articles saying only give ZFS entire disks. The truth is, you shouldn’t split a disk into two active partitions. Splitting the disk this way is just fine, since /boot is rarely read or written.

Note: If you do partition the disk, make sure you set the disk’s scheduler to none. ZFS takes this step automatically if it does control the entire disk.

On NixOS, you can set your scheduler to none via:

{ boot.kernelParams = [ "elevator=none" ]; }

Clean isolation

NixOS’s clean separation of concerns reduces the amount of complexity we need to track when considering and planning our datasets. This gives us flexibility later, and enables some superpowers like erasing my computer on every boot, which I’ll write about on Monday.

April 11, 2020 12:00 AM

March 27, 2020

nixbuild.net

New nixbuild.net Resources

On the support side of the nixbuild.net service, two new resources have been published:

  • docs.nixbuild.net, collecting all available documentation for nixbuild.net users.

  • The nixbuild.net feedback repository on GitHub, providing a way to report issues or ask questions related to the service.

These resources are mainly useful for nixbuild.net beta users, but they are open to anyone. And anyone is of course welcome to request a free beta account for evaluating nixbuild.net, by just sending me an email.

by nixbuild.net (support@nixbuild.net) at March 27, 2020 12:00 AM

March 23, 2020

Matthew Bauer

Announcing Nixiosk

Today I’m announcing a project I’ve been working on for the last few weeks. I’m calling it Nixiosk, which is kind of a smashing together of the words NixOS and Kiosk. The idea is to have an easy way to make locked down, declarative systems.

My main application of this is my two Raspberry Pi systems that I own. Quite a few people have installed NixOS on these systems, but usually they are starting from some prebuilt image. A major goal of this project is to make it easy to build these images yourself. For this to work, I’ve had to make lots of changes to the NixOS cross-compilation ecosystem, but the results seem to be very positive. I also want the system to be locked down so that no user can log in directly on the machine. Instead, all administration is done on a remote machine, and deployed through SSH and Nix remote builders.

Right now, I have RetroArch (a frontend for a bunch of emulators) on my Raspberry Pi 4, and Epiphany (a web browser) on my Raspberry Pi 0. Both systems seem to be working pretty well.

GitHub: https://github.com/matthewbauer/nixiosk

1 Deploying

1.1 Install Nix

If you haven’t already, you need to install Nix. This can be done through the installer:

$ bash <(curl -L https://nixos.org/nix/install)

1.2 Cache

To speed things up, you should setup a binary cache for nixiosk. This can be done easily through Cachix. First, install Cachix:

$ nix-env -iA cachix -f https://cachix.org/api/v1/install

Then, use the nixiosk cache:

$ cachix use nixiosk

1.3 Configuration

To make things simple, it just reads from an ad-hoc JSON file that describes the hardware plus some other customizations. It looks like this:

{
    "hostName": "nixiosk",
    "hardware": "raspberryPi4",
    "authorizedKeys": [],
    "program": {
        "package": "epiphany",
        "executable": "/bin/epiphany",
        "args": ["https://en.wikipedia.org/"]
    },
    "networks": {
        "my-router": "0000000000000000000000000000000000000000000000000000000000000000",
    },
    "locale": {
        "timeZone": "America/New_York",
        "regDom": "US",
        "lang": "en_US.UTF-8"
    },
    "localSystem": {
        "system": "x86_64-linux",
        "sshUser": "me",
        "hostName": "my-laptop-host",
    }
}

Here’s a basic idea of what each of these fields do:

  • hostName: Name of the host to use. If mDNS is configured on your network, this can be used to identify the IP address of the device via “<hostName>.local”.
  • hardware: A string describing what hardware we are using. Valid values currently are “raspberryPi0”, “raspberryPi1”, “raspberryPi2”, “raspberryPi3”, “raspberryPi4”.
  • authorizedKeys: A list of SSH public keys that are authorized to make changes to your device. Note this is required because no passwords will be set for this system.
  • program: What to do in the kiosk. This should be a Nixpkgs attribute (package), an executable in that package, and a list of args.
  • networks: This is a name/value pairing of SSIDs to PSK passphrases. This can be found with the wpa_passphrase(8) command from wpa_supplicant.
  • locale: This provides some information of what localizations to use. You can set regulation domain, language, time zone via “regDom”, “lang”, and “timeZone”. If unspecified, defaults to US / English / New York.
  • localSystem: Information on system to use for remote builder. Optional.

1.4 Initial deployment

The deployment is pretty easy provided you have Nix installed. Here are some steps:

$ git clone https://github.com/matthewbauer/nixiosk.git
$ cd nixiosk/
$ cp nixiosk.json.sample nixiosk.json

Now you need to make some changes to nixiosk.json to reflect what you want your system to do. The important ones are ‘authorizedKeys’ and ‘networks’, so that your system can start up and you can connect to it.

If you have an SSH key setup, you can get its value with:

$ cat $HOME/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook

which will give you a line for “authorizedKeys” like:

"authorizedKeys": ["ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC050iPG8ckY/dj2O3ol20G2lTdr7ERFz4LD3R4yqoT5W0THjNFdCqavvduCIAtF1Xx/OmTISblnGKf10rYLNzDdyMMFy7tUSiC7/T37EW0s+EFGhS9yOcjCVvHYwgnGZCF4ec33toE8Htq2UKBVgtE0PMwPAyCGYhFxFLYN8J8/xnMNGqNE6iTGbK5qb4yg3rwyrKMXLNGVNsPVcMfdyk3xqUilDp4U7HHQpqX0wKrUvrBZ87LnO9z3X/QIRVQhS5GqnIjRYe4L9yxZtTjW5HdwIq1jcvZc/1Uu7bkMh3gkCwbrpmudSGpdUlyEreaHOJf3XH4psr6IMGVJvxnGiV9 mbauer@dellbook"],

and you can get a PSK value for your WiFi network with:

$ nix run nixpkgs.wpa_supplicant -c wpa_passphrase my-network
network={
        ssid="my-network"
        #psk="abcdefgh"
        psk=17e76a6490ac112dbeba996caa7cd1387c6ebf6ce721ef704f92b681bb2e9000
}

so your .json file looks like:

"networks": {
  "my-network": "17e76a6490ac112dbeba996caa7cd1387c6ebf6ce721ef704f92b681bb2e9000",
},

Now, after inserting your Raspberry Pi SD card into the primary slot, you can deploy to it with:

$ ./deploy.sh /dev/mmcblk0

You can now eject your SD card and insert it into your Raspberry Pi. It will boot immediately to an Epiphany browser, loading en.wikipedia.org.

Troubleshooting steps can be found in the README.

1.5 Redeployments

You can pretty easily make changes to a running system, provided you have SSH access. This is as easy as cloning the running config:

$ git clone ssh://root@nixiosk.local/etc/nixos/configuration.git nixiosk-configuration
$ cd nixiosk-configuration

Then, make some changes in your repo. After you’re done, you can just run ‘git push’ to redeploy.

$ git add .
$ git commit
$ git push

You’ll see the NixOS switch-to-configuration log in your command output. If all is successful, the system should immediately reflect your changes. If not, the output of Git should explain what went wrong.

Note that some Raspberry Pi models, like the 0 and the 1, do not have enough resources to redeploy the whole system on-device. You will probably need to set up remote builders. This is described in the README.

2 Technology

Here are some of the pieces that make the Kiosk system possible:

  • Cage / Wayland: Cage is a Wayland compositor that allows only one application to display at a time. This makes the system a true Kiosk.
  • NixOS: A Linux distribution built on top of functional package management.
  • Basalt: A tool to manage NixOS directly from Git. This allows doing push-to-deploy directly to NixOS.
  • Plymouth: Nice graphical boot animations. Right now, it uses the NixOS logo but in the future this should be configurable so that you can include your own branding.
  • OpenSSH: Since no direct login is available, SSH is required for remote administration.
  • Avahi: Configures mDNS registration for the system, allowing you to remember host names instead of IP addresses.

I would also like to include some more tools to make administration easier:

  • ddclient / miniupnp: Allow registering external IP address with a DNS provider. This would enable administration outside of the device’s immediate network.

3 Project

You can try it out right now if you have a Raspberry Pi system. Other hardware is probably not too hard to support, but may require tweaking. The project page is available at https://github.com/matthewbauer/nixiosk and issues and pull requests are welcomed.

March 23, 2020 12:00 AM

March 18, 2020

Cachix

Proposal for improving Nix error messages

I’m lucky to be in touch with a lot of people that use Nix day to day. One of the most frequent annoyances that pops up with those starting with Nix is confusing error messages. Since the Nix community has previously successfully stepped up and funded the removal of Perl to reduce barriers for source code contributions, I think we ought to do the same for removing barriers when using Nix.

by Domen Kožar (support@cachix.org) at March 18, 2020 08:00 AM

February 28, 2020

Flying Circus

Our new NixOS 19.03 Platform Is Ready for Production

We have developed our third-generation platform which is now based on NixOS 19.03. All provided components have been ported to the new platform and VMs are already running in production.

Most of our development work is done for the new platform and new features will be available only for it. We pull in security updates from upstream regularly and will follow new NixOS releases more quickly in the future. The old NixOS 15.09 platform still receives critical security and bug fixes.

Effective March 6, VMs created via customer self-service will use the 19.03 platform.

You can find the documentation for the new platform here:

https://flyingcircus.io/doc/guide/platform_nixos_2/index.html

We recommend user profiles (done with buildEnv) in case your application needs specific packages in its environment:

https://flyingcircus.io/doc/guide/platform_nixos_2/user_profile.html
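
As a rough illustration (this is a generic sketch, not taken from the platform documentation), such a user profile could be described in a Nix file like the following, where the package names are just hypothetical placeholders for whatever your application needs:

# user-profile.nix -- hypothetical example; replace the paths with your application's packages
with import <nixpkgs> {};

buildEnv {
  name = "my-application-environment";
  paths = [
    imagemagick  # assumed dependency
    graphviz     # assumed dependency
  ];
}

Installing it with something like nix-env -f user-profile.nix -i my-application-environment should then put the listed packages on the user's PATH.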

Upgrading 15.09 Machines

Upgrading existing VMs online is supported and we have already done that for a number of VMs.
Sometimes, however, it can be better to create new NixOS VMs in parallel and set up your applications there.

Most managed components will just work after the upgrade. We are working on instructions for specific things that should be done before or after the upgrade.

If you’re a customer with a support contract in the “Guided” or “Managed” service classes
then we’ll approach you directly and discuss when and how to upgrade VMs in the coming months.

If you’re a customer in the “Hosted” service class then we recommend contacting our support team to discuss the upgrade.

If you have questions …

As always: if you have any questions or comments then let us know and send us an email to support@flyingcircus.io.

by ts_fcio at February 28, 2020 02:16 PM

February 18, 2020

nixbuild.net

Introducing nixbuild.net

Exactly one month ago, I announced the nixbuild.net service. Since then, there has been a lot of work on the functionality, performance and stability of the service. As of today, nixbuild.net is exiting alpha and entering its private beta phase. If you want to try it out, just send me an email.

Today, I’m also launching the nixbuild.net blog, which is intended as an outlet for anything related to the nixbuild.net service. Announcements, demos, technical articles and various tips and tricks. We’ll start out with a proper introduction of nixbuild.net; why it was built, what it can help you with and what the long-term goals are.

Why nixbuild.net?

Nix has great built-in support for distributing builds to remote machines. You just need to set up a standard Nix environment on your build machines and make sure they are accessible via SSH. Just like that, you can offload your heavy builds to a couple of beefy build servers, saving your poor laptop’s fan from spinning up.
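
For reference, a single self-hosted build machine is usually declared with one line in /etc/nix/machines; the host name, user and key path below are hypothetical:

ssh://builder@build1.example.com x86_64-linux /root/.ssh/id_builder 8 1 big-parallel

The fields are, in order: the SSH URI, the platform type, the SSH identity file, the maximum number of parallel jobs, a speed factor and, optionally, the supported system features.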

However, just when you’ve tasted those sweet distributed builds, you will very likely run into the issue of scaling.

What if you need a really big server to run your builds, but only really need it once or twice per day? You’ll be wasting a lot of money keeping that build server available.

And what if you occasionally have lots and lots of builds to run, or if your whole development team wants to share the build servers? Then you probably need to add more build servers, which means more wasted money when they are not used.

So, you start looking into auto-scaling your build servers. This is quite easy to do if you use some cloud provider like AWS, Azure or GCP. But, this is where Nix will stop cooperating with you. It is really tricky to get Nix to work nicely together with an auto-scaled set of remote build machines. Nix has only a very coarse view of the “current load” of a build machine and can therefore not make very informed decisions on exactly how to distribute the builds. If there are multiple Nix instances (one for each developer in your team) fighting for the same resources, things get even trickier. It is really easy to end up in a situation where a bunch of really heavy builds are fighting for CPU time on the same build server while the other servers are idle or running lightweight build jobs.

If you use Hydra, the continuous build system for Nix, you can find scripts for using auto-scaled AWS instances, but it is still tricky to set up. And in the end, it doesn’t work perfectly, since Nix/Hydra has no notion of “consumable” CPU/memory resources, so the build scheduling is somewhat hit-and-miss.

Even if you manage to come up with a solution that can handle your workload in an acceptable manner, you now have a new job: maintaining uniquely configured build servers. Possibly for your whole company.

Through my consulting company, Immutable Solutions, I’ve done a lot of work on Nix-based deployments, and I’ve always struggled with half-baked solutions to the Nix build farm problem. This is how the idea of the nixbuild.net service was born — a service that can fill in the missing pieces of the Nix distributed build puzzle and package it as a simple, no-maintenance, cost-effective service.

Who are We?

nixbuild.net is developed and operated by me (Rickard Nilsson) and my colleague David Waern. We both have extensive experience in building Nix-based solutions, for ourselves and for various clients.

We’re bootstrapping nixbuild.net, and we are long-term committed to keep developing and operating the service. Today, nixbuild.net can be productively used for its main purpose — running Nix builds in a scalable and cost-effective way — but there are lots of things that can (and will) be built on top of and around that core. Read more about this below.

What does nixbuild.net Look Like?

To the end-user, a person or team using Nix for building software, nixbuild.net behaves just like any other remote build machine. As such, you can add it as an entry in your /etc/nix/machines file:

beta.nixbuild.net x86_64-linux - 100 1 big-parallel,benchmark

The big-parallel,benchmark part is what Nix calls system features. You can use these as a primitive scheduling strategy if you have multiple remote machines: Nix will only submit builds that have been marked as requiring a specific system feature to machines that are assigned that feature.
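
On the derivation side, such a requirement is expressed with the requiredSystemFeatures attribute. A minimal, hypothetical example:

let
  pkgs = import <nixpkgs> { system = "x86_64-linux"; };
in
pkgs.runCommand "heavy-build" {
  # only build machines that advertise the big-parallel feature qualify for this build
  requiredSystemFeatures = [ "big-parallel" ];
} ''
  # ...expensive build steps would go here...
  touch $out
''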

The number 100 in the file above tells Nix that it is allowed to submit up to 100 simultaneous builds to beta.nixbuild.net. Usually, you use this property to balance builds between remote machines, and to make sure that a machine doesn’t run too many builds at the same time. This works OK when you have rather homogeneous builds and only one single Nix client is using a set of build servers. If multiple Nix clients use the same set of build servers, this simplistic scheduling breaks down, since a given Nix client loses track of how many builds are really running on a server.

However, when you’re using nixbuild.net, you can set this number to anything really, since nixbuild.net will take care of the scheduling and scaling on its own, and it will not let multiple Nix clients step on each other’s toes. In fact each build that nixbuild.net runs is securely isolated from other builds and by default gets exclusive access to the resources (CPU and memory) it has been assigned.

Apart from setting up the distributed Nix machines, you need to configure SSH. When you register an account on nixbuild.net, you’ll provide us with a public SSH key. The corresponding private key is used for connecting to nixbuild.net. This private key needs to be readable by the user that runs the Nix build. This is usually the root user, if you have a standard Nix setup where the nix-daemon process runs as the root user.
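
One way to wire this up is an entry in that user's SSH configuration; the key location below is a hypothetical path that depends on your setup:

# /root/.ssh/config (assuming the nix-daemon runs as root)
Host beta.nixbuild.net
  IdentityFile /root/.ssh/nixbuild_key
  IdentitiesOnly yes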

That’s all there is to it. Now we can run builds using nixbuild.net!

Let’s try building the following silly build, just so we can see some action:

let pkgs = import <nixpkgs> { system = "x86_64-linux"; };

in pkgs.runCommand "silly" {} ''
  n=0
  while (($n < 12)); do
    date | tee -a $out
    sleep 10
    n=$(($n + 1))
  done
''

This build will run for 2 minutes and output the current date every ten seconds:

$ nix-build silly.nix
these derivations will be built:
  /nix/store/cy14fc13d3nzl65qp0sywvbjnnl48jf8-silly.drv
building '/nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv' on 'ssh://beta.nixbuild.net'...
Mon Feb 17 20:53:47 UTC 2020
Mon Feb 17 20:53:57 UTC 2020
Mon Feb 17 20:54:07 UTC 2020

You can see that Nix is telling us that the build is running on nixbuild.net!

The nixbuild.net Shell

nixbuild.net supports a simple shell interface that you can access through SSH. This shell allows you to retrieve information about your builds on the service.

For example, we can list the currently running builds:

$ ssh beta.nixbuild.net shell
nixbuild.net> list builds --running
10524 2020-02-17 21:05:20Z [40.95s] [Running]
      /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv

We can also get information about any derivation or nix store path that has been built:

nixbuild.net> show drv /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv
Derivation
  path = /nix/store/khvphdj3q7nyim46jk97fjp174damrik-silly.drv
  builds = 1
  successful builds = 1

Outputs
  out -> /nix/store/8c7sndr3npwmskj9zzp4347cnqh5p8q0-silly

Builds
  10524 2020-02-17 21:05:20Z [02:01] [Built]

This shell is under development, and new features are added continuously. A web-based frontend will also be implemented.

The Road Ahead

To finish up this short introduction to nixbuild.net, let’s talk a bit about our long-term goals for the service.

The core purpose of nixbuild.net is to provide Nix users with pay-per-use distributed builds that are simple to set up and integrate into any workflow. The build execution should be performant and secure.

There are a number of features that are basically just nice side effects of the design of nixbuild.net:

  • Building a large number of variants of the same derivation (a build matrix or some sort of parameter sweep) will take the same time as running a single build, since nixbuild.net can run all builds in parallel.

  • Running repeated builds to find issues related to non-determinism/reproducibility will not take longer than running a single build.

  • A whole team/company can share the same account in nixbuild.net letting builds be shared in a cost-effective way. If everyone in a team delegates builds to nixbuild.net, the same derivation will never have to be built twice. This is similar to having a shared Nix cache, but avoids having to configure a cache and perform network uploads for each build artifact. Of course, nixbuild.net can be combined with a Nix cache too, if desired.

Beyond the above, we have lots of thoughts on where we want to take nixbuild.net. I’m not going to enumerate possible directions here and now, but one big area that nixbuild.net is particularly suited for is advanced build analysis and visualisation. The sandbox that has been developed to securely isolate builds from each other also gives us a unique way to analyze exactly how a build behaves. One can imagine nixbuild.net being able to give very detailed feedback to users about build bottlenecks, performance regressions, unused dependencies etc.

With that said, our primary focus right now is to make nixbuild.net a robust workhorse for your Nix builds, enabling you to fully embrace Nix without being limited by local compute resources. Please get in touch if you want to try out nixbuild.net, or if you have any questions or comments!

by nixbuild.net (support@nixbuild.net) at February 18, 2020 12:00 AM

February 15, 2020

Sander van der Burg

A declarative process manager-agnostic deployment framework based on Nix tooling

In a previous blog post, written two months ago, I introduced a new experimental Nix-based process framework that provides the following features:

  • It uses the Nix expression language for configuring running process instances, including their dependencies. The configuration process is based on only a few simple concepts: function definitions to define constructors that generate process manager configurations, function invocations to compose running process instances, and Nix profiles to make collections of process configurations accessible from a single location.
  • The Nix package manager delivers all packages and configuration files and isolates them in the Nix store, so that they never conflict with other running processes and packages.
  • It identifies process dependencies, so that a process manager can ensure that processes are activated and deactivated in the right order.
  • The ability to deploy multiple instances of the same process, by making conflicting resources configurable.
  • Deploying processes/services as an unprivileged user.
  • Advanced concepts and features, such as namespaces and cgroups, are not required.

Another objective of the framework is that it should work with a variety of process managers on a variety of operating systems.

In my previous blog post, I was deliberately using sysvinit scripts (also known as LSB Init compliant scripts) to manage the life-cycle of running processes as a starting point, because they are universally supported on Linux and self-contained -- sysvinit scripts only require the right packages to be installed; they do not rely on external programs that manage the processes' life-cycle. Moreover, sysvinit scripts can also be conveniently used as an unprivileged user.

I have also developed a Nix function that can be used to more conveniently generate sysvinit scripts. Traditionally, these scripts are written by hand, which basically requires the implementer to write the same boilerplate code over and over again, such as the activities that start and stop the process.

The sysvinit script generator function can also be used to directly specify the implementation of all activities that manage the life-cycle of a process, such as:


{createSystemVInitScript, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createSystemVInitScript {
  name = instanceName;
  description = "Nginx";
  activities = {
    start = ''
      mkdir -p ${nginxLogDir}
      log_info_msg "Starting Nginx..."
      loadproc ${nginx}/bin/nginx -c ${configFile} -p ${stateDir}
      evaluate_retval
    '';
    stop = ''
      log_info_msg "Stopping Nginx..."
      killproc ${nginx}/bin/nginx
      evaluate_retval
    '';
    reload = ''
      log_info_msg "Reloading Nginx..."
      killproc ${nginx}/bin/nginx -HUP
      evaluate_retval
    '';
    restart = ''
      $0 stop
      sleep 1
      $0 start
    '';
    status = "statusproc ${nginx}/bin/nginx";
  };
  runlevels = [ 3 4 5 ];

  inherit dependencies instanceName;
}

In the above Nix expression, we specify five activities to manage the life-cycle of Nginx, a free/open source web server:

  • The start activity initializes the state of Nginx and starts the process (as a daemon that runs in the background).
  • stop stops the Nginx daemon.
  • reload instructs Nginx to reload its configuration.
  • restart restarts the process.
  • status shows whether the process is running or not.

Besides directly implementing activities, the Nix function invocation shown above can also be used on a much higher level -- typically, sysvinit scripts follow the same conventions. Nearly all sysvinit scripts implement the activities described above to manage the life-cycle of a process, and these typically need to be re-implemented over and over again.

We can also generate the implementations of these activities automatically from a high level specification, such as:


{createSystemVInitScript, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createSystemVInitScript {
  name = instanceName;
  description = "Nginx";
  initialize = ''
    mkdir -p ${nginxLogDir}
  '';
  process = "${nginx}/bin/nginx";
  args = [ "-c" configFile "-p" stateDir ];
  runlevels = [ 3 4 5 ];

  inherit dependencies instanceName;
}

You could basically say that the above createSystemVInitScript function invocation makes the configuration process of a sysvinit script "more declarative" -- you do not need to specify the activities that need to be executed to manage processes, but instead, you specify the relevant characteristics of a running process.

From this high level specification, the implementations for all required activities will be derived, using conventions that are commonly used to write sysvinit scripts.

After completing the initial version of the process management framework that works with sysvinit scripts, I have also been investigating other process managers. I discovered that their configuration processes have many things in common with the sysvinit approach. As a result, I have decided to explore these declarative deployment concepts a bit further.

In this blog post, I will describe a declarative process manager-agnostic deployment approach that we can integrate into the experimental Nix-based process management framework.

Writing declarative deployment specifications for managed running processes


As explained in the introduction, I have also been experimenting with other process managers than sysvinit. For example, instead of generating a sysvinit script that manages the life-cycle of a process, such as the Nginx server, we can also generate a supervisord configuration file to define Nginx as a program that can be managed with supervisord:


{createSupervisordProgram, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createSupervisordProgram {
  name = instanceName;
  command = "mkdir -p ${nginxLogDir}; " +
    "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";
  inherit dependencies;
}

Invoking the above function will generate a supervisord program configuration file, instead of a sysvinit script.

With the following Nix expression, we can generate a systemd unit file so that Nginx's life-cycle can be managed by systemd:


{createSystemdService, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createSystemdService {
  name = instanceName;
  Unit = {
    Description = "Nginx";
  };
  Service = {
    ExecStartPre = "+mkdir -p ${nginxLogDir}";
    ExecStart = "${nginx}/bin/nginx -c ${configFile} -p ${stateDir}";
    Type = "simple";
  };

  inherit dependencies;
}

What you may notice when comparing the above two Nix expressions with the last sysvinit example (that captures process characteristics instead of activities) is that they all contain very similar properties. Their main difference is a slightly different organization and naming convention, because each abstraction function is tailored towards the configuration conventions that each target process manager uses.

As discussed in my previous blog post about declarative programming and deployment, declarativity is a spectrum -- the above specifications are (somewhat) declarative because they do not capture the activities to manage the life-cycle of the process (the how). Instead, they specify what process we want to run. The process manager derives and executes all activities to bring that process in a running state.

sysvinit scripts themselves are not declarative, because they specify all activities (i.e. shell commands) that need to be executed to accomplish that goal. supervisord configurations and systemd service configuration files are (somewhat) declarative, because they capture process characteristics -- the process manager derives and executes all required activities to bring the process in a running state.

Despite the fact that I am not specifying any process management activities, these Nix expressions could still be considered somewhat of a "how specification", because each configuration is tailored towards a specific process manager. A process manager, such as sysvinit, is a means to accomplish something else: getting a running process whose life-cycle can be conveniently managed.

If I revise the above specifications to only express what kind of running process I want, disregarding the process manager, then I could simply write:


{createManagedProcess, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createManagedProcess {
  name = instanceName;
  description = "Nginx";
  initialize = ''
    mkdir -p ${nginxLogDir}
  '';
  process = "${nginx}/bin/nginx";
  args = [ "-c" configFile "-p" "${stateDir}/${instanceName}" ];

  inherit dependencies instanceName;
}

The above Nix expression simply states that we want to run a managed Nginx process (using certain command-line arguments) and that, before starting the process, we want to initialize the state by creating the log directory, if it does not exist yet.

I can translate the above specification to all kinds of configuration artifacts that can be used by a variety of process managers to accomplish the same outcome. I have developed six kinds of generators, allowing me to target the following process managers: sysvinit scripts, BSD rc scripts, supervisord, systemd, launchd and cygrunsrv.


Translating the properties of the process manager-agnostic configuration to process manager-specific properties is quite straightforward for most concepts -- in many cases, there is a direct mapping from a property in the process manager-agnostic configuration to a process manager-specific property.

For example, when we intend to target supervisord, then we can translate the process and args parameters to a command invocation. For systemd, we can translate process and args to the ExecStart property that refers to a command-line instruction that starts the process.
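
To illustrate the idea, here is a sketch of what the generated artifacts could roughly look like, under the assumption that the generators follow the conventions of the hand-written examples shown earlier; the store paths are placeholders and the actual output of the generators may differ:

# hypothetical supervisord program section
[program:nginx]
command=/bin/sh -c 'mkdir -p /var/nginx/logs; exec /nix/store/<hash>-nginx/bin/nginx -c /nix/store/<hash>-nginx.conf -p /var/nginx'

# hypothetical systemd unit fragment
[Service]
ExecStartPre=+/nix/store/<hash>-coreutils/bin/mkdir -p /var/nginx/logs
ExecStart=/nix/store/<hash>-nginx/bin/nginx -c /nix/store/<hash>-nginx.conf -p /var/nginx
Type=simple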

Although the process manager-agnostic abstraction function supports enough features to get some well known system services working (e.g. Nginx, Apache HTTP service, PostgreSQL, MySQL etc.), it does not facilitate all possible features of each process manager -- it will provide a reasonable set of common features to get a process running and to impose some restrictions on it.

It is still possible to work around the feature limitations of process manager-agnostic deployment specifications. We can also influence the generation process by defining overrides to get process manager-specific properties supported:


{createManagedProcess, nginx, stateDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createManagedProcess {
  name = instanceName;
  description = "Nginx";
  initialize = ''
    mkdir -p ${nginxLogDir}
  '';
  process = "${nginx}/bin/nginx";
  args = [ "-c" configFile "-p" "${stateDir}/${instanceName}" ];

  inherit dependencies instanceName;

  overrides = {
    sysvinit = {
      runlevels = [ 3 4 5 ];
    };
  };
}

In the above example, we have added an override specifically for sysvinit to tell the init system that the process should be started in runlevels 3, 4 and 5 (which implies the process should be stopped in the remaining runlevels: 0, 1, 2, and 6). The other process managers that I have worked with do not have a notion of runlevels.

Similarly, we can use an override to, for example, use systemd-specific features to run a process in a Linux namespace etc.
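
Assuming that the systemd override accepts the same Unit and Service attributes as the createSystemdService function shown earlier (the exact structure is an assumption on my part), enabling a bit of systemd-specific sandboxing could look like this:

overrides = {
  systemd = {
    Service = {
      # systemd-specific sandboxing options, passed on to the generated unit
      PrivateTmp = true;
      ProtectHome = true;
    };
  };
};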

Simulating process manager-agnostic concepts with no direct equivalents


For some process manager-agnostic concepts, process managers do not always have direct equivalents. In such cases, there is still the possibility to apply non-trivial simulation strategies.

Foreground processes or daemons


What all deployment specifications shown in this blog post have in common is that their main objective is to bring a process in a running state. How these processes are expected to behave is different among process managers.

sysvinit and BSD rc scripts expect processes to daemonize -- on invocation, a process spawns another process that keeps running in the background (the daemon process). After the initialization of the daemon process is done, the parent process terminates. If processes do not daemonize, the startup process execution blocks indefinitely.

Daemons introduce another complexity from a process management perspective -- when invoking an executable from a shell session in background mode, the shell can tell you its process ID, so that it can be stopped when it is no longer necessary.

With daemons, an invoked process forks another child process (or, when it is supposed to behave really well: it double forks) that becomes the daemon process. The daemon process gets adopted by the init system, and thus remains in the background even if the shell session ends.

The shell that invokes the executable does not know the PIDs of the resulting daemon processes, because that value is only propagated to the daemon's parent process, not the calling shell session. To still be able to control it, a well-behaving daemon typically writes its process ID to a so-called PID file, so that it can be reliably terminated by a shell command when it is no longer required.

sysvinit and BSD rc scripts extensively use PID files to control daemons. By using a process' PID file, the managing sysvinit/BSD rc script can tell you whether a process is running or not and reliably terminate a process instance.

"More modern" process managers, such as launchd, supervisord, and cygrunsrv, do not work with processes that daemonize -- instead, these process managers are daemons themselves that invoke processes that work in "foreground mode".

One of the advantages of this approach is that services can be more reliably controlled -- because their PIDs are directly propagated to the controlling daemon from the fork() library call, it is no longer required to work with PID files, which may not always work reliably (for example: a process might abruptly terminate and never clean up its PID file, giving the system the false impression that it is still running).

systemd improves process control even further by using Linux cgroups -- although foreground processes may be controlled more reliably than daemons, they can still fork other processes (e.g. a web service that creates processes per connection). When the controlling parent process terminates and does not properly terminate its own child processes, they may keep running in the background indefinitely. With cgroups it is possible for the process manager to retain control over all processes spawned by a service and terminate them when a service is no longer needed.

systemd has another unique advantage over the other process managers -- it can work both with foreground processes and daemons, although foreground processes seem to have the preference according to the documentation, because they are much easier to control and develop.

Many common system services, such as OpenSSH, MySQL or Nginx, have the ability to both run as a foreground process and as a daemon, typically by providing a command-line parameter or defining a property in a configuration file.

To provide an optimal user experience for all supported process managers, it is typically a good thing in the process manager-agnostic deployment specification to specify both how a process can be used as a foreground process and as a daemon:


{createManagedProcess, nginx, stateDir, runtimeDir}:
{configFile, dependencies ? [], instanceSuffix ? ""}:

let
  instanceName = "nginx${instanceSuffix}";
  nginxLogDir = "${stateDir}/${instanceName}/logs";
in
createManagedProcess {
  name = instanceName;
  description = "Nginx";
  initialize = ''
    mkdir -p ${nginxLogDir}
  '';
  process = "${nginx}/bin/nginx";
  args = [ "-p" "${stateDir}/${instanceName}" "-c" configFile ];
  foregroundProcessExtraArgs = [ "-g" "daemon off;" ];
  daemonExtraArgs = [ "-g" "pid ${runtimeDir}/${instanceName}.pid;" ];

  inherit dependencies instanceName;

  overrides = {
    sysvinit = {
      runlevels = [ 3 4 5 ];
    };
  };
}

In the above example, we have revised the Nginx expression to specify both how the process can be started as a foreground process and as a daemon. The only thing that needs to be configured differently is one global directive in the Nginx configuration file -- by default, Nginx runs as a daemon, but by adding the daemon off; directive to the configuration we can run it in foreground mode.

When we run Nginx as daemon, we configure a PID file that refers to the instance name so that multiple instances can co-exist.

To make this conveniently configurable, the above expression does the following:

  • The process parameter specifies the process that needs to be started both in foreground mode and as a daemon. The args parameter specifies common command-line arguments that both the foreground and daemon process will use.
  • The foregroundProcessExtraArgs parameter specifies additional command-line arguments that are only used when the process is started in foreground mode. In the above example, it is used to provide Nginx the global directive that disables the daemon setting.
  • The daemonExtraArgs parameter specifies additional command-line arguments that are only used when the process is started as a daemon. In the above example, it is used to provide Nginx a global directive with a PID file path that uniquely identifies the process instance.

For custom software and services implemented in languages other than C, e.g. Node.js, Java or Python, it is far less common that they have the ability to daemonize -- they can typically only be used as foreground processes.

Nonetheless, we can still daemonize foreground-only processes, by using an external tool, such as libslack's daemon command:


$ daemon -U -i myforegroundprocess

The above command daemonizes the foreground process and creates a PID file for it, so that it can be managed by the sysvinit/BSD rc utility scripts.

The opposite kind of "simulation" is also possible -- if a process can only be used as a daemon, then we can use a proxy process to make it appear as a foreground process:


export _TOP_PID=$$

# Handle the SIGTERM and SIGINT signals and forward them to the daemon process
_term()
{
    trap "exit 0" TERM
    kill -TERM "$pid"
    kill $_TOP_PID
}

_interrupt()
{
    kill -INT "$pid"
}

trap _term SIGTERM
trap _interrupt SIGINT

# Start the process in the background as a daemon
${executable} "$@"

# Wait for the PID file to become available.
# Useful to work with daemons that don't behave well enough.
count=0

while [ ! -f "${_pidFile}" ]
do
    if [ $count -eq 10 ]
    then
        echo "The PID file: ${_pidFile} does not seem to appear! Giving up!"
        exit 1
    fi

    echo "Waiting for ${_pidFile} to become available..."
    sleep 1

    count=$((count + 1))
done

# Determine the daemon's PID by using the PID file
pid=$(cat ${_pidFile})

# Wait in the background for the PID to terminate
${if stdenv.isDarwin then ''
  lsof -p $pid +r 3 &>/dev/null &
'' else if stdenv.isLinux || stdenv.isCygwin then ''
  tail --pid=$pid -f /dev/null &
'' else if stdenv.isBSD || stdenv.isSunOS then ''
  pwait $pid &
'' else
  throw "Don't know how to wait for process completion on system: ${stdenv.system}"}

# Wait for the blocker process to complete.
# We use wait, so that bash can still
# handle the SIGTERM and SIGINT signals that may be sent to it by
# a process manager
blocker_pid=$!
wait $blocker_pid

The idea of the proxy script shown above is that it runs as a foreground process as long as the daemon process is running and relays any relevant incoming signals (e.g. a terminate and interrupt) to the daemon process.

Implementing this proxy was a bit tricky:

  • In the beginning of the script we configure signal handlers for the TERM and INT signals so that the process manager can terminate the daemon process.
  • We must start the daemon and wait for it to become available. Although the parent process of a well-behaving daemon should only terminate when the initialization is done, this turns out not to be a hard guarantee -- to make the process a bit more robust, we deliberately wait for the PID file to become available, before we attempt to wait for the termination of the daemon.
  • Then we wait for the PID to terminate. The bash shell has an internal wait command that can be used to wait for a background process to terminate, but this only works with processes in the same process group as the shell. Daemons are in a new session (with different process groups), so they cannot be monitored by the shell by using the wait command.

    From this Stackoverflow article, I learned that we can use the tail command of GNU Coreutils, or lsof on macOS/Darwin, and pwait on BSDs and Solaris/SunOS to monitor processes in other process groups.
  • When a command is being executed by a shell script (e.g. in this particular case: tail, lsof or pwait), the shell script can no longer respond to signals until the command completes. To still allow the script to respond to signals while it is waiting for the daemon process to terminate, we must run the previous command in background mode, and we use the wait instruction to block the script. While a wait command is running, the shell can respond to signals.

The generator function will automatically pick the best solution for the selected target process manager -- this means that when our target process manager is sysvinit or BSD rc scripts, the generator automatically picks the configuration settings to run the process as a daemon. For the remaining process managers, the generator will pick the configuration settings that run it as a foreground process.

If a desired process model is not supported, then the generator will automatically simulate it. For instance, if we have a foreground-only process specification, then the generator will automatically configure a sysvinit script to call the daemon executable to daemonize it.

A similar process happens when a daemon-only process specification is deployed for a process manager that cannot work with it, such as supervisord.

State initialization


Another important aspect in process deployment is state initialization. Most system services require the presence of state directories in which they can store their PID, log and temp files. If these directories do not exist, the service may not work and refuse to start.

To cope with this problem, I typically make processes self-initializing -- before starting the process, I check whether the state has been initialized (e.g. check if the state directories exist) and initialize it first if needed.

With most process managers, state initialization is easy to facilitate. For sysvinit and BSD rc scripts, we just use the generator to first execute the shell commands to initialize the state before the process gets started.

Supervisord allows you to execute multiple shell commands in a single command directive -- we can just execute a script that initializes the state before we execute the process that we want to manage.

systemd has a ExecStartPre directive that can be used to specify shell commands to execute before the main process starts.

Apple launchd and cygrunsrv, however, do not have a generic shell execution mechanism or some facility allowing you to execute things before a process starts. Nonetheless, we can still ensure that the state is going to be initialized by creating a wrapper script -- first the wrapper script does the state initialization and then executes the main process.

If a state initialization procedure was specified and the target process manager does not support scripting, then the generator function will transparently wrap the main process into a wrapper script that supports state initialization.
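
Conceptually, such a generated wrapper is just a small shell script; a rough sketch (with placeholder store paths) could look like this:

#!/bin/sh
# Hypothetical generated wrapper: initialize the state first...
mkdir -p /var/nginx/logs

# ...then replace this shell with the actual foreground process, so that
# the process manager keeps controlling the correct PID.
exec /nix/store/<hash>-nginx/bin/nginx -c /nix/store/<hash>-nginx.conf -g 'daemon off;'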

Process dependencies


Another important generic concept is process dependency management. For example, Nginx can act as a reverse proxy for another web application process. To provide a functional Nginx service, we must be sure that the web application process gets activated as well, and that the web application is activated before Nginx.

If the web application process is activated after Nginx or missing completely, then Nginx is (temporarily) unable to redirect incoming requests to the web application process causing end-users to see bad gateway errors.

The process managers that I have experimented with all have a different notion of process dependencies.

sysvinit scripts can optionally declare dependencies in their comment sections. Tools that know how to interpret these dependency specifications can use them to decide the right activation order. Systems using sysvinit typically ignore this specification. Instead, they work with sequence numbers in the file names -- the scripts in each run level configuration directory carry a prefix (S or K) followed by two numeric digits that define the start or stop order.

supervisord does not work with dependency specifications, but every program can optionally provide a priority setting that can be used to order the activation and deactivation of programs -- lower priority numbers take precedence over higher priority numbers.

From dependency specifications in a process management expression, the generator function can automatically derive sequence numbers for process managers that require it.

Similar to sysvinit scripts, BSD rc scripts can also declare dependencies in their comment sections. Contrary to sysvinit scripts, BSD rc scripts can use the rcorder tool to parse these dependencies from the comments section and automatically derive the order in which the BSD rc scripts need to be activated.
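
An illustrative header fragment of such a BSD rc script (the service names are assumed) looks like this:

#!/bin/sh
#
# PROVIDE: nginx
# REQUIRE: webapp
# KEYWORD: shutdown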

cygrunsrv also allows you to directly specify process dependencies. The Windows service manager makes sure that the services get activated in the right order and that all process dependencies are activated first. The only limitation is that cygrunsrv only allows up to 16 dependencies to be specified per service.

To simulate process dependencies with systemd, we can use two properties. The Wants property can be used to tell systemd that another service needs to be activated first. The After property can be used to specify the ordering.
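
In a generated unit for the Nginx reverse proxy, this could boil down to a fragment like the following (the unit names are assumed):

[Unit]
Description=Nginx
Wants=webapp.service
After=webapp.service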

Sadly, it seems that launchd has no notion of process dependencies at all -- processes can be activated by certain events, e.g. when a kernel module was loaded or through socket activation, but it does not seem to have the ability to configure process dependencies or the activation ordering. When our target process manager is launchd, then we simply have to inform the user that proper activation ordering cannot be guaranteed.

Changing user privileges


Another general concept, that has subtle differences in each process manager, is changing user privileges. Typically for the deployment of system services, you do not want to run these services as root user (that has full access to the filesystem), but as an unprivileged user.

sysvinit and BSD rc scripts have to change users through the su command. The su command can be used to change the user ID (UID), and will automatically adopt the primary group ID (GID) of the corresponding user.

Supervisord and cygrunsrv can also only change user IDs (UIDs), and will adopt the primary group ID (GID) of the corresponding user.

Systemd and launchd can both change the user IDs and group IDs of the process that it invokes.

Because only changing UIDs is universally supported amongst process managers, I did not add a configuration property that allows you to change GIDs in a process manager-agnostic way.
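
For example, a generated sysvinit start activity that drops privileges could wrap the command roughly like this (using the util-linux su found on Linux systems; the user name and store paths are placeholders):

su nginx -s /bin/sh -c '/nix/store/<hash>-nginx/bin/nginx -c /nix/store/<hash>-nginx.conf'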

Deploying process manager-agnostic configurations


With a processes Nix expression, we can define which process instances we want to run (and how they can be constructed from source code and their dependencies):


{ pkgs ? import <nixpkgs> { inherit system; }
, system ? builtins.currentSystem
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager
}:

let
  constructors = import ./constructors.nix {
    inherit pkgs stateDir runtimeDir logDir tmpDir;
    inherit forceDisableUserChange processManager;
  };
in
rec {
  webapp = rec {
    port = 5000;
    dnsName = "webapp.local";

    pkg = constructors.webapp {
      inherit port;
    };
  };

  nginxReverseProxy = rec {
    port = 8080;

    pkg = constructors.nginxReverseProxy {
      webapps = [ webapp ];
      inherit port;
    } {};
  };
}

In the above Nix expression, we compose two running process instances:

  • webapp is a trivial web application process that will simply return a static HTML page by using the HTTP protocol.
  • nginxReverseProxy is a Nginx server configured as a reverse proxy server. It will forward incoming HTTP requests to the appropriate web application instance, based on the virtual host name. If a virtual host name is webapp.local, then Nginx forwards the request to the webapp instance.

To generate the configuration artifacts for the process instances, we refer to a separate constructors Nix expression. Each constructor will call the createManagedProcess function abstraction (as shown earlier) to construct a process configuration in a process manager-agnostic way.

With the following command-line instruction, we can generate sysvinit scripts for the webapp and Nginx processes declared in the processes expression, and run them as an unprivileged user with the state files managed in our home directory:


$ nixproc-build --process-manager sysvinit \
--state-dir /home/sander/var \
--force-disable-user-change processes.nix

By adjusting the --process-manager parameter we can also generate artefacts for a different process manager. For example, the following command will generate systemd unit config files instead of sysvinit scripts:


$ nixproc-build --process-manager systemd \
--state-dir /home/sander/var \
--force-disable-user-change processes.nix

The following command will automatically build and deploy all processes, using sysvinit as a process manager:


$ nixproc-sysvinit-switch --state-dir /home/sander/var \
--force-disable-user-change processes.nix

We can also run a life-cycle management activity on all previously deployed processes. For example, to retrieve the statuses of all processes, we can run:


$ nixproc-sysvinit-runactivity status

We can also traverse the processes in reverse dependency order. This is particularly useful to reliably stop all processes, without breaking any process dependencies:


$ nixproc-sysvinit-runactivity -r stop

Similarly, there are command-line tools to use the other supported process managers. For example, to deploy systemd units instead of sysvinit scripts, you can run:


$ nixproc-systemd-switch processes.nix

Distributed process manager-agnostic deployment with Disnix


As shown in the previous process management framework blog post, it is also possible to deploy processes to machines in a network and have inter-dependencies between processes. These kinds of deployments can be managed by Disnix.

Compared to the previous blog post (in which we could only deploy sysvinit scripts), we can now also use any process manager that the framework supports. The Dysnomia toolset provides plugins that support all process managers that this framework supports:


{ pkgs, distribution, invDistribution, system
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? "sysvinit"
}:

let
  constructors = import ./constructors.nix {
    inherit pkgs stateDir runtimeDir logDir tmpDir;
    inherit forceDisableUserChange processManager;
  };

  processType =
    if processManager == "sysvinit" then "sysvinit-script"
    else if processManager == "systemd" then "systemd-unit"
    else if processManager == "supervisord" then "supervisord-program"
    else if processManager == "bsdrc" then "bsdrc-script"
    else if processManager == "cygrunsrv" then "cygrunsrv-service"
    else throw "Unknown process manager: ${processManager}";
in
rec {
  webapp = rec {
    name = "webapp";
    port = 5000;
    dnsName = "webapp.local";
    pkg = constructors.webapp {
      inherit port;
    };
    type = processType;
  };

  nginxReverseProxy = rec {
    name = "nginxReverseProxy";
    port = 8080;
    pkg = constructors.nginxReverseProxy {
      inherit port;
    };
    dependsOn = {
      inherit webapp;
    };
    type = processType;
  };
}

In the above expression, we have extended the previously shown processes expression into a Disnix service expression, in which every attribute in the attribute set represents a service that can be distributed to a target machine in the network.

The type attribute of each service indicates which Dysnomia plugin needs to manage its life-cycle. We can automatically select the appropriate plugin for our desired process manager by deriving it from the processManager parameter.

The above Disnix expression has a drawback -- in a heterogeneous network of machines (that run multiple operating systems and/or process managers), we need to compose all desired variants of each service with configuration files for each process manager that we want to use.

It is also possible to have target-agnostic services, by delegating the translation steps to the corresponding target machines. Instead of directly generating a configuration file for a process manager, we generate a JSON specification containing all parameters that are passed to createManagedProcess. We can use this JSON file to build the corresponding configuration artefacts on the target machine:


{ pkgs, distribution, invDistribution, system
, stateDir ? "/var"
, runtimeDir ? "${stateDir}/run"
, logDir ? "${stateDir}/log"
, tmpDir ? (if stateDir == "/var" then "/tmp" else "${stateDir}/tmp")
, forceDisableUserChange ? false
, processManager ? null
}:

let
  constructors = import ./constructors.nix {
    inherit pkgs stateDir runtimeDir logDir tmpDir;
    inherit forceDisableUserChange processManager;
  };
in
rec {
  webapp = rec {
    name = "webapp";
    port = 5000;
    dnsName = "webapp.local";
    pkg = constructors.webapp {
      inherit port;
    };
    type = "managed-process";
  };

  nginxReverseProxy = rec {
    name = "nginxReverseProxy";
    port = 8080;
    pkg = constructors.nginxReverseProxy {
      inherit port;
    };
    dependsOn = {
      inherit webapp;
    };
    type = "managed-process";
  };
}

In the above services model, we have set the processManager parameter to null, causing the generator to emit JSON representations of the function parameters passed to createManagedProcess.

The managed-process type refers to a Dysnomia plugin that consumes the JSON specification and invokes the createManagedProcess function to convert the JSON configuration to a configuration file used by the preferred process manager.
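
The exact schema of these JSON files is internal to the framework, but conceptually they capture the same parameters as the Nix-level function call shown earlier; a hypothetical, simplified example for the webapp service could look like this:

{
  "name": "webapp",
  "description": "Simple web application",
  "process": "/nix/store/<hash>-webapp/bin/webapp",
  "args": [],
  "dependencies": []
}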

In the infrastructure model, we can configure the preferred process manager for each target machine:


{
  test1 = {
    properties = {
      hostname = "test1";
    };
    containers = {
      managed-process = {
        processManager = "sysvinit";
      };
    };
  };

  test2 = {
    properties = {
      hostname = "test2";
    };
    containers = {
      managed-process = {
        processManager = "systemd";
      };
    };
  };
}

In the above infrastructure model, the managed-process container on the first machine (test1) has been configured to use sysvinit scripts to manage processes. On the second test machine (test2), the managed-process container is configured to use systemd to manage processes.

If we distribute the services in the services model to targets in the infrastructure model as follows:


{infrastructure}:

{
  webapp = [ infrastructure.test1 ];
  nginxReverseProxy = [ infrastructure.test2 ];
}

and then deploy the system as follows:


$ disnix-env -s services.nix -i infrastructure.nix -d distribution.nix

Then the webapp process will be distributed to the test1 machine in the network and will be managed with a sysvinit script.

The nginxReverseProxy will be deployed to the test2 machine and managed as a systemd job. The Nginx reverse proxy forwards incoming connections to the webapp.local domain name to the web application process hosted on the first machine.

Discussion


In this blog post, I have introduced a process manager-agnostic function abstraction making it possible to target all kinds of process managers on a variety of operating systems.

By using a single set of declarative specifications, we can:

  • Target six different process managers on four different kinds of operating systems.
  • Implement various kinds of deployment scenarios: production deployments, test deployments as an unprivileged user.
  • Construct multiple instances of processes.

In a distributed-context, the advantage is that we can uniformly target all supported process managers and operating systems in a heterogeneous environment from a single declarative specification.

This is particularly useful to facilitate technology diversity -- for example, one of the key selling points of Microservices is that "any technology" can be used to implement them. In many cases, technology diversity is "restricted" to frameworks, programming languages, and storage technologies.

One particular aspect that is rarely changed is the choice of operating systems, because of the limitations of deployment tools -- most deployment solutions for Microservices are container-based and heavily rely on Linux-only concepts, such as Namespaces and cgroups.

With this process management framework and the recent Dysnomia plugin additions for Disnix, it is possible to target all kinds of operating systems that support the Nix package manager, making the operating system component selectable as well. This allows you to also pick the best operating system to implement a certain requirement -- for example, when performance is important you might pick Linux, and when there is a strong emphasis on security, you could pick OpenBSD to host a mission-critical component.

Limitations


The following overview summarizes the differences between the process manager solutions that I have investigated:

  • sysvinit: process type: daemon; process control method: PID files; scripting support: yes; process dependency management: numeric ordering; user changing capabilities: user; unprivileged user deployments: yes*; operating system support: Linux
  • bsdrc: process type: daemon; process control method: PID files; scripting support: yes; process dependency management: dependency-based; user changing capabilities: user; unprivileged user deployments: yes*; operating system support: FreeBSD, OpenBSD, NetBSD
  • supervisord: process type: foreground; process control method: process PID; scripting support: yes; process dependency management: numeric ordering; user changing capabilities: user and group; unprivileged user deployments: yes; operating system support: many UNIX-like systems (Linux, macOS, FreeBSD, Solaris)
  • systemd: process type: foreground and daemon; process control method: cgroups; scripting support: yes; process dependency management: dependency-based + dependency loading; user changing capabilities: user and group; unprivileged user deployments: yes*; operating system support: Linux (+glibc) only
  • launchd: process type: foreground; process control method: process PID; scripting support: no; process dependency management: none; user changing capabilities: user and group; unprivileged user deployments: no; operating system support: macOS (Darwin)
  • cygrunsrv: process type: foreground; process control method: process PID; scripting support: no; process dependency management: dependency-based + dependency loading; user changing capabilities: user; unprivileged user deployments: no; operating system support: Windows (Cygwin)

Although we can facilitate lifecycle management from a common specification with a variety of process managers, only the most important common features are supported.

Not every concept can be supported in a process manager-agnostic way. For example, we cannot generically isolate resources (except for packages, because we use Nix). It is difficult to generalize such concepts because they are not standardized, e.g. the POSIX standard does not describe namespaces and cgroups (or similar concepts).

Furthermore, most process managers (with the exception of supervisord) are operating system specific. As a result, it still matters what process manager is picked.

Related work


Process manager-agnostic deployment is not an entirely new idea. Dysnomia has had a target-agnostic 'process' plugin for quite a while, which translates a simple deployment specification (consisting of key-value pairs) to a systemd unit configuration file or sysvinit script.

The features of Dysnomia's process plugin are much more limited compared to the createManagedProcess abstraction function described in this blog post. It does not support any process managers other than sysvinit and systemd, and it can only work with foreground processes.

Furthermore, target-agnostic configurations cannot be easily extended -- it is possible to (ab)use the templating mechanism, but it has no first-class override facilities.

I also found a project called pleaserun that has the objective to generate configuration files for a variety of process managers (my approach and pleaserun both support sysvinit scripts, systemd and launchd).

It seems to use template files to generate the configuration artefacts, and it does not seem to have a generic extension mechanism. Furthermore, it provides no framework to configure the location of shared resources, automatically install package dependencies or to compose multiple instances of processes.

Some remaining thoughts


Although the Nix package manager (not the NixOS distribution) should be portable amongst a variety of UNIX-like systems, it turns out that the only two operating systems that are well supported are Linux and macOS. Nix was reported to work on a variety of other UNIX-like systems in the past, but recently many things seem to have broken.

To make Nix work on FreeBSD 12.1, I have used the latest stable Nix package manager version with patches from this repository. It turns out that there is still a patch missing to work around a bug in FreeBSD that incorrectly kills all processes in a process group. Fortunately, when we run Nix as an unprivileged user, this bug does not seem to cause any serious problems.

Recent versions of Nixpkgs turn out to be horribly broken on FreeBSD -- the FreeBSD stdenv does not seem to work at all. I tried switching back to stdenv-native (a stdenv environment that impurely uses the host system's compiler and core executables), but that also no longer seems to work in the last three major releases -- the Nix expression evaluation breaks in several places. Due to the sheer amount of changes and assumptions that the stdenv infrastructure currently makes, it was as good as impossible for me to fix it.

As another workaround, I reverted to a very old version of Nixpkgs (version 17.03 to be precise) that still has a working stdenv-native environment. With some tiny adjustments (e.g. adding shell aliases for the GNU variants of certain shell executables to stdenv-native), I have managed to get some basic Nix packages working, including Nginx on FreeBSD.

Surprisingly, running Nix on Cygwin was less painful than on FreeBSD (thanks to all the GNUisms that Cygwin provides). Similar to FreeBSD, recent versions of Nixpkgs also appear to be broken, including the Cygwin stdenv environment. By reverting to release-18.03 (which still has a somewhat working stdenv for Cygwin), I have managed to build a working Nginx version.

As a future improvement to Nixpkgs, I would like to propose a testing solution for stdenv-native. Although I understand that it is difficult to dedicate manpower to maintaining all unconventional Nix/Nixpkgs ports, stdenv-native is something that we can also conveniently test on Linux and prevent from breaking in the future.

Availability


The latest version of my experimental Nix-based process framework, which includes the process manager-agnostic configuration function described in this blog post, can be obtained from my GitHub page.

In addition, the repository also contains some example cases, including the web application system described in this blog post, and a set of common system services: MySQL, Apache HTTP server, PostgreSQL and Apache Tomcat.

by Sander van der Burg (noreply@blogger.com) at February 15, 2020 08:07 PM

January 29, 2020

Cachix

CDN and double storage size

Cachix -- Nix binary cache hosting -- has grown quite a bit in recent months in terms of day-to-day usage, and that growth was mostly noticeable in bandwidth: over 3000 GB were served in December 2019. The increased usage prompted a few backend machine instance upgrades to handle concurrent uploads and downloads, but it became clear that it’s time to abandon the single-machine infrastructure. As of today, all binary caches are served by the CloudFlare CDN.

by Domen Kožar (support@cachix.org) at January 29, 2020 08:00 AM

January 20, 2020

Mayflower

__structuredAttrs in Nix

In Nix 2 a new parameter to the derivation primitive was added. It changes how information is passed to the derivation builder. Current state: in order to show how it changes the handling of parameters to derivation, the first example shows the current state, with __structuredAttrs set to false and the stdenv.mkDerivation wrapper around derivation. All parameters are passed to the builder as environment variables, canonicalised by Nix in imitation of shell script conventions:
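A rough sketch of what this canonicalisation means in practice (the attribute names below are made up for illustration): a list attribute reaches the builder as a space-separated string, and a boolean as "1" or the empty string.

```nix
# A minimal sketch (with made-up attribute names) illustrating how derivation
# attributes are exposed to the builder as environment variables when
# __structuredAttrs is false.
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "structured-attrs-demo";
  myList = [ "a" "b" "c" ];  # the builder sees: myList='a b c'
  myFlag = true;             # the builder sees: myFlag='1'
  buildCommand = ''
    echo "myList = $myList"
    echo "myFlag = $myFlag"
    touch $out   # produce an (empty) output so the build succeeds
  '';
}
```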

January 20, 2020 12:00 PM

January 14, 2020

Hercules Labs

Hercules CI & Cachix split up

After careful consideration of how to balance between the two products, we’ve decided to split up. Each of the two products will be a separate entity:

  • Hercules CI becomes part of Robert Hensing’s Ensius B.V.
  • Cachix becomes part of Domen Kožar’s Enlambda OÜ

For customers there will be no changes, except for the point of contact in support requests.

Domen & Robert

January 14, 2020 12:00 AM

November 27, 2019

Mayflower

Windows-on-NixOS, part 1: Migrating bare-metal to a VM

This is part 1 of a series of blog posts explaining how we took an existing Windows installation on hardware and moved it into a VM running on top of NixOS. Background: We have a decently-equipped desktop PC sitting in our office, which is designated for data experiments using TensorFlow and such. During off-hours, it’s also used for games, and for that purpose it has Windows installed on it. We decided to try moving Windows into a VM within NixOS so that we could run both operating systems in parallel.

November 27, 2019 06:00 AM