
Creating New Build Systems

by Mitchell Schmeisser <> — February 24, 2023

In Guix, each package must specify a so-called build system, which knows how to bring a package from its inputs to deployment. Build systems are responsible for setting up the environment and performing build actions within that environment. The most ubiquitous of these is the gnu-build-system, which builds packages via the usual ./configure && make && make install process.

Any package can alter its build by removing some steps or adding extra ones. This is extremely common, and almost every package makes some adjustment to the build process. A build system in Guix hides away some of the common configuration choices. For example, there is no need to specify make or gcc as native inputs when using the gnu-build-system, because they are added implicitly when a package is lowered into a bag.
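As a quick sketch of this (the package name, URL, and hash below are hypothetical placeholders), a gnu-build-system package never mentions its toolchain:

```scheme
;; Hypothetical package; the origin URL and hash are placeholders.
;; Note that gcc, make, coreutils, etc. are never listed: the
;; gnu-build-system adds them implicitly when lowering to a bag.
(define-public my-hello
  (package
    (name "my-hello")
    (version "1.0")
    (source (origin
              (method url-fetch)
              (uri "https://example.org/my-hello-1.0.tar.gz")
              (sha256
               (base32
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system gnu-build-system)
    (synopsis "Illustrative example package")
    (description "Example showing implicit build-system inputs.")
    (license license:gpl3+)))
```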

Anatomy of a Guix Build System

The job of a build system is to compile our packages into bags. Bags are a lower-level representation of a package without all the bells and whistles (makes sense, since we are implementing them here). Bags are further refined into derivations, which the build daemon uses to create an isolated environment suitable for our build phases.

Below is how Guix defines a build system. It's surprisingly simple, with just three fields, two of which are for human consumption.

(define-record-type* <build-system> build-system make-build-system
    (name        build-system-name)         ; symbol
    (description build-system-description)  ; short description
    (lower       build-system-lower))       ; args ... -> bags

The last field, lower, is a function which takes the list of arguments given in the (package ... (arguments ...)) field. The keyword arguments we are allowed to supply when writing a package are defined by the build system.
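Concretely (a hedged sketch; the flag values here are made up), a package's arguments field supplies exactly these build-system keywords:

```scheme
;; These keywords are consumed by the build system's `lower'
;; procedure and by its build phases; the values are illustrative.
(arguments
 `(#:configure-flags '("--disable-static")
   #:tests? #f))
```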

Code Strata

Guix builds are implemented in two parts.

  1. Code that compiles packages->derivations.

Derivations are the language the Guix daemon speaks. They describe everything about how to derive our package: the inputs, the environment, and all the code that drives the build tools. This code runs in a loosely defined "user" environment, and the derivations Guix produces can actually be influenced by undeclared aspects of that environment, like manually installed Guile packages or code added with the `-L` flag.

  2. The Guix daemon runs the builder code in an isolated and reproducible build environment to produce the package from its inputs.

    This code is executed in an explicitly defined build environment with nothing being introduced from the host.

Code that runs in the host environment stages the code which will run in isolation. This is where G-expressions really shine: they provide the syntax to describe this relationship.
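A minimal sketch of this staging (the file name is arbitrary): host-side values escape into build-side code with #$ (ungexp):

```scheme
(use-modules (guix gexp))

;; Everything inside #~(...) is staged: it runs later, in the
;; isolated build environment.  #$output is substituted on the
;; host side with the package's store output path.
(define builder
  #~(begin
      (mkdir #$output)
      (call-with-output-file (string-append #$output "/greeting")
        (lambda (port)
          (display "hello from the build side" port)))))
```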

Build Phases

All programs are built more or less the same way.

  1. Unpack the source code.

    Whether it's a tarball or a version-controlled repository, the Guix daemon must copy the software's source tree into our build environment.

  2. Patch the shebangs.

    Many projects contain scripts written to aid the build process. On Linux, executable scripts can contain a so-called shebang: an absolute path to the program that is meant to interpret them, e.g. #!/bin/sh. Most distributions provide a POSIX-compliant shell interpreter at this location. Guix System does not. Instead, Guix System's sh is yet another component in the store, so all of these files must be patched to point to the new location, which is only known at build time. We cannot rely on PATH to convey this information, because the kernel does not consult it when resolving a shebang.

  3. Configure the build.

    Whether it is autotools, CMake, ninja, etc., if you rely on tools from the host system, then you need a step that lets the host system tell you where to find those tools.

  4. Patch the generated shebangs.

    Sometimes the configure phase creates scripts that run during the build phase. These often contain references to /bin/sh, so Guix must patch them as well.

  5. Build!

    That's right, folks: we are off to the races, and the program is building. Usually this takes the form of a call to make with a handful of flags and last-minute definitions, but there are other, more exotic methods.

  6. Test.

    Now that Guix has built our software, we should test it before we send updates to users. This helps the Guix project catch and eliminate bugs before they impact users. Not all packages have tests, and Guix developers occasionally disable some packages' test suites, but Guix policy is to run the software's test suite if it exists.

  7. Install.

    Now that we've verified everything works as expected, it is time to run make install or the equivalent. This phase copies our build artifacts into their final resting place in the store.

  8. Validate the Installation.

    Here Guix checks that our outputs do not contain references to store paths that are not specified by the package's inputs. Such references would lead to incomplete deployment, harm reproducibility, and would be "bad".

There are more steps, but they are not universally applicable. Of course, no generic model such as this captures all the chaos of the world, and every package has exceptions to it.
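Steps 2 and 4 above rely on a real helper from Guix's (guix build utils) module, patch-shebang. A sketch of applying it by hand (the file pattern is illustrative):

```scheme
(use-modules (guix build utils))

;; Rewrite `#!/bin/sh'-style lines in each matching script to point
;; at the interpreter found in the build environment's search path.
(for-each patch-shebang
          (find-files "." "\\.sh$"))
```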

Guix implements build phases as a list of named procedures (lambdas), so a package can modify this list, adding, deleting, or replacing phases as it requires. This is so common that Guix provides a syntax for manipulating build phases: modify-phases.
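A hedged sketch of what this looks like inside a package's arguments field (the phase name and the patched file are made up):

```scheme
;; Hypothetical arguments field: drop the test suite and patch a
;; hard-coded shell path after unpacking.
(arguments
 `(#:phases
   (modify-phases %standard-phases
     (delete 'check)
     (add-after 'unpack 'fix-shell-path
       (lambda _
         (substitute* "Makefile"
           (("/bin/sh") (which "sh"))))))))
```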

A build system contains a default set of phases called %standard-phases. Every build system starts from the gnu-build-system's %standard-phases and modifies them to its needs. The result is then provided to the packages using that build system.

The cmake-build-system is nearly identical to the gnu-build-system except for two phases.

;; Excerpt from `guix/build/cmake-build-system.scm`
(define %standard-phases
  ;; Everything is the same as the GNU Build System, except for the `configure'
  ;; and 'check' phases.
  (modify-phases gnu:%standard-phases
    (delete 'bootstrap)
    (replace 'check check)
    (replace 'configure configure)))

The Zephyr Build System

Zephyr uses CMake to perform physical component composition. It searches the filesystem and generates scripts which the toolchain will use to combine those components into a firmware image.

The fact that Zephyr provides this mechanism is one reason I chose to target zephyr in the first place.

This separation of projects in an embedded context is a really great thing. It brings many of the advantages of the Linux world, such as code re-use, smaller binaries, and more efficient cache/RAM usage. It also allows us to work as independent groups and compose contributions from many teams.

It also brings all of the complexity. Suddenly all of the problems that plague traditional deployment now apply to our embedded system. The fact that the libraries are statically linked at compile time instead of dynamically at runtime is simply an implementation detail.

We now have dependencies which we must track and compose in the correct environments. The Zephyr Project offers a tool to help manage this new complexity: The West Meta-tool! To call it a "meta-tool" is an abuse of language. It is not even meta enough to bootstrap itself and relies on other package managers to get started. In fact, it is not even suitable for managing multiple embedded projects as a composition! The project recommends Yet Another Tool for this very common use case.

In fact, West does not really bring anything new to the table, and we can replace it the same way we can replace every other language-specific package manager, like npm (JavaScript), Cabal (Haskell), Dub (D), etc. PUTTING SOFTWARE ON THE SYSTEM IS THE JOB OF THE PACKAGE MANAGER!

West is the way it is for a reason: it is practical to design the package manager this way, because it enables Windows users to access the build environment. A more in-depth discussion of the material conditions which led to this or that design decision of the West tool is beyond the scope of this post. Let's instead explain how to provide a meta-tool and bootstrap complex embedded systems, from the ground up, in a flexible, composable, and reproducible way.

Why not use cmake-build-system?

A fair question! Zephyr's CMake scripts require additional information about the build to be provided at configure time. Most tedious for Zephyr packages is the ZEPHYR_MODULES variable, which must be formatted and passed to CMake on the command line.

Host Side

Our job at this level is to take packages described using the package syntax and lower them into a derivation the guix-daemon can understand.

Here is the derivation for building hello world for the frdm_k64f (hashes removed for brevity). The package syntax provides a human friendly veneer over this garbage.


Lowering packages to bags

We must provide the build system with a lower function of the following lambda form.

(lambda* (name #:key #:allow-other-keys) ...)

This means it takes one required argument, name, and any number of keyword arguments. Individual procedures can specify the keys they are interested in, such as inputs or outputs.

Which keys are ultimately supported is defined by our lower function and our build phases.
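A small illustration of this calling convention in plain Guile (the toy procedure and the #:board keyword are made up for the example):

```scheme
;; define* is built into Guile; #:allow-other-keys lets callers pass
;; keywords this procedure does not name.
(define* (toy-lower name #:key (system #f) #:allow-other-keys)
  (list name system))

(toy-lower "hello" #:system "x86_64-linux" #:board "frdm_k64f")
;; ⇒ ("hello" "x86_64-linux")
```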

;; Use module-ref instead of referencing the variables directly
;; to avoid circular dependencies.
(define %zephyr-build-system-modules
  ;; Build-side modules imported by default.
  `((mfs build zephyr-build-system)
    ,@%cmake-build-system-modules))

(define default-zephyr-base
  (module-ref (resolve-interface '(mfs packages zephyr)) 'zephyr))

(define default-zephyr-sdk
  (module-ref (resolve-interface '(mfs packages zephyr)) 'zephyr-sdk))

(define default-ninja
  (module-ref (resolve-interface '(gnu packages ninja)) 'ninja))

(define default-cmake
  (module-ref (resolve-interface '(gnu packages cmake)) 'cmake))

(define* (lower name
                #:key source inputs native-inputs outputs system target
                (zephyr default-zephyr-base)
                (sdk default-zephyr-sdk)
                (ninja default-ninja)
                (cmake default-cmake)
                #:rest arguments)
  "Return a bag for NAME."
  (define private-keywords
    '(#:zephyr #:sdk #:ninja #:cmake #:inputs #:native-inputs #:target))
  (bag
    (name name)
    (system system)
    (target target)
    (build-inputs `(,@(if source `(("source" ,source)) '())
                    ("cmake" ,cmake)
                    ("zephyr-sdk" ,sdk)
                    ("zephyr" ,zephyr)
                    ("ninja" ,ninja)
                    ,@native-inputs))
    ;; Inputs need to be available at build time
    ;; since everything is statically linked.
    (host-inputs inputs)
    (outputs outputs)
    (build zephyr-build)
    (arguments (strip-keyword-arguments private-keywords arguments))))

Here our lower function provides default values for the packages every Zephyr package needs (the SDK, CMake, Ninja, and ZEPHYR_BASE) and adds them to the build-inputs.

Notice we also strip out some keywords which do not get passed to the build function, because they are consumed by the broader abstractions the build system provides.

At this step it would be great to have a parser which could work out the required SDK from a .config, but this requires compiling the Kconfig, which requires at least the SDK's CMake files. There might be a way to make it happen, but until then, if a board needs a different SDK, it can be specified with an argument keyword.

Lowering Bags to Derivations

Here is the definition of the actual build procedure. There is a lot of abstract trickery going on here, so do not worry if you don't understand it; I barely understand it! It's mostly copied and pasted from the CMake build system.

(define* (zephyr-build name inputs
                       #:key guile source
                       (outputs '("out")) (configure-flags ''())
                       (search-paths '())
                       (make-flags ''())
                       (out-of-source? #t)
                       (board #f)
                       (tests? #f)
                       (test-target "test")
                       (parallel-build? #t) (parallel-tests? #t)
                       (validate-runpath? #f)
                       (patch-shebangs? #t)
                       (phases '%standard-phases)
                       (system (%current-system))
                       (substitutable? #t)
                       (imported-modules %zephyr-build-system-modules)

                       ;; The modules referenced here contain code
                       ;; which will be staged in the build environment
                       ;; with us.  Our build gexp down below will only
                       ;; be able to access this code, and we must be
                       ;; careful not to reference anything else.
                       (modules '((zephyr build zephyr-build-system)
                                  (guix build utils))))
  "Build SOURCE using CMake, and with INPUTS.  This assumes that SOURCE
provides a 'CMakeLists.txt' file as its build system."

  ;; This is the build gexp.  It handles staging values from our host
  ;; system into code that our build system can run.
  (define build
    (with-imported-modules imported-modules
      #~(begin
          (use-modules #$@(sexp->gexp modules))
          #$(with-build-variables inputs outputs
              #~(zephyr-build #:source #+source
                              #:system #$system
                              #:outputs %outputs
                              #:inputs %build-inputs
                              #:board #$board
                              #:search-paths '#$(sexp->gexp
                                                 (map search-path-specification->sexp
                                                      search-paths))
                              #:phases #$(if (pair? phases)
                                             (sexp->gexp phases)
                                             phases)
                              #:configure-flags #$(if (pair? configure-flags)
                                                      (sexp->gexp configure-flags)
                                                      configure-flags)
                              #:make-flags #$make-flags
                              #:out-of-source? #$out-of-source?
                              #:tests? #$tests?
                              #:test-target #$test-target
                              #:parallel-build? #$parallel-build?
                              #:parallel-tests? #$parallel-tests?
                              #:validate-runpath? #$validate-runpath?
                              #:patch-shebangs? #$patch-shebangs?
                              #:strip-binaries? #f)))))

  (mlet %store-monad ((guile (package->derivation (or guile (default-guile))
                                                  system #:graft? #f)))
    (gexp->derivation name build
                      #:system system
                      #:target #f
                      #:graft? #f
                      #:substitutable? substitutable?
                      #:guile-for-build guile)))

Finally we define our build system which the package definitions can reference.

(define zephyr-build-system
  (build-system
    (name 'zephyr)
    (description "The standard Zephyr build system")
    (lower lower)))

Easy right?

Build Side

The build side is not as complex as you might initially expect. Our build system is almost exactly the same as the CMake build system, except that our configure phase passes different values to CMake. Our job is much easier.

Locating Modules

Zephyr's CMake scripts require the modules needed for the build to be supplied on the command line. Unfortunately for us, the documentation is wrong and the ZEPHYR_MODULES environment variable is not honored, so we must implement some other solution for locating modules until that is fixed.

Input Scanning - Lucky for us, we keep detailed information about our dependencies, so it is a simple matter to write a file tree walker which collects all the Zephyr modules in our inputs.

(define* (find-zephyr-modules directories)
  "Return the list of directories containing zephyr/module.yml found
under DIRECTORIES, recursively.  Return the empty list if none is
accessible."
  (define (module-directory file)
    (dirname (dirname file)))

  (define (enter? name stat result)
    ;; Skip version control directories.
    ;; Shouldn't be in the store but you never know.
    (not (member (basename name) '(".git" ".svn" "CVS"))))

  (define (leaf name stat result)
    ;; Add the module root directory to the results.
    (if (and (string=? "module.yml" (basename name))
             (string=? "zephyr" (basename (dirname name))))
        (cons (module-directory name) result)
        result))

  (define (down name stat result) result)
  (define (up name stat result) result)
  (define (skip name stat result) result)

  (define (find-modules directory)
    (file-system-fold enter? leaf down up skip error
                      '() (canonicalize-path directory)))

  (append-map find-modules directories))

(define (zephyr-modules-cmake-argument modules)
   "Return a proper CMake list from MODULES, a list of filepaths"
   (format #f "-DZEPHYR_MODULES='~{~a~^;~}'" modules))

Here are two functions. The first one, find-zephyr-modules, walks a list of directories (package inputs) and returns a list of every module it finds. The second is just syntactic convenience for writing the CMake invocation. This is also slightly more robust than West's module discovery, because it allows a single repository to provide multiple modules, which are not technically required to be at the top level.
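For example, with two hypothetical store paths as inputs:

```scheme
(zephyr-modules-cmake-argument
 '("/gnu/store/aaa-hal-cmsis-5.8.0" "/gnu/store/bbb-hal-nxp-3.1.0"))
;; ⇒ "-DZEPHYR_MODULES='/gnu/store/aaa-hal-cmsis-5.8.0;/gnu/store/bbb-hal-nxp-3.1.0'"
```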

From here we just need to provide alternate implementations of configure and install.

(define* (configure #:key outputs (configure-flags '())
                    (build-type "RelWithDebInfo")
                    inputs (out-of-source? #t)
                    #:allow-other-keys)
  "Configure the given package."
  (let* ((out        (assoc-ref outputs "out"))
         (abs-srcdir (getcwd))
         (srcdir     (if out-of-source?
                         (string-append "../" (basename abs-srcdir))
                         ".")))
    (format #t "source directory: ~s (relative from build: ~s)~%"
            abs-srcdir srcdir)
    (when out-of-source?
      (mkdir "../build")
      (chdir "../build"))
    (format #t "build directory: ~s~%" (getcwd))

    ;; This is required because zephyr tries to optimize
    ;; future calls to the build scripts by keeping a cache.
    (setenv "XDG_CACHE_HOME" (getcwd))

    (let ((args `(,srcdir
                  ,@(if build-type
                        (list (string-append "-DCMAKE_BUILD_TYPE="
                                             build-type))
                        '())
                  ;; Enable verbose output from builds.
                  "-DCMAKE_VERBOSE_MAKEFILE=ON"
                  ,(zephyr-modules-cmake-argument
                    (find-zephyr-modules (map cdr inputs)))
                  ,@configure-flags)))
      (format #t "running 'cmake' with arguments ~s~%" args)
      (apply invoke "cmake" args))))

(define* (install #:key outputs #:allow-other-keys)
  (let* ((out (string-append (assoc-ref outputs "out") "/lib/firmware"))
         (dbg (string-append (assoc-ref outputs "debug") "/share/zephyr")))
    (mkdir-p out)
    (mkdir-p dbg)
    ;; Keep the Kconfig result and map file with the "debug" output.
    (copy-file "zephyr/.config" (string-append dbg "/config"))
    (copy-file "zephyr/zephyr.map" (string-append dbg "/zephyr.map"))
    (copy-file "zephyr/zephyr.elf" (string-append out "/zephyr.elf"))
    (copy-file "zephyr/zephyr.bin" (string-append out "/zephyr.bin"))))

;; Define new standard-phases
(define %standard-phases
  (modify-phases cmake:%standard-phases
    (replace 'configure configure)
    (replace 'install install)))

;; Call cmake build with our new phases
(define* (zephyr-build #:key inputs (phases %standard-phases)
               #:allow-other-keys #:rest args)
  (apply cmake:cmake-build #:inputs inputs #:phases phases args))

One thing to note is the "debug" output. This exists so we don't retain references to our build environment and make the file system closure huge. If you put all of the build outputs in the same store path, then the deployment closure will grow from 2MB to 833MB.

Defining Zephyr Packages

Now that we have a proper build system, it's time to define some packages!

Zephyr Base

Zephyr base contains the Zephyr source code. It is equivalent (in my mind anyway) to the Linux kernel, in that package definitions targeting it can be minimal.

The selection of operating system actually comes from the toolchain. For example, we build Linux packages with a GNU triplet: when we select arm-linux-gnueabihf, we are specifying our operating system.

It is the same for Zephyr: when we build for Zephyr, we use the arm-zephyr-eabi toolchain. However, unlike Linux applications, Zephyr applications are embedded firmware images and are generally statically linked. Thus this package consists only of its source code and is not compiled directly. We cannot compile it now, because applications/modules provide the required Kconfig options.

(define-public zephyr
  (let ((version "3.1.0")
        (commit "zephyr-v3.1.0"))
    (package
      (name "zephyr")
      (version version)
      (home-page "")
      (source (origin
                (method git-fetch)
                (uri (git-reference
                      (url "")
                      (commit commit)))
                (file-name (git-file-name name version))
                (sha256
                 (base32 "1yl5y9757xc3l037k3g1dynispv6j5zqfnzrhsqh9cx4qzd485lx"))
                ;; This patch makes this package work in a symlinked profile.
                (patches
                 (search-patches "zephyr-3.1-linker-gen-abs-path.patch"))))
      (build-system copy-build-system)
      (arguments
       `(#:install-plan '(("." "zephyr-workspace/zephyr"))
         #:phases
         (modify-phases %standard-phases
           (add-after 'unpack 'patch-cmake-scripts
             (lambda* _
               (format #t "~a~&" (getcwd))
               ;; Some cmake scripts assume the presence of a
               ;; git repository in the source directory.
               ;; We will just hard-code that information now.
               (substitute* "CMakeLists.txt"
                 (("if\\(DEFINED BUILD_VERSION\\)" all)
                  (format #f "set(BUILD_VERSION \"~a-~a\")~&~a"
                          ,version ,commit all))))))))
      (propagated-inputs
       (list python-3))
      (native-search-paths
       (list (search-path-specification
              (variable "ZEPHYR_BASE")
              (files '("zephyr-workspace/zephyr")))))
      (synopsis "Source code for the Zephyr RTOS")
      (description "Zephyr RTOS source code.")
      (license license:asl2.0))))

(define-public zephyr-3.2.0-rc3
  (package
    (inherit zephyr)
    (version "3.2.0-rc3")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "")
                    (commit "v3.2.0-rc3")))
              (file-name (git-file-name "zephyr" version))
              (sha256
               (base32 "06ksd9zj4j19jq0zg3lms13jx0gxzjc41433zgb91cnd2cqmn5cb"))
              (patches
               (search-patches "zephyr-3.1-linker-gen-abs-path.patch"))))))

Here we use the copy-build-system, which takes a list of source/destination pairs. In our case, we just copy everything to the output directory, but not before patching some files to accommodate our special environment.
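The copy-build-system's install plan is simply a list of (SOURCE TARGET) pairs; a small hedged sketch (the second entry is a made-up example, not part of the zephyr package):

```scheme
;; Each entry copies SOURCE (a file or directory, relative to the
;; source tree) to TARGET (relative to the output path).
'(("." "zephyr-workspace/zephyr")    ; the whole tree, as above
  ("LICENSE" "share/doc/zephyr/"))   ; hypothetical extra entry
```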

While developing this, I wanted to test some toolchain/board features on the latest release of Zephyr. I included an example of that package definition to show how we can easily accommodate side-by-side package variants and experiment without breaking anything.


It's finally time to define some firmware! Zephyr ships some examples in $ZEPHYR_BASE/samples, including a basic hello world. The k64 development board is already supported by Zephyr, so building the example is trivial.

In order to actually target the k64 we need two modules: the NXP hardware abstraction layer and CMSIS. Looking at $ZEPHYR_BASE/west.yml, we can see the repositories and commits which contain these modules. This is how West does dependency management.

Defining these packages is not so bad (see footnote 1).

(define-public hal-cmsis
  (package
    (name "hal-cmsis")
    (version "5.8.0")
    (home-page "")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "")
                    (commit "093de61c2a7d12dc9253daf8692f61f793a9254a")))
              (file-name (git-file-name name version))
              (sha256
               (base32 "0f7cipnwllna7iknsnz273jkvrly16yr6wm4y2018i6njpqh67wi"))))
    (build-system zephyr-module-build-system)
    (arguments `(#:workspace-path "/modules/hal/cmsis"))
    (synopsis "Zephyr module providing the Common Microcontroller
Software Interface Standard")
    (description "Zephyr module providing the Common Microcontroller
Software Interface Standard.")
    (license license:asl2.0)))

(define-public hal-nxp
  (package
    (name "hal-nxp")
    (version "3.1.0")
    (home-page "")
    (source (origin
              (method git-fetch)
              (uri (git-reference
                    (url "")
                    (commit "708c95825b0d5279620935a1356299fff5dfbc6e")))
              (file-name (git-file-name name version))
              (sha256
               (base32 "1c0i26bpk6cyhr1q4af183jclfmxshv4d15i7k3cz7brzb12m8q1"))))
    (build-system zephyr-module-build-system)
    (arguments `(#:workspace-path "/modules/hal/nxp"))
    (native-search-paths
     (list (search-path-specification
            (variable "ZEPHYR_MODULES")
            (files `(,(string-append %zephyr-workspace-name
                                     "/modules/hal/nxp")))
            (separator ";"))))
    (synopsis "Zephyr module for NXP Hardware Abstraction Layer")
    (description "Provides sources for the NXP HAL Zephyr module.")
    (license license:bsd-3)))

With these two modules defined we can write zephyr-hello-world-frdm-k64f.

Hello world

(define-public zephyr-hello-world-frdm-k64f
  (package
    (name "zephyr-hello-world-frdm-k64f")
    (version (package-version zephyr))
    (home-page "")
    (source (file-append (package-source zephyr)
                         "/samples/hello_world"))
    (build-system zephyr-build-system)
    (arguments
     '(#:configure-flags '("-DBOARD=frdm_k64f")))
    (outputs '("out" "debug"))
    (inputs
     (list hal-cmsis
           hal-nxp))
    (synopsis "Hello world example from the Zephyr Project")
    (description "Sample package for the Zephyr Project.")
    (license license:asl2.0)))


Our above definition can be built with the following command. (When testing package definitions, I use the -L flag to point at the local repository holding the new packages; I will omit that flag here, as if I had successfully run guix pull from guix-zephyr.)

guix build zephyr-hello-world-frdm-k64f


This doesn't actually fully test our toolchain: by default, the hello world example uses Zephyr's minimal C library implementation and does not link against newlib.

(define-public zephyr-hello-world-newlib-frdm-k64f
  (package
    (inherit zephyr-hello-world-frdm-k64f)
    (name "zephyr-hello-world-newlib-frdm-k64f")
    (arguments
     (substitute-keyword-arguments
         (package-arguments zephyr-hello-world-frdm-k64f)
       ((#:configure-flags flags)
        ;; Select newlib via Kconfig on the CMake command line.
        `(cons "-DCONFIG_NEWLIB_LIBC=y" ,flags))))))

    guix build zephyr-hello-world-newlib-frdm-k64f



Further Musings

One thing I learned while going through the pains of getting this working is that even though the components are "modular", there are still a lot of rigid interdependencies, especially on the Zephyr base. Just having two versions of Zephyr in the code base made component composition fragile. Modules rely on specific features from the kernel. This is normally hidden from developers by West automagically walking the `west.yml` of all of the declared dependencies, recursively, to discover the graph.

While there are many benefits to a modularized build system, a monolithic build system like Zephyr's has many benefits too.

Part of the problem comes from the domain itself. If you really want to be able to target the most resource constrained systems and deal with the "industrial mess" that comes from every board being unique, you have to be as generic and flexible as possible, which is hard in a guix-like modular build system.

Superficially, the problem is solved the same way Linux solved it: using device trees and a very stable userland interface. However, unlike Linux, where the device tree is compiled to a binary blob and interpreted by drivers at runtime, Zephyr device trees are compiled to a .h file and are mostly interpreted by the C preprocessor using an elaborate set of macros.

It goes beyond simply abstracting the hardware using clever hacks. Zephyr applications (and any zephyr module) can also introduce new "kernel" code, configuration options, and even linker script fragments at build time. Essentially the Zephyr CMake build system acts like a reverse ld. Instead of linking libraries after compilation, it discovers these things before gcc is ever invoked and provides additional code generation steps.

Zephyr does not have a stable "userland" interface for the same reason Linux does not have a stable "kernel module" interface. Because Zephyr applications are so tightly coupled to the hardware they run on, it is not uncommon to bypass Zephyr utilities and directly touch hardware and memory.

In this way they are more related to kernel modules than userspace applications such as GNU Hello.

Perhaps there is a lispy way to track several Zephyr releases without reducing the ability to freely modify components in the usual ways... I invite you, dear reader, to develop code to explore that possibility.

It is usable for Guix anyway.

Why a Special Build System? Why not --target=frdm_k64f?

That is a fair question! At first glance you might imagine the following incantation:

guix build --target=arm-zephyr-eabi hello

The problem with calling the toolchain directly is that the architecture is determined by the board selection. It is not generally useful to compile a board's firmware for a different architecture.

Perhaps something like this, then.

guix build --target=frdm_k64f hello

The above command would tell GNU Hello to link against arm-zephyr-newlib and run on a specific board. The problem is that while this may work for GNU Hello, it will not work for anything that requires inputs discovered by the normal methods. Only packages which target Zephyr explicitly could benefit from such an interface, and at that point you may as well record which board is being targeted in the package definition.

In general, not all Zephyr applications can run on every board Zephyr supports, so the usefulness of the above command is dubious.

I think if you have firmware which targets multiple boards it is better to define a package for every board. It is likely every board will require special configuration flags anyway.


Zephyr has a very complex build process which can be difficult to understand and frustrating to set up.

Using Guix to define firmware packages makes these problems disappear. It is trivial to create a channel which contains all of the quirks of your platform and share it with a team or student.

Packages defined this way serve as a reproducible starting point for future hardware revisions and variations.


  1. I also made a zephyr-module-build-system, which is just the copy-build-system with an install plan that mimics the default Zephyr workspace layout as provided by West. This way we do not need to provide the same install plan for every module we package. However, as I use the copy-build-system more often, it doesn't really provide much over just using the copy-build-system directly.