# System manager

[system-manager](https://github.com/numtide/system-manager) provides declarative, Nix-based system configuration management for non-NixOS Linux systems.
It replaces imperative service setup with reproducible Nix module definitions, bringing NixOS-style service management to the AMI build without requiring a full NixOS installation.

## How it fits into the AMI build pipeline

The AMI build uses a two-stage pipeline orchestrated by Packer and Ansible.
Stage 1 installs Nix itself, while stage 2 uses Nix to build and deploy all services.
system-manager is deployed during stage 2 via the Ansible task `ansible/tasks/setup-system-manager.yml`:

```yaml
- name: Deploy system manager
  ansible.builtin.shell: |
    . /nix/var/nix/profiles/default/etc/profile.d/nix-daemon.sh
    cd /tmp
    nix run --accept-flake-config /tmp/flake#system-manager -- switch --flake /tmp/flake 2>&1 | tee /tmp/system-manager-deploy.log
  become: true
```

This sources the Nix daemon profile, then runs `system-manager switch` against the flake to apply the declared system configuration.

## Nix configuration walkthrough

### Flake input

The system-manager flake input is declared in `flake.nix` (lines 34-35), pinned to the upstream repository with nixpkgs following the main input:

```nix
system-manager.inputs.nixpkgs.follows = "nixpkgs";
system-manager.url = "github:numtide/system-manager";
```

The flake outputs import both the module registry and the system configurations:

```nix
imports = [
  # ...
  nix/systemModules
  nix/systemConfigs.nix
];
```

### System configurations

`nix/systemConfigs.nix` defines the top-level system configurations for each supported architecture.
It calls `system-manager.lib.makeSystemConfig` to produce a configuration from the enabled modules:

```nix
mkSystemConfig = system: {
  name = system;
  value.default = inputs.system-manager.lib.makeSystemConfig {
    modules = mkModules system;
    extraSpecialArgs = {
      inherit self;
      inherit system;
    };
  };
};
```

The `mkModules` function returns the list of modules to enable.
Currently it enables the nginx service and sets the host platform:

```nix
mkModules = system: [
  {
    services.nginx.enable = true;
    nixpkgs.hostPlatform = system;
  }
];
```

Configurations are built for both `aarch64-linux` and `x86_64-linux`.
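
Given that `mkSystemConfig` returns a `{ name; value; }` pair, the two configurations could be produced by mapping it over the supported systems — a sketch, assuming `builtins.listToAttrs` is the helper used (the actual wiring in `systemConfigs.nix` may differ):

```nix
# Sketch: builds flake.systemConfigs.<system>.default for each
# supported architecture; exact attribute path is an assumption.
flake.systemConfigs = builtins.listToAttrs (
  map mkSystemConfig [
    "aarch64-linux"
    "x86_64-linux"
  ]
);
```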

### System modules

`nix/systemModules/default.nix` is the module registry.
It is a flake-parts module that exports individual system modules under `flake.systemModules`:

```nix
{
  imports = [ ./tests ];
  flake = {
    systemModules = {
      nginx = flake-parts-lib.importApply ./nginx.nix { inherit withSystem self; };
    };
  };
}
```

Each module is loaded with `flake-parts-lib.importApply`, which passes `withSystem` and `self` as arguments to the module file.
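
Because of `importApply`, a module file such as `nginx.nix` is a function of those extra arguments that returns an ordinary module — a sketch of the expected shape (the inner argument names beyond the standard module arguments are illustrative):

```nix
# Outer function receives the importApply arguments;
# the inner function is a regular NixOS-style module.
{ withSystem, self }:
{ config, lib, pkgs, ... }:
{
  # options and config as usual
}
```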

## Adding a new system module

To add a new system module:

1. Create a new `.nix` file under `nix/systemModules/`, for example `nix/systemModules/my-service.nix`.
   The module is a standard NixOS-style module with options and config:

   ```nix
   {
     lib,
     config,
     ...
   }:
   let
     cfg = config.supabase.services.my-service;
   in
   {
     options = {
       supabase.services.my-service = {
         # mkEnableOption prepends "Whether to enable" automatically,
         # so only the service description is given here.
         enable = lib.mkEnableOption "the my-service systemd service";
       };
     };

     config = lib.mkIf cfg.enable {
       # systemd units, environment.etc entries, etc.
     };
   }
   ```

2. Register the module in `nix/systemModules/default.nix` by adding it to the `systemModules` attribute set:

   ```nix
   systemModules = {
     my-service = ./my-service.nix;
   };
   ```

   If the module file needs `withSystem` or `self`, wrap it with `flake-parts-lib.importApply` as the `nginx` entry does.

3. Include and enable the module in `nix/systemConfigs.nix` by adding it to the `mkModules` list and setting the enable option:

   ```nix
   mkModules = system: [
     self.systemModules.my-service
     {
       services.nginx.enable = true;
       supabase.services.my-service.enable = true;
       nixpkgs.hostPlatform = system;
     }
   ];
   ```

4. Add a test assertion to the test script in `nix/systemModules/tests/default.nix` (see below).

## Testing

### Container tests

Tests are defined in `nix/systemModules/tests/default.nix` using `system-manager.lib.containerTest.makeContainerTest`.
This creates a lightweight container-based NixOS test that validates the system configuration:

```nix
check-system-manager =
  let
    toplevel = self.systemConfigs.${pkgs.system}.default;
  in
  inputs.system-manager.lib.containerTest.makeContainerTest {
    hostPkgs = pkgs;
    name = "check-system-manager";
    inherit toplevel;
    testScript = ''
      start_all()

      machine.wait_for_unit("multi-user.target")

      machine.activate()
      machine.wait_for_unit("system-manager.target")

      with subtest("Verify nginx service"):
          assert machine.service("nginx").is_running, "nginx should be running"
    '';
  };
```

The test script starts the container, waits for systemd to reach `multi-user.target`, activates the system-manager configuration, then verifies that managed services are running.
When adding a new module, extend the `testScript` with an additional `subtest` block that asserts the new service is running.
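
For example, a hypothetical `my-service` module could be checked with a block mirroring the nginx assertion already in the `testScript` (the service name is illustrative):

```python
# Added inside the testScript string, after the nginx subtest.
with subtest("Verify my-service"):
    assert machine.service("my-service").is_running, "my-service should be running"
```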

### Running tests locally

Run the system-manager check locally with:

```bash
nix build .#checks.x86_64-linux.check-system-manager -L
```

The `-L` flag streams build logs for visibility.
These checks only run on Linux (gated by `lib.optionalAttrs pkgs.stdenv.hostPlatform.isLinux`).
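
The gating could look like the following sketch in `nix/systemModules/tests/default.nix` (the exact attribute wiring is an assumption):

```nix
# Only expose the container test on Linux hosts;
# on other platforms the checks attrset is simply empty.
checks = lib.optionalAttrs pkgs.stdenv.hostPlatform.isLinux {
  inherit check-system-manager;
};
```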

## CI integration

The `check-system-manager` derivation is part of the flake's `checks` output, so it runs automatically in the `nix-build-checks-*` jobs of the main `nix-build.yml` workflow alongside all other checks.

## Runtime effects

After `system-manager switch` runs, managed software is available under `/run/system-manager/sw/`.
This affects paths throughout the system.
For example, the audit baseline `audit-specs/baselines/ami-build/user.yml` references these paths for user shells:

```yaml
root:
  exists: true
  home: /root
  shell: /run/system-manager/sw/bin/bash
nobody:
  exists: true
  shell: /run/system-manager/sw/bin/nologin
```

When adding new services or modifying the system-manager configuration, update the audit baselines to reflect any changes to user shells, service users, or file paths that `supascan` validates during AMI builds.
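
For instance, a module that adds a dedicated service user would need a matching baseline entry — a hypothetical sketch following the `user.yml` shape above (the user name and fields are illustrative):

```yaml
my-service:
  exists: true
  shell: /run/system-manager/sw/bin/nologin
```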