Add a11ywatch and related configurations for Podman and Nginx
- Introduced a new module for a11ywatch with Podman support, creating a bridge network and defining backend and frontend containers.
- Configured Nginx to serve the a11ywatch application with SSL and ACME support.
- Added user and group configurations for a11ywatch.
- Created a systemd service to ensure the Podman network exists on boot.

Implement Firefox Container Controller extension and host
- Added a module for the Firefox Container Controller extension, allowing installation via Nix.
- Created a native messaging host for the extension to communicate with the container controller.
- Included CLI helpers to enqueue commands for showing and hiding containers.

Enable fingerprint authentication in PAM
- Configured fingerprint authentication for the login, sudo, and swaylock services.

Set up Raspberry Pi OS image creation script
- Developed a script to create a read-only Raspberry Pi OS Lite image with the Snapcast client.
- Included configuration for Wi-Fi, hostname, and the Snapcast server.
- Implemented user and group setup for the Snapcast client and ensured necessary services are enabled.

Document Raspberry Pi Zero W setup instructions
- Added detailed instructions for configuring Raspberry Pi OS on the Zero W, including disabling unused services and setting up the Snapcast client.

Create test configuration script for NixOS
- Implemented a script to perform dry-builds for NixOS configurations, allowing easy validation of host configurations.
.roo/rules/rules.md  (Normal file, 83 lines added)
@@ -0,0 +1,83 @@
# RULES.md

## Overview

This repository manages NixOS configurations for multiple systems, structured to promote modularity, security, and maintainability.

### Directory Structure

Each host has its own directory under `hosts/`, containing:

```
hosts/
└── hostname/
    ├── configuration.nix
    ├── modules/
    └── secrets.yaml
```

* `configuration.nix`: Main configuration file for the host.
* `modules/`: Custom NixOS modules specific to the host.
* `secrets.yaml`: Encrypted secrets file (see [Secrets Management](#secrets-management)).

## Configuration Management

### Modularization

* Break down configurations into reusable modules placed in the `modules/` directory.
* Use the `imports` directive in `configuration.nix` to include the necessary modules (see the sketch below).
* Avoid monolithic configurations; modularity enhances clarity and reusability.
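
A minimal sketch of how a host's `configuration.nix` can pull in such modules via `imports`; the module file names below are illustrative placeholders, not files from this repository:

```nix
# hosts/hostname/configuration.nix (illustrative sketch only)
{ config, pkgs, ... }:

{
  imports = [
    ./hardware-configuration.nix
    # Hypothetical host-specific modules; real names differ per host.
    ./modules/webserver.nix
    ./modules/users.nix
  ];

  networking.hostName = "hostname";
}
```

Each imported module is itself a function of the usual module arguments, so host-specific settings stay in the host directory while shared logic can live in reusable modules.
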
### Version Control

* Track all configuration files using Git.
* Exclude sensitive files like `secrets.yaml` from version control.
* Use descriptive commit messages to document changes.

## Deployment with Bento

Bento is utilized for deploying configurations across systems.

* Centralize configurations on a management server.
* Ensure each host accesses only its specific configuration files.
* Leverage Bento's features to manage deployments efficiently. ([NixOS Discourse][1], [Reddit][2], [cbiit.github.io][3])

## Security Best Practices

### Secrets Management

* Never store plain-text secrets in the Nix store or configuration files.
* Use tools like [sops-nix](https://github.com/Mic92/sops-nix) to encrypt `secrets.yaml` (see the sketch below).
* Restrict access to decrypted secrets using appropriate file permissions. ([Reddit][4], [dade][5])
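
A minimal sops-nix sketch, assuming the host already imports the sops-nix NixOS module (comparable to the `./utils/modules/sops.nix` import used elsewhere in this repository); the secret name `example-secret` and its owner are hypothetical:

```nix
{ config, ... }:

{
  # Encrypted file kept next to the host configuration; it is never decrypted into the Nix store.
  sops.defaultSopsFile = ./secrets.yaml;

  # Declare a secret; sops-nix decrypts it at activation time under /run/secrets.
  sops.secrets.example-secret = {
    owner = "nginx";  # restrict read access to the consuming service
    mode = "0400";
  };

  # Consumers reference the decrypted path, e.g. config.sops.secrets.example-secret.path,
  # instead of embedding the value in the configuration.
}
```
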
### System Hardening

* Disable unnecessary services to minimize attack surfaces.
* Configure firewalls to allow only essential traffic (see the sketch below).
* Regularly update systems to apply security patches.
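
For illustration, a hedged NixOS sketch of these hardening points; the port list and the disabled service are examples, not this repository's actual policy:

```nix
{
  # Allow only the traffic the host actually needs.
  networking.firewall = {
    enable = true;
    allowedTCPPorts = [ 22 80 443 ];  # example: SSH plus HTTP/HTTPS
  };

  # Drop services that are not required on this host (example).
  services.avahi.enable = false;

  # Pull in security patches automatically; scheduling is a per-host decision.
  system.autoUpgrade.enable = true;
}
```
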
### User Management

* Implement the principle of least privilege for user accounts.
* Use SSH keys for authentication; disable password-based logins (see the sketch below).
* Monitor user activities and access logs for suspicious behavior.
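
A sketch of the corresponding NixOS options, assuming OpenSSH is the remote login path; the user name and key are placeholders:

```nix
{
  services.openssh = {
    enable = true;
    settings = {
      PasswordAuthentication = false;  # keys only
      PermitRootLogin = "no";
    };
  };

  users.users.deploy = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];  # grant sudo only where it is actually needed
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAA... placeholder-key"
    ];
  };
}
```
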
## Maintenance Guidelines

* Regularly review and refactor modules for efficiency and clarity.
* Document all modules and configurations for future reference.
* Test configurations in a controlled environment before deploying to production systems. ([NixOS & Flakes][6])

---

Adhering to these guidelines will help maintain a secure, organized, and efficient NixOS configuration across multiple systems.

[1]: https://discourse.nixos.org/t/introducing-bento-a-nixos-deployment-framework/21446?utm_source=chatgpt.com "Introducing bento, a NixOS deployment framework"
[2]: https://www.reddit.com/r/NixOS/comments/1e95b69/how_do_you_guys_organize_your_nix_config_files_i/?utm_source=chatgpt.com "How do you guys organize your .nix config files? I have a ... - Reddit"
[3]: https://cbiit.github.io/bento-docs/master/installation/bento-quick-start.html?utm_source=chatgpt.com "1. Quick Start Tutorial — Bento release-4.1.0 documentation"
[4]: https://www.reddit.com/r/NixOS/comments/1cnhx6z/best_security_practices_for_nixos_devices_exposed/?utm_source=chatgpt.com "Best Security practices for NixOS devices exposed to the Internet"
[5]: https://0xda.de/blog/2024/07/framework-and-nixos-sops-nix-secrets-management/?utm_source=chatgpt.com "Framework and NixOS - Sops-nix Secrets Management - dade"
[6]: https://nixos-and-flakes.thiscute.world/nixos-with-flakes/modularize-the-configuration?utm_source=chatgpt.com "Modularize Your NixOS Configuration | NixOS & Flakes Book"
hosts/fw/modules/allywatch.nix  (Normal file, 94 lines added)
@@ -0,0 +1,94 @@
{ config, pkgs, lib, ... }:

let
  domain = "a11ywatch.cloonar.com";
  confDir = "/var/lib/a11ywatch";

  json = pkgs.formats.json { };
in {
  # 1) Enable Podman (daemonless, drop-in for docker)
  virtualisation.podman.enable = true;
  virtualisation.podman.dockerCompat = true;
  virtualisation.podman.defaultNetwork.settings.dns_enabled = true;

  services.nginx.virtualHosts."${domain}" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      proxyPass = "http://localhost:3000/";
    };
  };

  environment.etc."containers/networks/a11ywatch-net.json" = {
    source = json.generate "a11ywatch-net.json" ({
      name = "a11ywatch-net";
      id = "ccb4b7fb90d2df26db27ef0995765b04f52d318db752c9474b470c5ef4d7978d";
      driver = "bridge";
      network_interface = "podman1";
      subnets = [
        {
          subnet = "10.89.0.0/24";
          gateway = "10.89.0.1";
        }
      ];
      ipv6_enabled = false;
      internal = false;
      dns_enabled = true;
      ipam_options = {
        driver = "host-local";
      };
    });
  };

  users.users.a11ywatch = {
    isSystemUser = true;
    group = "a11ywatch";
    home = "/var/lib/a11ywatch";
    createHome = true;
  };
  users.groups.a11ywatch = { };
  users.groups.docker.members = [ "a11ywatch" ];

  # 2) Create the bridge network on boot via a oneshot systemd service
  systemd.services.a11ywatch-net = {
    description = "Ensure a11ywatch-net Podman network exists";
    wants = [ "podman.service" ];
    after = [ "podman.service" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = ''
        ${pkgs.podman}/bin/podman network inspect a11ywatch-net >/dev/null 2>&1 \
          || ${pkgs.podman}/bin/podman network create a11ywatch-net
      '';
      RemainAfterExit = true;
    };
    wantedBy = [
      "multi-user.target"
    ];
  };

  # 3) Declare your two containers using the podman backend
  virtualisation.oci-containers = {
    backend = "podman";
    containers = {
      a11ywatch-backend = {
        image = "docker.io/a11ywatch/a11ywatch:latest";
        autoStart = true;
        ports = [ "3280:3280" ];
        volumes = [ "${confDir}:/a11ywatch/conf" ];
        environment = { SUPER_MODE = "true"; };
        extraOptions = [ "--network=a11ywatch-net" ];
      };
      a11ywatch-frontend = {
        image = "docker.io/a11ywatch/web:latest";
        autoStart = true;
        ports = [ "3000:3000" ];
        volumes = [ "${confDir}:/a11ywatch/conf" ];
        environment = { SUPER_MODE = "true"; };
        extraOptions = [
          "--network=a11ywatch-net"
        ];
      };
    };
  };
}
@@ -95,8 +95,11 @@
"/home-assistant.cloonar.com/${config.networkPrefix}.97.20"
"/mopidy.cloonar.com/${config.networkPrefix}.97.21"
"/snapcast.cloonar.com/${config.networkPrefix}.97.21"
"/lms.cloonar.com/${config.networkPrefix}.97.21"
"/git.cloonar.com/${config.networkPrefix}.97.50"
"/feeds.cloonar.com/188.34.191.144"
"/nukibridge1a753f72.cloonar.smart/${config.networkPrefix}.100.112"
"/allywatch.cloonar.com/${config.networkPrefix}.97.5"

"/stage.wsw.at/10.254.235.22"
"/prod.wsw.at/10.254.217.23"
@@ -147,6 +150,8 @@

"/bath-bulb-0.cloonar.smart/${config.networkPrefix}.100.41"
"/bath-bulb-0.cloonar.smart/${config.networkPrefix}.100.42"

"/paraclub.at/188.34.191.144"
];
};
};
@@ -1,5 +1,8 @@
{
services.home-assistant.extraComponents = [ "squeezebox" ];
services.home-assistant.extraComponents = [
"squeezebox"
"slimproto"
];
services.home-assistant.config = {
"automation toilet music" = {
alias = "toilet music";
@@ -11,7 +14,7 @@
{
service = "media_player.volume_mute";
target = {
entity_id = "media_player.music_toilet_snapcast_client";
entity_id = "media_player.music_toilet";
};
data = {
is_volume_muted = "{{ trigger.to_state.state != 'on' }}";
@@ -29,7 +32,7 @@
{
service = "media_player.volume_mute";
target = {
entity_id = "media_player.music_bathroom_snapcast_client";
entity_id = "media_player.music_bathroom";
};
data = {
is_volume_muted = "{{ trigger.to_state.state != 'on' }}";
@@ -41,7 +44,7 @@
alias = "piano";
trigger = {
platform = "state";
entity_id = "media_player.music_piano_snapcast_client";
entity_id = "media_player.music_piano";
attribute = "is_volume_muted";
};
condition = [
@@ -26,7 +26,11 @@ in
{
services.mopidy = {
enable = true;
extensionPackages = [ pkgs.mopidy-iris pkgs.mopidy-tunein mopidy-autoplay ];
extensionPackages = [
pkgs.mopidy-iris
pkgs.mopidy-tunein
mopidy-autoplay
];
configuration = ''
[audio]
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/run/snapserver/mopidy
@@ -1,28 +1,18 @@
{ pkgs, config, python3Packages, ... }:
{ pkgs, config, lib, python3Packages, ... }:
let
domain = "snapcast.cloonar.com";
mopidyDomain = "mopidy.cloonar.com";
networkPrefix = config.networkPrefix;

snapweb = pkgs.stdenv.mkDerivation {
pname = "snapweb";
version = "0.8";

src = pkgs.fetchzip {
url = "https://github.com/badaix/snapweb/releases/download/v0.8.0/snapweb.zip";
sha256 = "sha256-IpT1pcuzcM8kqWJUX3xxpRQHlfPNsrwhemLmY0PyzjI=";
stripRoot = false;
};

installPhase = ''
mkdir -p $out
cp -r $src/* $out/
'';
};
in
{
security.acme.certs."${domain}" = {
group = "nginx";
};
security.acme.certs."${mopidyDomain}" = {
group = "nginx";
};

sops.secrets.mopidy-spotify = { };

containers.snapcast = {
autoStart = true;
@@ -39,6 +29,13 @@ in
hostPath = "${config.security.acme.certs.${domain}.directory}";
isReadOnly = true;
};
"/var/lib/acme/mopidy/" = {
hostPath = "${config.security.acme.certs.${mopidyDomain}.directory}";
isReadOnly = true;
};
"/run/secrets/mopidy-spotify" = {
hostPath = "${config.sops.secrets.mopidy-spotify.path}";
};
};
config = { lib, config, pkgs, python3Packages, ... }:
let
@@ -51,18 +48,59 @@ in
"--with-metadata"
];
});
snapweb = pkgs.stdenv.mkDerivation {
pname = "snapweb";
version = "0.8";

src = pkgs.fetchzip {
url = "https://github.com/badaix/snapweb/releases/download/v0.8.0/snapweb.zip";
sha256 = "sha256-IpT1pcuzcM8kqWJUX3xxpRQHlfPNsrwhemLmY0PyzjI=";
stripRoot = false;
};

installPhase = ''
mkdir -p $out
cp -r $src/* $out/
'';
};

mopidy-autoplay = pkgs.python3Packages.buildPythonApplication rec {
pname = "Mopidy-Autoplay";
version = "0.2.3";

src = pkgs.python3Packages.fetchPypi {
inherit pname version;
sha256 = "sha256-E2Q+Cn2LWSbfoT/gFzUfChwl67Mv17uKmX2woFz/3YM=";
};

propagatedBuildInputs = [
pkgs.mopidy
] ++ (with pkgs.python3Packages; [
configobj
]);

# no tests implemented
doCheck = false;

meta = with lib; {
homepage = "https://codeberg.org/sph/mopidy-autoplay";
};
};
in
{
networking = {
hostName = "snapcast";
useHostResolvConf = false;
defaultGateway = {
address = "${networkPrefix}.96.1";
address = "${networkPrefix}.97.1";
interface = "eth0";
};
nameservers = [ "${networkPrefix}.97.1" ];
firewall.enable = false;
};
environment.systemPackages = with pkgs; [
# shanocast
];
environment.etc = {
# Creates /etc/nanorc
shairport = {
@@ -83,36 +121,79 @@ in
};
};

systemd.tmpfiles.rules = [
"p /run/snapserver/mopidyfifo 0660 mopidy snapserver -"
];

services.mopidy = {
enable = true;
extensionPackages = [
pkgs.mopidy-iris
pkgs.mopidy-tunein
pkgs.mopidy-spotify
mopidy-autoplay
];
configuration = ''
[audio]
output = audioresample ! audioconvert ! audio/x-raw,rate=48000,channels=2,format=S16LE ! wavenc ! filesink location=/run/snapserver/mopidyfifo

[file]
enabled = false

[autoplay]
enabled = true
'';
extraConfigFiles = [
"/run/secrets/mopidy-spotify"
];
};

services.snapserver = {
enable = true;
codec = "flac";
http.enable = true;
http.docRoot = "${snapweb}/";
streams.mopidy = {
type = "pipe";
location = "/run/snapserver/mopidy";
};
buffer = 1000;
streamBuffer = 1000;
streams.airplay = {
type = "airplay";
location = "${shairport-sync}/bin/shairport-sync";
query = {
devicename = "Multi Room New";
devicename = "Multi Room";
port = "5000";
params = "--mdns=avahi";
};
sampleFormat = "44100:16:2";
codec = "pcm";
};
streams.mopidy = {
type = "pipe";
location = "/run/snapserver/mopidyfifo";
};
streams.mixed = {
type = "meta";
location = "/airplay/mopidy";
location = "meta:///airplay/mopidy?name=Mixed&sampleformat=44100:16:2";
codec = "opus";
};
};

# run after tmpfiles-setup
systemd.services.snapserver = {
after = [ "systemd-tmpfiles-setup.service" ];
requires = [ "systemd-tmpfiles-setup.service" ];
};
systemd.services.mopidy = {
after = [ "systemd-tmpfiles-setup.service" ];
requires = [ "systemd-tmpfiles-setup.service" ];
};

services.avahi.enable = true;
services.avahi.publish.enable = true;
services.avahi.publish.userServices = true;

services.nginx.enable = true;
services.nginx.virtualHosts."snapcast.cloonar.com" = {
services.nginx.virtualHosts."${domain}" = {
sslCertificate = "/var/lib/acme/snapcast/fullchain.pem";
sslCertificateKey = "/var/lib/acme/snapcast/key.pem";
sslTrustedCertificate = "/var/lib/acme/snapcast/chain.pem";
@@ -131,6 +212,26 @@ in
'';
};

services.nginx.virtualHosts."${mopidyDomain}" = {
sslCertificate = "/var/lib/acme/mopidy/fullchain.pem";
sslCertificateKey = "/var/lib/acme/mopidy/key.pem";
sslTrustedCertificate = "/var/lib/acme/mopidy/chain.pem";
forceSSL = true;
extraConfig = ''
proxy_buffering off;
'';
locations."/".extraConfig = ''
proxy_pass http://127.0.0.1:6680;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
'';
};

system.stateVersion = "23.05";
};
};
hosts/fw/modules/web/allywatch.nix  (Normal file, 94 lines added)
@@ -0,0 +1,94 @@
{ config, pkgs, lib, ... }:

let
  domain = "a11ywatch.cloonar.com";
  confDir = "/var/lib/a11ywatch";

  json = pkgs.formats.json { };
in {
  # 1) Enable Podman (daemonless, drop-in for docker)
  virtualisation.podman.enable = true;
  virtualisation.podman.dockerCompat = true;
  virtualisation.podman.defaultNetwork.settings.dns_enabled = true;

  services.nginx.virtualHosts."${domain}" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      proxyPass = "http://localhost:3000/";
    };
  };

  environment.etc."containers/networks/a11ywatch-net.json" = {
    source = json.generate "a11ywatch-net.json" ({
      name = "a11ywatch-net";
      id = "ccb4b7fb90d2df26db27ef0995765b04f52d318db752c9474b470c5ef4d7978d";
      driver = "bridge";
      network_interface = "podman1";
      subnets = [
        {
          subnet = "10.89.0.0/24";
          gateway = "10.89.0.1";
        }
      ];
      ipv6_enabled = false;
      internal = false;
      dns_enabled = true;
      ipam_options = {
        driver = "host-local";
      };
    });
  };

  users.users.a11ywatch = {
    isSystemUser = true;
    group = "a11ywatch";
    home = "/var/lib/a11ywatch";
    createHome = true;
  };
  users.groups.a11ywatch = { };
  users.groups.docker.members = [ "a11ywatch" ];

  # 2) Create the bridge network on boot via a oneshot systemd service
  systemd.services.a11ywatch-net = {
    description = "Ensure a11ywatch-net Podman network exists";
    wants = [ "podman.service" ];
    after = [ "podman.service" ];
    serviceConfig = {
      Type = "oneshot";
      ExecStart = ''
        ${pkgs.podman}/bin/podman network inspect a11ywatch-net >/dev/null 2>&1 \
          || ${pkgs.podman}/bin/podman network create a11ywatch-net
      '';
      RemainAfterExit = true;
    };
    wantedBy = [
      "multi-user.target"
    ];
  };

  # 3) Declare your two containers using the podman backend
  virtualisation.oci-containers = {
    backend = "podman";
    containers = {
      a11ywatch-backend = {
        image = "docker.io/a11ywatch/a11ywatch:latest";
        autoStart = true;
        ports = [ "3280:3280" ];
        volumes = [ "${confDir}:/a11ywatch/conf" ];
        environment = { SUPER_MODE = "true"; };
        extraOptions = [ "--network=a11ywatch-net" ];
      };
      a11ywatch-frontend = {
        image = "docker.io/a11ywatch/web:latest";
        autoStart = true;
        ports = [ "3000:3000" ];
        volumes = [ "${confDir}:/a11ywatch/conf" ];
        environment = { SUPER_MODE = "true"; };
        extraOptions = [
          "--network=a11ywatch-net"
        ];
      };
    };
  };
}
@@ -58,7 +58,6 @@ in {
./zammad.nix
./proxies.nix
./matrix.nix
./tinder-api.nix
];

networkPrefix = config.networkPrefix;
@@ -25,6 +25,10 @@
publicKey = "HE4eX4IMKG8eRDzcriy6XdIPV71uBY5VTqjKzfHPsFI=";
allowedIPs = [ "${config.networkPrefix}.98.203/32" ];
}
{
publicKey = "yv0AWQl4LFebVa7SvwdxpEmB3PPglwjoKy6A3og93WI=";
allowedIPs = [ "${config.networkPrefix}.98.204/32" ];
}
];
};
wg_epicenter = {
@@ -18,6 +18,8 @@ palworld: ENC[AES256_GCM,data:rdqChPt4gSJHS1D60+HJ+4m5mg35JbC+pOmevK21Y95QyAIeyB
ark: ENC[AES256_GCM,data:YYGyzoVIKI9Ac1zGOr0BEpd3fgBsvp1hSwAvfO07/EQdg8ufMWUkNvqNHDKN62ZK5A1NnY3JTA1p4gyZ4ryQeAOsbwqU1GSk2YKHFyPeEnpLz/Ml82KMsv7XPGXuKRXZ4v3UcLu0R8k1Q0gQsMWo4FjCs3FF5mVtJG/YWxxbCYHoBLJ/di5p0DgjuFgJBQknYBpuLzr+yIoeqEyN7XcGYAJO53trEJuOOxLILULifkqISHjZ66i5F1fHW0iUdRbmeWV4aOAeOrsQqXYv,iv:gJwV5ip84zHqpU0l0uESfWWOtcgihMvEEdLaeI+twcU=,tag:sy8udVQsKxV/jOqwhJmWAg==,type:str]
firefox-sync: ENC[AES256_GCM,data:uAJAdyKAuXRuqCFl8742vIejU5RnAPpUxUFCC0s0QeXZR5oH2YOrDh+3vKUmckW4V1cIhSHoe+4+I4HuU5E73DDrJThfIzBEw+spo4HXwZf5KBtu3ujgX6/fSTlPWV7pEsDDsZ0y6ziKPADBDym8yEk0bU9nRedvTBUhVryo3aolzF/c+gJvdeDvKUYa8+8=,iv:yuvE4KG7z7Rp9ZNlLiJ2rh0keed3DuvrELzsfJu4+bs=,tag:HFo1A53Eva31NJ8fRE7TlA==,type:str]
knot-tsig-key: ENC[AES256_GCM,data:H2jEkRSVSIJl1dSolAXj9uUmzD6eEh9zPpoajZLxfuuFt7/LJF8aCEHyk+Q=,iv:9aqywuaILYtejuZGd+Cy8oErrHIoL2XhL1g9HtcUn/o=,tag:K3SnVEXGC/NhlchU7OyA6Q==,type:str]
mopidy-spotify: ENC[AES256_GCM,data:O3s6UvTP8z5KZPCq10GaaEQntWAEoxGFMnTkeUz9AfobrpsGZJcQgyazFX2u4DgAaIjNb34032MISotmuVQDJ14mi8xI5vC9w/Vf16v3TFu/dSKGZNb5ZPQwTUQ+iMJf7chgwOV9guThhutVJokb6pLxzt7fSht7,iv:j8+X1AmuWzIJdafzgrE7WBIlZ7coNNi0/Zn6JObR6rw=,tag:fiw6M2/6nfEPqEgV2YOWLg==,type:str]
lms-spotify: ENC[AES256_GCM,data:gh5kx/MDSefNLbZsnovRc3rNWxp/RTrJ4A2WIs1QMi4JVGFj9SppdsErMXW4y/IFj/YxH1X7JtwvhptO/p3P2CFK0XL2I1vFVqPuj7LavDHJK7GXPAV6+x17ldvPXgym5NqHjzHi4gtj7U/bMJlz0NxrFsrrjMcY9nmNX2vVwKlINUFqWb1JRvQsJ8ujSutjJbGtAY/bVQI8OFtU29QGKw1CU3RH/bgXIzxGiLQsUd68w7N17oKYj8MiTpGVcovMCRKwwUbd9w==,iv:4aVy+r//s1Cs9q4GasR3vSAb8b/VB/8Mx5E1jWAUA+E=,tag:TgTSLLH1OG9ySi2tZ+hK1Q==,type:str]
sops:
kms: []
gcp_kms: []
@@ -60,8 +62,8 @@ sops:
WXJpUUxadERyYUExRFMzNzBXaUVET3cKG9ZwWy5YvTr/BAw/i+ZJos5trwRvaW5j
eV/SHiEteZZtCuCVFAp3iolE/mJyu97nA2yFwWaLN86h+/xkOJsdqA==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2025-05-01T20:36:09Z"
mac: ENC[AES256_GCM,data:ZtXJcuwDpDlBl2xdRtMF1PwwqbW00Eps2ZZG5x4C2djAq+meXJCxKS9sNazQhMYFOqphQXe3JEhChykLxnJyWivY/Er1ig2sU6Ke1uVcfSP85B1/rpzhe/7QI+GBDWrkCk1O0xGKKj8fWt+Yv2MV8gw2XctdtJ9Md4imUhcK7zo=,iv:5NFH+7Z0alBiq/b94T40XJSCar2+BGaFB20z0Kc59fU=,tag:18n0tt17RNMyyE0eECH2kQ==,type:str]
lastmodified: "2025-05-23T19:49:55Z"
mac: ENC[AES256_GCM,data:+03w76nM2yhAipvIYgbrdxDT9EiRzqhWuOtngiJprp+zRYNf8uRJaMNSfVNkmIQ/PUikQpDLoz98zKJNGFsdT6O6JC1mq2/e5MaFMhk7GiV/T93YEhGpU8/CSzKtI+uIQaCO7jCfPFOtsiOBcsscqfAYlWlyCecKrg9zMPmNOaE=,iv:nmy5ATUrGXLZpSvZCSyDnoxHtRyNmXiEqbw62anH7LI=,tag:5nr6/cCFlrwqH9kNGp25og==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.9.4
@@ -24,12 +24,15 @@ in {
./utils/modules/sops.nix
./utils/modules/nur.nix
./modules/appimage.nix
./modules/desktop
./modules/sway/sway.nix
# ./modules/printer.nix
# ./modules/cyberghost.nix
./utils/modules/autoupgrade.nix
./modules/puppeteer.nix

# ./modules/development

./cachix.nix
./users
@@ -38,6 +41,7 @@ in {
./modules/coding.nix

# ./modules/steam.nix
./modules/fingerprint.nix

./hardware-configuration.nix
@@ -57,6 +61,7 @@ in {
open-sans
nix-prefetch
jq
mkcert
oh-my-zsh
zsh-autosuggestions
zsh-completions
hosts/nb/modules/desktop/default.nix  (Normal file, 4 lines added)
@@ -0,0 +1,4 @@
{ pkgs, ... }: {
  imports = [
  ];
}
@@ -0,0 +1,67 @@
# firefox-container-controller-extension.nix
# Import this file in your configuration.nix to build and install the Container Controller extension.
# Usage in configuration.nix:
#
# let
#   containerControllerXpi = import ./firefox-container-controller-extension.nix { inherit pkgs; };
# in {
#   programs.firefox = {
#     enable = true;
#     profiles.default = {
#       extensions = [ containerControllerXpi ];
#     };
#   };
# }

{ pkgs }:

pkgs.runCommand "firefox-containercontroller-xpi" {
  nativeBuildInputs = [ pkgs.zip ];
} ''
# Create temp dir for packaging
TMPDIR=$(mktemp -d)
cd "$TMPDIR"

# Write manifest.json without leading spaces
cat > manifest.json << 'EOF'
{
"manifest_version": 2,
"name": "Container Controller",
"version": "1.0",
"applications": { "gecko": { "id": "containercontroller@cloonar.com" } },
"permissions": ["containers", "nativeMessaging"],
"background": { "scripts": ["background.js"] }
}
EOF

# Write background.js without indentation
cat > background.js << 'EOF'
async function poll() {
const resp = await browser.runtime.sendNativeMessage(
"com.firefox.containercontroller", {}
);
if (resp.userContextId && resp.action) {
try {
if (resp.action === "hide") {
await browser.containers.hideContainer({ userContextId: resp.userContextId });
} else if (resp.action === "show") {
await browser.containers.showContainer({ userContextId: resp.userContextId });
}
} catch (e) {}
}
}

// Poll every second
setInterval(poll, 1000);
EOF

# Ensure the Firefox extensions directory exists in the output
mkdir -p "$out/share/firefox/extensions"

# Create ZIP archive at root of package
# and use the updated extension id for the filename
zip -r "$out/share/firefox/extensions/containercontroller@cloonar.com.xpi" manifest.json background.js

# Clean up
rm -rf "$TMPDIR"
''
@@ -0,0 +1,59 @@
{ pkgs, lib, ... }:

let
  # 1) Native-messaging host: reads and clears the queued JSON command
  containerControllerHost = pkgs.writeScriptBin "firefox-containercontroller-host" ''
    #!/usr/bin/env bash
    CMD_FILE="$HOME/.cache/firefox-container-command.json"
    if [ -f "$CMD_FILE" ]; then
      cat "$CMD_FILE"
      rm "$CMD_FILE"
    else
      echo '{}'
    fi
  '';

  # 2) CLI helper to enqueue a "hide" command
  hideContainer = pkgs.writeScriptBin "hide-container" ''
    #!/usr/bin/env bash
    if [ -z "$1" ]; then
      echo "Usage: $0 <userContextId>" >&2
      exit 1
    fi
    ID="$1"
    mkdir -p "$HOME/.cache"
    printf '{"userContextId": %s, "action": "hide"}' "$ID" \
      > "$HOME/.cache/firefox-container-command.json"
  '';

  # 3) CLI helper to enqueue a "show" command
  showContainer = pkgs.writeScriptBin "show-container" ''
    #!/usr/bin/env bash
    if [ -z "$1" ]; then
      echo "Usage: $0 <userContextId>" >&2
      exit 1
    fi
    ID="$1"
    mkdir -p "$HOME/.cache"
    printf '{"userContextId": %s, "action": "show"}' "$ID" \
      > "$HOME/.cache/firefox-container-command.json"
  '';
in
{
  # Install host + helpers
  environment.systemPackages = [
    containerControllerHost
    hideContainer
    showContainer
  ];

  # Register the native-messaging host for our extension
  environment.etc."mozilla/native-messaging-hosts/com.firefox.containercontroller.json".text =
    builtins.toJSON {
      name = "com.firefox.containercontroller";
      description = "Native messaging host for Container Controller";
      path = containerControllerHost;
      type = "stdio";
      allowed_extensions = [ "containercontroller@cloonar.com" ];
    };
}
@@ -1,11 +1,13 @@
{ config, pkgs, lib, ... }:
let
pkgs = import (builtins.fetchTarball "https://github.com/NixOS/nixpkgs/archive/refs/heads/nixos-unstable.tar.gz") { };
mcp-servers = import (builtins.fetchTarball "https://github.com/natsukium/mcp-servers-nix/archive/refs/heads/main.tar.gz") { inherit pkgs; };
in {
nixpkgs.overlays = [
(import (builtins.fetchTarball "https://github.com/natsukium/mcp-servers-nix/archive/main.tar.gz")).overlays.default
];
environment.systemPackages = with pkgs; [
mcp-server-fetch
];
mcp-servers.lib.mkConfig pkgs {
programs = {
fetch.enable = true;
memory.enable = true;
};
};
}
hosts/nb/modules/fingerprint.nix  (Normal file, 9 lines added)
@@ -0,0 +1,9 @@
{ config, pkgs, ... }:

{
  services.fprintd.enable = true;

  security.pam.services.login.fprintAuth = true;
  security.pam.services.sudo.fprintAuth = true;
  security.pam.services.swaylock.fprintAuth = true;
}
@@ -5,7 +5,10 @@ let
name = "social";
desktopName = "Firefox browser with social profile";
exec = "firefox -P social";
# exec= "firefox -P social --marionette --remote-debugging-port 2828 --no-remote";
};
in {
environment.systemPackages = [ socialDesktopItem ];
environment.systemPackages = with pkgs; [
socialDesktopItem
];
}
@@ -6,6 +6,7 @@
./modules/mysql.nix
./modules/postfix.nix
./utils/modules/nginx.nix
./modules/bitwarden
./modules/authelia
@@ -25,6 +26,8 @@

./hardware-configuration.nix

# ./modules/a11ywatch.nix

./modules/web/typo3.nix
./modules/web/stack.nix
@@ -38,6 +41,10 @@
./sites/stage.cloonar-technologies.at.nix

./sites/cloonar.dev.nix
./sites/paraclub.at.nix
./sites/api.paraclub.at.nix
./sites/module.paraclub.at.nix
./sites/tandem.paraclub.at.nix
./sites/paraclub.cloonar.dev.nix
./sites/api.paraclub.cloonar.dev.nix
./sites/tandem.paraclub.cloonar.dev.nix
hosts/web-arm/modules/a11ywatch.nix  (Normal file, 24 lines added)
@@ -0,0 +1,24 @@
{ config, ... }:
{
  #Collabora Containers
  virtualisation.oci-containers.containers.pally = {
    image = "docker.io/croox/pa11y-dashboard:latest";
    ports = [ "4000:4000/tcp" ];
    extraOptions = [
      "--pull=newer"
    ];
  };

  services.nginx.virtualHosts."allywatch.cloonar.com" = {
    enableACME = true;
    forceSSL = true;

    extraConfig = ''
      # static files
      location ^~ / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
      }
    '';
  };
}
hosts/web-arm/modules/pa11y.nix  (Normal file, 24 lines added)
@@ -0,0 +1,24 @@
{ config, ... }:
{
  #Collabora Containers
  virtualisation.oci-containers.containers.pally = {
    image = "docker.io/croox/pa11y-dashboard:latest";
    ports = [ "4000:4000/tcp" ];
    extraOptions = [
      "--pull=newer"
    ];
  };

  services.nginx.virtualHosts."allywatch.cloonar.com" = {
    enableACME = true;
    forceSSL = true;

    extraConfig = ''
      # static files
      location ^~ / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
      }
    '';
  };
}
@@ -4,7 +4,7 @@
enableDefaultLocations = false;
enableMysql = true;
authorizedKeys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmLPJoHwL+d7dnc3aFLbRCDshxRSQ0dtAVv/LYBn2/PBlZcIyVO9drjr702GL9QuS5DQyjtoZjSOvv1ykBKedUwY3XDyyZgtqjleojKIFMXkdXtD5iG+RUraUfzcFCZU12BYXSeAXK1HmIjSDUtDOlp6lVVWxNpz1vWSRtA/+PULhP+n5Cj7232Wf372+EPfQPntOlcMbyrDLFtj7cUz+E6BH0qdX0l3QtIVnK/C1iagPAwLcwPJd9Sfs8lj5C4g8T9uBJa6OX+87lE4ySYY+Cik9BN59S0ctjXvWCFsPO3udQSC1mf33XdDenc2mbi+lZWTfrN8S2K5CsbxRsVBlbapFBRwufEpN4iQnaTu1QmzDrmktBFAPJ2jvjBJPIx6W3KOy3kUwh9WNhzd/ubf9dFTHzkTzgluo/Zk6/S8fTJiA4rbYKSkLw9Y265bvtR1kfUBLKSa/Axe5dkKysX1RNKfTJEwbh2TfIS3apQPZZc5kIEWfeK/6kbQX7WJZFtTs="
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKcfDiAqwP8FnH0Nl/joMtaRWwiNXbDBYk0wq1gnC5G8"
];
extraConfig = ''
add_header X-Frame-Options "SAMEORIGIN";
@@ -31,4 +31,6 @@
phpPackage = pkgs.php82.withExtensions ({ enabled, all }:
enabled ++ [ all.imagick ]);
};

services.nginx.virtualHosts."api.paraclub.at".acmeRoot = lib.mkForce "/var/lib/acme/acme-challenge";
}
@@ -6,7 +6,8 @@ in {
services.nginx.virtualHosts."${domain}" = {
forceSSL = true;
enableACME = true;
acmeRoot = null;
# acmeRoot = null;
acmeRoot = "/var/lib/acme/acme-challenge";
root = "${dataDir}";

locations."/favicon.ico".extraConfig = ''
@@ -37,7 +38,7 @@ in {
#home = "/home/${domain}";
group = "nginx";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwgjsgjxOSX4ZeLuhSq+JumnEa1bKS3fwlA8LDuxvOWXs2Zn4Hwa04ZuM59jzqifwGGMJOFxErm8+5oH2QQFa0wgg8zEG+2U1AzjMNk5+mxrhnLPGAMlnqXmkGi0Jj2nFwKaEM9kcO5UUqRP71BFdGtP74wRcaVpT4TTPzCQl1HTdwzmAOT+3yQ364kyAHXTwQOAjiFcSAlNfZ5C2eeNC642bv6Dfi6mMWi55tdNV6HUn7y2cbq8wscDG7gla8bN3xivuO6POWqyCpHtLxDhppLYJ28ZwqpcynRAXDnVYlT3DmPw1bDs/eBlkjauGR/oM8phka3No3cREBYpSWK7mJeqIIWSV0Z4dvFLeWh6MM4AVhX3HOW7jcxf2tUmpzre6S10HjXj3lLES7oJO4uOYoJWxaGcqFiUc9BOxqLN9FqECXuzfC0apCr0OYm5T2NsSmzlkBPzCa2EqBBI0u5XGcDKgpBA4gD8kuD+8Cj5DxPzXP+IdX1jhHRVsI5nucTvM="
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJQLKQ5skQyBRYe8S5Sb72YLE9QFnrHesEWtcf+0D4c"
];
};
users.groups.${domain} = {};
@@ -8,6 +8,13 @@ in {
enableACME = true;
acmeRoot = null;
root = "${dataDir}";
serverAliases = [ "www.${domain}" ];

extraConfig = ''
if ($host != '${domain}') {
return 301 $scheme://${domain}$request_uri;
}
'';

locations."/favicon.ico".extraConfig = ''
log_not_found off;
@@ -37,7 +44,7 @@ in {
#home = "/home/${domain}";
group = "nginx";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbSqS0TrJnmihjuIwLY74jKmuErF5jarQeVEQbnl7k8DDfVXP6DKybK2wVRIrAMN2VQzgXWWyRj2wNZrvq1whZon6CrEDxDVN/VDGS99pazczbrypmycVnPsevtS3wrEhiQrwCplkPxoZGlSAPGtx3SOzql+iG7xrhJfuPDCgwIboKf8Tir170aflH7ZfXqUX+V5QMbOn+roT8Tj7vUd/za3o3okJQrW3NUHT6/0TDkGsn+lJp30e94GF5RDLUJgM8pBf45WM94dv1uEfRI7+AQJZRta3X2VNSbb8I2dPNLmgxYQaW1VtwGP/RfxoFESdQubN74p+VxNeP7z5AFiZfhEYb0yiAwXiavN7fStXX/MKXxMicS2fdGzieXLWpLol70xx19492kOnlzoiPKJRosNw8N60R+AkbPYdwl5z5uKDn1ve79YaWB3KWS5Pcr9IT1wZAc48UePL6QtcDppHe8tUflPP5h/LCKOmAioWG59YF5pKfYNLSXJzmiudzzrs="
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINkaoMNNBDqjvKrQg2YvXUBlJSZwvlKe3wS5cIDdR3pd"
];
};
users.groups.${domain} = {};
@@ -7,7 +7,9 @@ in {
services.nginx.virtualHosts."${domain}" = {
forceSSL = true;
enableACME = true;
acmeRoot = null;
# acmeRoot = null;
acmeRoot = "/var/lib/acme/acme-challenge";

root = "${dataDir}";

locations."/favicon.ico".extraConfig = ''
@@ -38,7 +40,7 @@ in {
#home = "/home/${domain}";
group = "nginx";
openssh.authorizedKeys.keys = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpezoJfaqSlQKhbzIRxQysmSmU5tih0SGFh4Eiy3YjfxiJSCRCuVTBCUmnhDCPsJZK+5xEDGarO8UfiqxZfxEyEL5d7IcRQJ/uRSFhYzByGbkziLM760KYqBzaE2Siu+zk625KOm6BN9qWGZdirejwf1Ay9EYmUdNiCMBBFLkPaQkZ8IEuMavf1wHEiZLas25eK7oJWHYKltcluH05QEF+5ODu88nlSpFlz2FjxJSbLDf7qeUba/L2OL124dTU5NIDNzwZLCKjpp8aTYzTaoox7KXUVRmy1X4Or61WhSxw9+LGyrAZLsW+l0a4FgY17V5HnF5/jf8eOpkuVdwtd29KCheJ4BdUfomV8vEt6S0hUP66VqJn6MliuL+10KM6TjLnjg0McPp1LPuSFRoLzO0YetTZzeVc0oBIr9Z3vjm6jt1dYcUtaydn/fc+FgoqpIOLz6EOGCz/CmyaV4rLk2BFKqtx5GP1wbP36hVkyWpREbEMILpFKDOyp21fC67mb0M="
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPTsI0YyToIokBIcSf6j70iI68pKd4fPkRpqByFkZLRB"
];
};
users.groups.${user} = {};
raspberry-new/config.txt  (Normal file, 4 lines added)
@@ -0,0 +1,4 @@
WIFI_SSID="Cloonar-Multimedia"
WIFI_PSK="K2MC28Zhk$4zsx6Y"
SNAPCAST_SERVER="snapcast.cloonar.com"
# You can add other configurations here if needed later
raspberry-new/create_rpi_image.sh  (Executable file, 416 lines added)
@@ -0,0 +1,416 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Script to create a read-only Raspberry Pi OS Lite image with Snapcast client.
|
||||
# Requires sudo privileges for many operations.
|
||||
# Ensure you have all dependencies from shell.nix (e.g., qemu-arm-static, parted, etc.)
|
||||
|
||||
set -euo pipefail # Exit on error, undefined variable, or pipe failure
|
||||
|
||||
# --- Configuration & Defaults ---
|
||||
RASPI_OS_LITE_URL="https://downloads.raspberrypi.com/raspios_lite_arm64/images/raspios_lite_arm64-2025-05-07/2025-05-06-raspios-bookworm-arm64-lite.img.xz"
|
||||
# Check for the latest image URL from https://www.raspberrypi.com/software/operating-systems/
|
||||
# The script assumes an .img.xz file. If it's .zip, adjust extraction.
|
||||
|
||||
WORK_DIR="rpi_build_temp"
|
||||
# QEMU is no longer used in this script.
|
||||
|
||||
# --- Helper Functions ---
|
||||
info() { echo -e "\033[0;32m[INFO]\033[0m $1"; }
|
||||
warn() { echo -e "\033[0;33m[WARN]\033[0m $1"; }
|
||||
error() { echo -e "\033[0;31m[ERROR]\033[0m $1" >&2; exit 1; }
|
||||
cleanup_exit() {
|
||||
warn "Cleaning up and exiting..."
|
||||
local rootfs_mount_point="${PWD}/${WORK_DIR}/rootfs" # PWD should be the script's initial dir, WORK_DIR is relative
|
||||
|
||||
# Attempt to unmount in reverse order of mounting
|
||||
if mount | grep -q "${rootfs_mount_point}/boot"; then
|
||||
info "Cleanup: Unmounting ${rootfs_mount_point}/boot..."
|
||||
sudo umount "${rootfs_mount_point}/boot" || sudo umount -l "${rootfs_mount_point}/boot" || warn "Failed to unmount ${rootfs_mount_point}/boot during cleanup."
|
||||
fi
|
||||
if mount | grep -q "${rootfs_mount_point}"; then # Check rootfs itself
|
||||
info "Cleanup: Unmounting ${rootfs_mount_point}..."
|
||||
sudo umount "${rootfs_mount_point}" || sudo umount -l "${rootfs_mount_point}" || warn "Failed to unmount ${rootfs_mount_point} during cleanup."
|
||||
fi
|
||||
if [ -n "${LOOP_DEV:-}" ] && losetup -a | grep -q "${LOOP_DEV}"; then
|
||||
info "Cleanup: Detaching loop device ${LOOP_DEV}..."
|
||||
sudo losetup -d "${LOOP_DEV}" || warn "Failed to detach ${LOOP_DEV} during cleanup."
|
||||
fi
|
||||
# sudo rm -rf "${WORK_DIR}" # Optional: clean work dir on error
|
||||
exit 1
|
||||
}
|
||||
trap cleanup_exit ERR INT TERM
|
||||
|
||||
# --- Argument Parsing ---
|
||||
DEVICE_TYPE=""
|
||||
HOSTNAME_PI=""
|
||||
CONFIG_FILE="./config.txt"
|
||||
OUTPUT_IMAGE_FILE=""
|
||||
|
||||
usage() {
|
||||
echo "Usage: $0 -d <device_type> -n <hostname> [-c <config_file>] [-o <output_image>]"
|
||||
echo " -d: Device type (rpizero2w | rpi4)"
|
||||
echo " -n: Desired hostname for the Raspberry Pi"
|
||||
echo " -c: Path to config.txt (default: ./config.txt)"
|
||||
echo " -o: Output image file name (default: snapcast-client-<device_type>-<hostname>.img)"
|
||||
exit 1
|
||||
}
|
||||
|
||||
while getopts "d:n:c:o:h" opt; do
|
||||
case ${opt} in
|
||||
d) DEVICE_TYPE="${OPTARG}";;
|
||||
n) HOSTNAME_PI="${OPTARG}";;
|
||||
c) CONFIG_FILE="${OPTARG}";;
|
||||
o) OUTPUT_IMAGE_FILE="${OPTARG}";;
|
||||
h) usage;;
|
||||
*) usage;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ -z "${DEVICE_TYPE}" ]; then
|
||||
error "Mandatory argument -d <device_type> is missing. Use -h for help."
|
||||
fi
|
||||
if [ -z "${HOSTNAME_PI}" ]; then
|
||||
error "Mandatory argument -n <hostname> is missing. Use -h for help."
|
||||
fi
|
||||
if [ "${DEVICE_TYPE}" != "rpizero2w" ] && [ "${DEVICE_TYPE}" != "rpi4" ]; then
|
||||
error "Invalid device type: '${DEVICE_TYPE}'. Must be 'rpizero2w' or 'rpi4'."
|
||||
fi
|
||||
if [ ! -f "${CONFIG_FILE}" ]; then
|
||||
error "Config file not found: ${CONFIG_FILE}"
|
||||
fi
|
||||
if [ -z "${OUTPUT_IMAGE_FILE}" ]; then
|
||||
OUTPUT_IMAGE_FILE="snapcast-client-${DEVICE_TYPE}-${HOSTNAME_PI}.img"
|
||||
fi
|
||||
|
||||
info "Starting Raspberry Pi Image Builder..."
|
||||
info "Device Type: ${DEVICE_TYPE}"
|
||||
info "Hostname: ${HOSTNAME_PI}"
|
||||
info "Config File: ${CONFIG_FILE}"
|
||||
info "Output Image: ${OUTPUT_IMAGE_FILE}"
|
||||
|
||||
# --- Load Configuration ---
|
||||
source "${CONFIG_FILE}"
|
||||
if [ -z "${WIFI_SSID:-}" ] || [ -z "${WIFI_PSK:-}" ] || [ -z "${SNAPCAST_SERVER:-}" ]; then
|
||||
error "WIFI_SSID, WIFI_PSK, or SNAPCAST_SERVER not set in config file."
|
||||
fi
|
||||
|
||||
# --- Prepare Workspace ---
|
||||
sudo rm -rf "${WORK_DIR}"
|
||||
mkdir -p "${WORK_DIR}"
|
||||
cd "${WORK_DIR}"
|
||||
|
||||
# --- 1. Base Image Acquisition ---
|
||||
IMG_XZ_NAME=$(basename "${RASPI_OS_LITE_URL}")
|
||||
IMG_NAME="${IMG_XZ_NAME%.xz}"
|
||||
|
||||
# Check for uncompressed image first
|
||||
if [ -f "${IMG_NAME}" ]; then
|
||||
info "Using existing uncompressed image: ${IMG_NAME}"
|
||||
# Else, check for compressed image
|
||||
elif [ -f "${IMG_XZ_NAME}" ]; then
|
||||
info "Found existing compressed image: ${IMG_XZ_NAME}. Extracting..."
|
||||
xz -d -k "${IMG_XZ_NAME}" # -k to keep the original .xz file
|
||||
if [ ! -f "${IMG_NAME}" ]; then # Double check extraction
|
||||
error "Failed to extract ${IMG_XZ_NAME} to ${IMG_NAME}"
|
||||
fi
|
||||
info "Extraction complete: ${IMG_NAME}"
|
||||
# Else, download and extract
|
||||
else
|
||||
info "Downloading Raspberry Pi OS Lite image: ${IMG_XZ_NAME}..."
|
||||
wget -q --show-progress -O "${IMG_XZ_NAME}" "${RASPI_OS_LITE_URL}"
|
||||
info "Extracting image..."
|
||||
xz -d -k "${IMG_XZ_NAME}" # -k to keep the original .xz file
|
||||
if [ ! -f "${IMG_NAME}" ]; then # Double check extraction
|
||||
error "Failed to extract ${IMG_XZ_NAME} to ${IMG_NAME}"
|
||||
fi
|
||||
info "Extraction complete: ${IMG_NAME}"
|
||||
fi
|
||||
|
||||
# Always work on a copy for the output image
|
||||
info "Copying ${IMG_NAME} to ${OUTPUT_IMAGE_FILE}..."
|
||||
cp "${IMG_NAME}" "${OUTPUT_IMAGE_FILE}"
|
||||
|
||||
# --- 2. Mount Image Partitions ---
|
||||
info "Setting up loop device for ${OUTPUT_IMAGE_FILE}..."
|
||||
LOOP_DEV=$(sudo losetup -Pf --show "${OUTPUT_IMAGE_FILE}")
|
||||
if [ -z "${LOOP_DEV}" ]; then error "Failed to setup loop device."; fi
|
||||
info "Loop device: ${LOOP_DEV}"
|
||||
|
||||
# Wait for device nodes to be created
|
||||
sleep 2
|
||||
sudo partprobe "${LOOP_DEV}"
|
||||
sleep 2
|
||||
|
||||
BOOT_PART="${LOOP_DEV}p1"
|
||||
ROOT_PART="${LOOP_DEV}p2"
|
||||
|
||||
mkdir -p rootfs
|
||||
info "Mounting root partition (${ROOT_PART}) to rootfs/..."
|
||||
sudo mount "${ROOT_PART}" rootfs
|
||||
info "Mounting boot partition (${BOOT_PART}) to rootfs/boot/..."
|
||||
# Note: Newer RPi OS might use /boot/firmware. Adjust if needed.
|
||||
# Check if /boot/firmware exists, if so, use it.
|
||||
# For simplicity, this script assumes /boot. If issues, this is a place to check.
|
||||
# A more robust check: BOOT_MOUNT_POINT="rootfs/boot"; if [ -d "rootfs/boot/firmware" ]; then BOOT_MOUNT_POINT="rootfs/boot/firmware"; fi
|
||||
# sudo mount "${BOOT_PART}" "${BOOT_MOUNT_POINT}"
|
||||
sudo mount "${BOOT_PART}" rootfs/boot
|
||||
|
||||
# --- 3. System Configuration (Directly on Mounted Rootfs) ---
|
||||
# QEMU and chroot are no longer used. All operations are on the mounted rootfs.
|
||||
|
||||
if ! command -v dpkg-deb &> /dev/null; then
|
||||
error "dpkg-deb command not found. Please ensure dpkg is installed and in your PATH (e.g., via Nix shell)."
|
||||
fi
|
||||
|
||||
info "Setting hostname to ${HOSTNAME_PI} on rootfs..."
|
||||
sudo sh -c "echo '${HOSTNAME_PI}' > rootfs/etc/hostname"
|
||||
sudo sed -i "s/127.0.1.1.*raspberrypi/127.0.1.1\t${HOSTNAME_PI}/g" rootfs/etc/hosts
|
||||
sudo sed -i "s/raspberrypi/${HOSTNAME_PI}/g" rootfs/etc/hosts # Also replace other occurrences
|
||||
|
||||
info "Configuring Wi-Fi (wpa_supplicant) on rootfs..."
|
||||
sudo sh -c "cat > rootfs/boot/wpa_supplicant.conf <<WPA
|
||||
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
|
||||
update_config=1
|
||||
country=AT # Set your country code
|
||||
|
||||
network={
|
||||
ssid=\"${WIFI_SSID}\"
|
||||
psk=\"${WIFI_PSK}\"
|
||||
scan_ssid=1
|
||||
}
|
||||
WPA"
|
||||
sudo chmod 600 rootfs/boot/wpa_supplicant.conf
|
||||
|
||||
info "Downloading and installing Snapcast client from .deb..."
|
||||
SNAPCLIENT_DEB_URL="https://github.com/badaix/snapcast/releases/download/v0.31.0/snapclient_0.31.0-1_arm64_bookworm_with-pulse.deb"
|
||||
SNAPCLIENT_DEB_NAME=$(basename "${SNAPCLIENT_DEB_URL}")
|
||||
|
||||
if [ ! -f "${SNAPCLIENT_DEB_NAME}" ]; then
|
||||
wget -q --show-progress -O "${SNAPCLIENT_DEB_NAME}" "${SNAPCLIENT_DEB_URL}"
|
||||
else
|
||||
info "Using existing ${SNAPCLIENT_DEB_NAME}"
|
||||
fi
|
||||
|
||||
mkdir -p snapclient_deb_extract
|
||||
info "Extracting ${SNAPCLIENT_DEB_NAME}..."
|
||||
dpkg-deb -x "${SNAPCLIENT_DEB_NAME}" snapclient_deb_extract
|
||||
info "Copying extracted files to rootfs using rsync..."
|
||||
# Use rsync to handle merging and symlinks like /lib -> /usr/lib correctly
|
||||
# The trailing slash on snapclient_deb_extract/ is important for rsync
|
||||
if ! command -v rsync &> /dev/null; then
|
||||
error "rsync command not found. Please ensure rsync is installed and in your PATH (e.g., via Nix shell)."
|
||||
fi
|
||||
|
||||
if ! command -v openssl &> /dev/null; then
|
||||
error "openssl command not found. Please ensure openssl is installed and in your PATH (e.g., via Nix shell)."
|
||||
fi
|
||||
sudo rsync -aK --chown=root:root snapclient_deb_extract/ rootfs/
|
||||
rm -rf snapclient_deb_extract "${SNAPCLIENT_DEB_NAME}"
|
||||
info "Snapclient files installed."
|
||||
|
||||
info "Attempting to create 'snapclient' user and group..."
|
||||
SNAPCLIENT_UID=987 # Choose an appropriate UID/GID
|
||||
SNAPCLIENT_GID=987
|
||||
SNAPCLIENT_USER="snapclient"
|
||||
SNAPCLIENT_GROUP="snapclient"
|
||||
SNAPCLIENT_HOME="/var/lib/snapclient" # A typical home for system users, though not strictly needed if nologin
|
||||
SNAPCLIENT_SHELL="/usr/sbin/nologin"
|
||||
|
||||
# Create group if it doesn't exist
|
||||
if ! sudo grep -q "^${SNAPCLIENT_GROUP}:" rootfs/etc/group; then
|
||||
info "Creating group '${SNAPCLIENT_GROUP}' (${SNAPCLIENT_GID}) in rootfs/etc/group"
|
||||
sudo sh -c "echo '${SNAPCLIENT_GROUP}:x:${SNAPCLIENT_GID}:' >> rootfs/etc/group"
|
||||
else
|
||||
info "Group '${SNAPCLIENT_GROUP}' already exists in rootfs/etc/group."
|
||||
fi
|
||||
|
||||
# Create user if it doesn't exist
|
||||
if ! sudo grep -q "^${SNAPCLIENT_USER}:" rootfs/etc/passwd; then
|
||||
info "Creating user '${SNAPCLIENT_USER}' (${SNAPCLIENT_UID}) in rootfs/etc/passwd"
|
||||
sudo sh -c "echo '${SNAPCLIENT_USER}:x:${SNAPCLIENT_UID}:${SNAPCLIENT_GID}:${SNAPCLIENT_USER} system user:${SNAPCLIENT_HOME}:${SNAPCLIENT_SHELL}' >> rootfs/etc/passwd"
|
||||
info "Creating basic shadow entry for '${SNAPCLIENT_USER}' (account locked)"
|
||||
# '!' in password field locks the account. '*' also works.
|
||||
sudo sh -c "echo '${SNAPCLIENT_USER}:!:19700:0:99999:7:::' >> rootfs/etc/shadow"
|
||||
# Create home directory if it doesn't exist and set permissions
|
||||
sudo mkdir -p "rootfs${SNAPCLIENT_HOME}"
|
||||
sudo chown "${SNAPCLIENT_UID}:${SNAPCLIENT_GID}" "rootfs${SNAPCLIENT_HOME}"
|
||||
sudo chmod 700 "rootfs${SNAPCLIENT_HOME}"
|
||||
else
|
||||
info "User '${SNAPCLIENT_USER}' already exists in rootfs/etc/passwd."
|
||||
fi
|
||||
# Remove previous warnings
|
||||
# warn "The snapclient user and group were NOT automatically created."
|
||||
# warn "Ensure 'snapclient' user/group exist on target or adjust service file."
|
||||
|
||||
info "Creating Snapcast systemd service file on rootfs..."
|
||||
# Note: SNAPCAST_SERVER_IP is from the config file via 'source "${CONFIG_FILE}"'
|
||||
sudo sh -c "cat > rootfs/etc/systemd/system/snapclient.service <<SERVICE
|
||||
[Unit]
|
||||
Description=Snapcast client
|
||||
After=network-online.target sound.target
|
||||
Wants=network-online.target
|
||||
|
||||
[Service]
|
||||
ExecStart=/usr/bin/snapclient -h ${SNAPCAST_SERVER} --player pulse # Or alsa, adjust player if needed
|
||||
Restart=always
|
||||
User=${SNAPCLIENT_USER}
|
||||
Group=${SNAPCLIENT_GROUP}
|
||||
# User and group should now be created by this script.
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
SERVICE"
|
||||
|
||||
info "Enabling Snapcast systemd service on rootfs..."
|
||||
sudo mkdir -p rootfs/etc/systemd/system/multi-user.target.wants
|
||||
sudo ln -sf ../../../../lib/systemd/system/snapclient.service rootfs/etc/systemd/system/multi-user.target.wants/snapclient.service
|
||||
# Note: The above symlink path assumes snapclient.service from the .deb is installed to /lib/systemd/system/snapclient.service
|
||||
# If dpkg-deb -x places it in /usr/lib/systemd/system, adjust the symlink source.
|
||||
# A common location for package-installed units is /lib/systemd/system.
|
||||
# If the .deb actually places it in /etc/systemd/system, then the symlink would be:
|
||||
# sudo ln -sf ../snapclient.service rootfs/etc/systemd/system/multi-user.target.wants/snapclient.service
|
||||
# Let's assume the .deb installs it to /usr/lib/systemd/system or /lib/systemd/system.
|
||||
# The .deb extraction copies to rootfs/, so if the .deb has ./usr/lib/systemd/system/snapclient.service,
|
||||
# it will be at rootfs/usr/lib/systemd/system/snapclient.service.
|
||||
# The service file we created is at rootfs/etc/systemd/system/snapclient.service.
|
||||
# Systemd prefers /etc/systemd/system over /lib/systemd/system.
|
||||
# So, if we create it in /etc/systemd/system, that should be fine.
|
||||
# The symlink should point to the file in /etc/systemd/system if we create it there.
|
||||
|
||||
info "Enabling dhcpcd systemd service on rootfs..."
|
||||
# Ensure the multi-user.target.wants directory exists (already created for snapclient)
|
||||
# Standard path for dhcpcd.service in base RPi OS images is /lib/systemd/system/dhcpcd.service or /usr/lib/systemd/system/dhcpcd.service
|
||||
# The symlink needs to point from /etc/systemd/system/multi-user.target.wants/ to that location.
|
||||
if [ -f "rootfs/lib/systemd/system/dhcpcd.service" ]; then
|
||||
sudo ln -sf ../../../../lib/systemd/system/dhcpcd.service rootfs/etc/systemd/system/multi-user.target.wants/dhcpcd.service
|
||||
info "dhcpcd.service enabled (symlink created from /lib)."
|
||||
elif [ -f "rootfs/usr/lib/systemd/system/dhcpcd.service" ]; then
|
||||
sudo ln -sf ../../../../usr/lib/systemd/system/dhcpcd.service rootfs/etc/systemd/system/multi-user.target.wants/dhcpcd.service
|
||||
info "dhcpcd.service enabled (symlink created from /usr/lib)."
|
||||
else
|
||||
warn "dhcpcd.service file not found in rootfs/lib/systemd/system/ or rootfs/usr/lib/systemd/system/. Cannot enable dhcpcd."
|
||||
fi
|
||||
|
||||
# Let's ensure the symlink points to the one we just created:
|
||||
sudo rm -f rootfs/etc/systemd/system/multi-user.target.wants/snapclient.service # remove if exists
|
||||
sudo ln -s ../snapclient.service rootfs/etc/systemd/system/multi-user.target.wants/snapclient.service
|
||||
info "Snapclient service enabled (symlink created)."
|
||||
|
||||
|
||||
if [ "${DEVICE_TYPE}" == "rpizero2w" ]; then
|
||||
info "Applying HifiBerry DAC+ overlay for Raspberry Pi Zero 2 W on rootfs..."
|
||||
sudo sh -c "echo 'dtoverlay=hifiberry-dacplus' >> rootfs/boot/config.txt"
|
||||
fi
|
||||
|
||||
info "Configuring for read-only filesystem on rootfs..."
|
||||
# 1. Modify /etc/fstab
|
||||
sudo sed -i -E 's/(\s+\/\s+ext4\s+)(defaults,noatime)(\s+0\s+1)/\1ro,defaults,noatime\3/' rootfs/etc/fstab
|
||||
sudo sed -i -E 's/(\s+\/boot\s+vfat\s+)(defaults)(\s+0\s+2)/\1ro,defaults,nofail\3/' rootfs/etc/fstab
|
||||
|
||||
# 2. Add fastboot and ro to /boot/cmdline.txt
|
||||
BOOT_CMDLINE_FILE="rootfs/boot/cmdline.txt"
|
||||
if [ -f "${BOOT_CMDLINE_FILE}" ]; then
|
||||
if ! sudo grep -q "fastboot" "${BOOT_CMDLINE_FILE}"; then
|
||||
sudo sed -i '1 s/$/ fastboot/' "${BOOT_CMDLINE_FILE}"
|
||||
fi
|
||||
if ! sudo grep -q " ro" "${BOOT_CMDLINE_FILE}"; then # space before ro is important
|
||||
sudo sed -i '1 s/$/ ro/' "${BOOT_CMDLINE_FILE}" # Add ro if not present
|
||||
fi
|
||||
else
|
||||
warn "${BOOT_CMDLINE_FILE} not found. Skipping cmdline modifications."
|
||||
fi
|
||||
|
||||
info "Creating userconf.txt for predefined user setup..."
|
||||
DEFAULT_USER="rpiuser"
|
||||
DEFAULT_PASS="raspberry"
|
||||
# Check if openssl is available (already done above, but good to be mindful here)
|
||||
ENCRYPTED_PASS=$(echo "${DEFAULT_PASS}" | openssl passwd -6 -stdin)
|
||||
if [ -z "${ENCRYPTED_PASS}" ]; then
|
||||
error "Failed to encrypt password using openssl."
|
||||
fi
|
||||
sudo sh -c "echo '${DEFAULT_USER}:${ENCRYPTED_PASS}' > rootfs/boot/userconf.txt"
|
||||
sudo chmod 600 rootfs/boot/userconf.txt # Set appropriate permissions
|
||||
info "userconf.txt created with user '${DEFAULT_USER}'."
|
||||
|
||||
info "Attempting to disable unnecessary write-heavy services on rootfs..."
|
||||
# This is a best-effort attempt by removing common symlinks.
|
||||
# The exact paths might vary or services might not be installed.
|
||||
DISABLED_SERVICES_COUNT=0
|
||||
declare -a services_to_disable=(
|
||||
"apt-daily.timer"
|
||||
"apt-daily-upgrade.timer"
|
||||
"man-db.timer"
|
||||
"dphys-swapfile.service" # RPi OS specific swap service
|
||||
"logrotate.timer"
|
||||
"motd-news.timer"
|
||||
)
|
||||
declare -a common_wants_dirs=(
|
||||
"multi-user.target.wants"
|
||||
"timers.target.wants"
|
||||
"sysinit.target.wants"
|
||||
# Add other .wants directories if known
|
||||
)
|
||||
for service in "${services_to_disable[@]}"; do
|
||||
for wants_dir in "${common_wants_dirs[@]}"; do
|
||||
link_path="rootfs/etc/systemd/system/${wants_dir}/${service}"
|
||||
if [ -L "${link_path}" ]; then
|
||||
info "Disabling ${service} by removing symlink ${link_path}"
|
||||
sudo rm -f "${link_path}"
|
||||
DISABLED_SERVICES_COUNT=$((DISABLED_SERVICES_COUNT + 1))
|
||||
fi
|
||||
done
|
||||
# Also check for service files directly in /lib/systemd/system and mask them if we want to be more aggressive
|
||||
# For now, just removing symlinks from /etc/systemd/system is less intrusive.
|
||||
done
|
||||
info "Attempted to disable ${DISABLED_SERVICES_COUNT} services by removing symlinks."
|
||||
warn "Service disabling is best-effort. Review target system services."

# No apt cache to clean, since we did not use apt inside the image.

info "System configuration on rootfs complete."

# --- 5. Cleanup & Unmount ---
# Pseudo-filesystems and QEMU are no longer used, so their cleanup is removed.

info "Unmounting partitions..."
# Unmount boot first, then root
# BOOT_MOUNT_POINT="rootfs/boot"; if [ -d "rootfs/boot/firmware" ]; then BOOT_MOUNT_POINT="rootfs/boot/firmware"; fi
# sudo umount "${BOOT_MOUNT_POINT}"
sudo umount rootfs/boot
sudo umount rootfs

info "Detaching loop device ${LOOP_DEV}..."
sudo losetup -d "${LOOP_DEV}"
unset LOOP_DEV # important for the cleanup trap
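# (The cleanup trap defined earlier in the script presumably does something along the lines of
#    [ -n "${LOOP_DEV:-}" ] && sudo losetup -d "${LOOP_DEV}"
#  so clearing LOOP_DEV here avoids a second detach attempt on normal exit.)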

rmdir rootfs
cd .. # Back to original directory

# --- 6. Shrink Image (Optional) ---
info "Image shrinking (optional step)..."
info "If you want to shrink the image, you can use a tool like PiShrink."
info "Example: sudo pishrink.sh ${WORK_DIR}/${OUTPUT_IMAGE_FILE}"
# Check if pishrink is available and executable
# if [ -x "./pishrink.sh" ]; then
#     info "Running PiShrink..."
#     sudo ./pishrink.sh "${WORK_DIR}/${OUTPUT_IMAGE_FILE}"
# else
#     warn "PiShrink script (pishrink.sh) not found or not executable in current directory. Skipping shrink."
# fi

# --- 7. Final Output ---
FINAL_IMAGE_PATH="${WORK_DIR}/${OUTPUT_IMAGE_FILE}"
info "---------------------------------------------------------------------"
info "Raspberry Pi image created successfully!"
info "Output image: ${FINAL_IMAGE_PATH}"
info "Device Type: ${DEVICE_TYPE}"
info "Hostname: ${HOSTNAME_PI}"
info "Wi-Fi SSID: ${WIFI_SSID}"
info "Snapcast Server: ${SNAPCAST_SERVER}"
info ""
info "To write to an SD card (e.g., /dev/sdX - BE VERY CAREFUL):"
info "  sudo dd bs=4M if=${FINAL_IMAGE_PATH} of=/dev/sdX status=progress conv=fsync"
info "---------------------------------------------------------------------"

exit 0
19
raspberry-new/shell.nix
Normal file
@@ -0,0 +1,19 @@
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShellNoCC {
  packages = with pkgs; [
    gnumake
    ncurses
    pkg-config
    flex
    bison
    openssl
    bc
    which
    file
  ];

  shellHook = ''
    export KCONFIG_CONFIG=.config
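    # KCONFIG_CONFIG is the standard kbuild variable for pointing the build at an
    # out-of-tree config file; together with the flex/bison/bc/openssl inputs above,
    # this shell is presumably meant for building a Raspberry Pi Linux kernel with
    # its configuration kept in ./.config.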
  '';
}
@@ -11,3 +11,57 @@ export TMPDIR=/nix/persist/home/dominik/tmp/build-sdcard
- add wifi psk
- nix-build '<nixpkgs/nixos>' -A config.system.build.sdImage -I nixos-config=./sd-card-zero.nix --argstr system aarch64-linux


install raspberry pi os
connect via ssh

edit /boot/firmware/config.txt
dtoverlay=hifiberry-dacplus-std
force_eeprom_read=0
auto_initramfs=1
disable_fw_kms_setup=1
arm_64bit=1
disable_overscan=1
arm_boost=1
[cm4]
otg_mode=1
[cm5]
dtoverlay=dwc2,dr_mode=host

edit /boot/firmware/cmdline.txt
add ro to the end of the line

setup user and group snapclient, add to audio group
sudo groupadd -r snapclient
sudo useradd -r -g snapclient -G audio snapclient
download snapclient release with pulse (arm64 or armhf, depending on the board)
wget https://github.com/badaix/snapcast/releases/download/v0.31.0/snapclient_0.31.0-1_arm64_bookworm_with-pulse.deb
wget https://github.com/badaix/snapcast/releases/download/v0.31.0/snapclient_0.31.0-1_armhf_bookworm_with-pulse.deb

install the matching deb package
sudo dpkg -i snapclient_0.31.0-1_arm64_bookworm_with-pulse.deb
sudo dpkg -i snapclient_0.31.0-1_armhf_bookworm_with-pulse.deb
sudo apt install -f -y
vim.tiny /etc/default/snapclient
START_SNAPCLIENT=true
SNAPCLIENT_OPTS="--hostID music-bedroom -h snapcast.cloonar.com"

sudo systemctl enable snapclient
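
optional sanity check after enabling (standard systemd/journald commands, nothing snapcast-specific):
systemctl status snapclient
journalctl -u snapclient -f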

make filesystem read-only
mv /etc/resolv.conf /var/run/
ln -s /var/run/resolv.conf /etc/resolv.conf
add to /etc/NetworkManager/NetworkManager.conf under [main]:
rc-manager=file

change /etc/fstab
proc                  /proc           proc  defaults             0 0
PARTUUID=3bd31f85-01  /boot/firmware  vfat  defaults,ro          0 2
PARTUUID=3bd31f85-02  /               ext4  defaults,noatime,ro  0 1

tmpfs /tmp          tmpfs defaults,noatime,nosuid,nodev         0 0
tmpfs /var/tmp      tmpfs defaults,noatime,nosuid,nodev         0 0
tmpfs /var/log      tmpfs defaults,noatime,nosuid,nodev,noexec  0 0
tmpfs /var/lib/dhcp tmpfs defaults,noatime,nosuid,nodev,noexec  0 0
@@ -18,16 +18,10 @@ in
  nixpkgs.buildPlatform.system = "x86_64-linux"; # Change if building on a different architecture
  imports = [
    <nixpkgs/nixos/modules/installer/sd-card/sd-image-aarch64.nix>
    "${builtins.fetchGit { url = "https://github.com/NixOS/nixos-hardware.git"; }}/raspberry-pi/4"
    # "${builtins.fetchTarball "https://github.com/NixOS/nixos-hardware/archive/master.tar.gz"}/raspberry-pi/4"
  ];

  # nixpkgs.overlays = [
  #   (final: super: {
  #     makeModulesClosure = x:
  #       super.makeModulesClosure (x // { allowMissing = true; });
  #   })
  # ];

  nix.settings.trusted-users = [ "root" "dominik" ];

  swapDevices = [ { device = "/swapfile"; size = 2048; } ]; # 2GB swap
@@ -44,58 +38,21 @@ in
  };
  networking.firewall.logRefusedConnections = false;

  # boot.kernelPackages = pkgs.linuxPackages_rpi3;
  # hardware.deviceTree.enable = true;
  # hardware.deviceTree.overlays = [ {
  #   name = "hifiberry-dacplus";
  #   dtboFile = "${pkgs.linuxKernel.kernels.linux_rpi3}/dtbs/overlays/hifiberry-dacplus.dtbo";
  # } ];

  hardware.deviceTree.filter = "bcm2708-rpi-zero*.dtb"; # This line does not change anything in this case
  hardware.deviceTree.enable = true;
  hardware.deviceTree.overlays = [
    {
      name = "hifiberry-dacplusadc";
      dtboFile = "${pkgs.device-tree_rpi.overlays}/hifiberry-dacplus.dtbo";
      # dtsText = ''
      #   /dts-v1/;
      #   /plugin/;
      #
      #   / {
      #     compatible = "brcm,bcm2835";
      #
      #     fragment@0 {
      #       target = <&i2s>;
      #       __overlay__ {
      #         status = "okay";
      #       };
      #     };
      #
      #     fragment@1 {
      #       target-path = "/";
      #       __overlay__ {
      #         dacplus_codec: dacplus-codec {
      #           #sound-dai-cells = <0>;
      #           compatible = "hifiberry,hifiberry-dacplus";
      #           status = "okay";
      #         };
      #       };
      #     };
      #
      #     fragment@2 {
      #       target = <&sound>;
      #       __overlay__ {
      #         compatible = "hifiberry,hifiberry-dacplus";
      #         i2s-controller = <&i2s>;
      #         status = "okay";
      #       };
      #     };
      #   };
      # '';
    }
  ];
  hardware.raspberry-pi."4".apply-overlays-dtmerge.enable = true;
  systemd.services = {
    "load-dacplus-overlay" = {
      serviceConfig = {
        Type = "oneshot";
      };
      wantedBy = ["multi-user.target"];
      script = ''
        ${pkgs.libraspberrypi}/bin/dtoverlay -d ${config.boot.kernelPackages.kernel}/dtbs/overlays/ hifiberry-dacplus || echo "already in use"
      '';
    };
  };

  sound.enable = true;
  # sound.enable = true;
  # hardware.pulseaudio.enable = true;

  systemd.services.snapclient = {
57
raspberry/zero-w.md
Normal file
@@ -0,0 +1,57 @@
install raspberry pi os
connect via ssh

edit /boot/firmware/config.txt
dtoverlay=hifiberry-dacplus-std
force_eeprom_read=0
auto_initramfs=1
disable_fw_kms_setup=1
disable_overscan=1
arm_boost=1
[cm4]
otg_mode=1
[cm5]
dtoverlay=dwc2,dr_mode=host

edit /boot/firmware/cmdline.txt
add ro to the end of the line

disable unused services
sudo systemctl disable bluetooth
sudo systemctl disable hciuart.service
sudo systemctl disable avahi-daemon.service
sudo systemctl disable triggerhappy.service
sudo systemctl disable dphys-swapfile.service
sudo systemctl disable apt-daily.timer
sudo systemctl disable apt-daily-upgrade.timer

setup user and group snapclient, add to audio group
sudo groupadd -r snapclient
sudo useradd -r -g snapclient -G audio snapclient
download snapclient release armhf with pulse
wget https://github.com/badaix/snapcast/releases/download/v0.31.0/snapclient_0.31.0-1_armhf_bookworm_with-pulse.deb

install the deb package
sudo dpkg -i snapclient_0.31.0-1_armhf_bookworm_with-pulse.deb
sudo apt install -f -y
vim.tiny /etc/default/snapclient
START_SNAPCLIENT=true
SNAPCLIENT_OPTS="--hostID music-bedroom -h snapcast.cloonar.com"

sudo systemctl enable snapclient

make filesystem read-only
mv /etc/resolv.conf /var/run/
ln -s /var/run/resolv.conf /etc/resolv.conf
add to /etc/NetworkManager/NetworkManager.conf under [main]:
rc-manager=file

change /etc/fstab
proc                  /proc           proc  defaults             0 0
PARTUUID=3bd31f85-01  /boot/firmware  vfat  defaults,ro          0 2
PARTUUID=3bd31f85-02  /               ext4  defaults,noatime,ro  0 1

tmpfs /tmp          tmpfs defaults,noatime,nosuid,nodev         0 0
tmpfs /var/tmp      tmpfs defaults,noatime,nosuid,nodev         0 0
tmpfs /var/log      tmpfs defaults,noatime,nosuid,nodev,noexec  0 0
tmpfs /var/lib/dhcp tmpfs defaults,noatime,nosuid,nodev,noexec  0 0
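
note: with this fstab the root and /boot/firmware partitions come up read-only; for maintenance they
can be remounted writable temporarily (standard mount usage, nothing specific to this setup):
sudo mount -o remount,rw /
sudo mount -o remount,rw /boot/firmware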
64
scripts/test-configuration
Executable file
@@ -0,0 +1,64 @@
#!/usr/bin/env bash
set -Euo pipefail

VERBOSE=false
SHOW_TRACE_OPT=""

# Parse options ("${1:-}" so that set -u does not abort when no argument is given)
if [[ "${1:-}" == "-v" || "${1:-}" == "--verbose" ]]; then
    VERBOSE=true
    SHOW_TRACE_OPT="--show-trace"
    shift # Remove the verbose flag from arguments
fi

# Check if a hostname argument is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 [-v|--verbose] <hostname>" >&2
    exit 1
fi
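# Example invocations (the hostname "myhost" is a placeholder for a directory under hosts/):
#   ./scripts/test-configuration myhost
#   ./scripts/test-configuration -v myhost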

HOSTNAME="$1"

# Check if the 'nixos-rebuild' command is available
if ! command -v nixos-rebuild > /dev/null; then
    echo "ERROR: 'nixos-rebuild' command not found. Please ensure it is installed and in your PATH." >&2
    exit 1
fi

# Determine the absolute directory where the script itself is located
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")

# Construct the absolute path to the host's configuration file
# and resolve it to a canonical path
CONFIG_PATH=$(readlink -f "$SCRIPT_DIR/../hosts/$HOSTNAME/configuration.nix")

# Verify that CONFIG_PATH exists and is a regular file
if [ ! -f "$CONFIG_PATH" ]; then
    echo "ERROR: Configuration file not found at '$CONFIG_PATH' for host '$HOSTNAME'." >&2
    exit 1
fi

echo "INFO: Attempting dry-build for host '$HOSTNAME' using configuration '$CONFIG_PATH'..."
if [ "$VERBOSE" = true ]; then
    echo "INFO: Verbose mode enabled, --show-trace will be used."
fi

# Execute nixos-rebuild dry-build, capturing both output streams and the exit code.
# $SHOW_TRACE_OPT is intentionally left unquoted so an empty value expands to nothing.
NIX_OUTPUT_ERR=$(nixos-rebuild dry-build $SHOW_TRACE_OPT -I nixos-config="$CONFIG_PATH" 2>&1)
NIX_EXIT_STATUS=$?

# Check the exit status
if [ "$NIX_EXIT_STATUS" -eq 0 ]; then
    echo "INFO: Dry-build for host '$HOSTNAME' completed successfully."
    if [ "$VERBOSE" = true ]; then
        echo "Output from nixos-rebuild:"
        echo "$NIX_OUTPUT_ERR"
    fi
    exit 0
else
    echo "ERROR: Dry-build for host '$HOSTNAME' failed. 'nixos-rebuild' exited with status $NIX_EXIT_STATUS." >&2
    echo "Output from nixos-rebuild:" >&2
    echo "$NIX_OUTPUT_ERR" >&2
    exit "$NIX_EXIT_STATUS"
fi