• 0 Posts
  • 79 Comments
Joined 2 years ago
Cake day: June 30th, 2023


  • I didn’t count the phone as a separate copy: one copy on (phone and internal nvme1), the second on nvme2, and the third in the cloud.

    Though I have only two copies of normal data myself, I consider live plus cloud to be enough for most data. Everything very important has more backups in other ways (Bitwarden keeps an exportable local copy on every logged-in device, and images are stored in Immich on my server, making it three devices).


    • Maintain three (3) copies of your data: This includes the original data and at least two copies.
    • Use two (2) different types of media for storage: Store your data on two distinct forms of media to enhance redundancy.
    • Keep at least one (1) copy off-site: To ensure data safety, have one backup copy stored in an off-site location, separate from your primary data and on-site backups.

    • You have 3 copies: one on your phone and nvme, one on the backup nvme, and one in the cloud.
    • You have 2 media: internal SSD and cloud (your phone would count as a third if it weren’t auto-synced).
    • You have 1 off-site copy: in the cloud.

  • // abandon all hope ye who commit here
    (?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])
    

    Edit: dammit, not the first to post this abomination.


  • I have RSS feeds for my main services’ updates, so I know what new features I have. The services mostly run in Podman containers and update automatically each Monday, and I also have daily backups (timed to run just before the update on Monday) in case anything does break.
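
    The timing trick looks roughly like this on NixOS (a minimal sketch; the unit name, paths, and rsync command are made-up placeholders for whatever backup tool you use):

    { pkgs, ... }:
    {
        systemd.services.daily-backup = {
            description = "Back up service data before the updates run";
            serviceConfig.Type = "oneshot";
            # placeholder paths and tool; substitute your own backup command
            script = "${pkgs.rsync}/bin/rsync -a /home/n8n/n8n-data/ /backup/n8n-data/";
        };
        systemd.timers.daily-backup = {
            wantedBy = [ "timers.target" ];
            timerConfig.OnCalendar = "03:30"; # daily, shortly before the Monday update slot
        };
    }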

    If something breaks, I fix it depending on how much I want or need it; mostly it’s a matter of half an hour. With my current NixOS/Podman setup I haven’t needed to fix anything this year, so it breaks infrequently.

    Also, why are you using Kubernetes on a single host if you want minimal maintenance? XD

    My recommendation is to switch to just managing containers. You should be able to export the volumes out of Kubernetes and import them as normal volumes; as long as they’re mounted in the right place you keep your data, and if it doesn’t work you can just try again (see the sketch below). It’s not like you need to destroy the current system to slowly replace it.
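
    Once the data is copied out of the old volume, the Podman side is just a bind mount in the right place. A minimal sketch in the same home-manager syntax as my config further down (image name and paths are made up):

    services.podman.containers.myapp = {
        image = "docker.io/example/myapp:latest"; # placeholder image
        volumes = [
            "/srv/myapp-data:/var/lib/myapp" # data exported from the old kubernetes volume, mounted where the app expects it
        ];
    };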

    Edit: I also recommend updating and rebooting frequently; this stops updates and unstable configurations from piling up.
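
    On NixOS the update-and-reboot part can be this small (the schedule is just an example):

    system.autoUpgrade = {
        enable = true;
        allowReboot = true; # reboots when the new generation needs it (e.g. a kernel change)
        dates = "Mon 04:00";
    };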


  • Yeah, it works great and is very secure, but every time I create a new service it’s a lot of copy-paste boilerplate. Maybe I’ll put most of that into a Nix function at some point (a rough sketch is at the end of this comment), but until then here’s an example n8n config, as loaded from the main NixOS file.

    I wrote this last night for testing purposes and just added comments. The config works, but n8n uses SQLite and probably needs some other stuff that I haven’t had a chance to use yet, so keep that in mind.
    Podman support in home-manager is also really new and doesn’t support pods (multiple containers, one loopback) and some other things yet; most of that can be compensated for with extraPodmanArgs. Before this existed I used plain file definitions to write the quadlet/systemd configs, which was even more boilerplate, but also mostly copy-paste.

    Gaze into the boilerplate
    { config, pkgs, lib, ... }:
    
    {
        users.users.n8n = {
            # calculate sub{u,g}id using uid
            subUidRanges = [{
                startUid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                count = 65536;
            }];
            subGidRanges = [{
                startGid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                count = 65536;
            }];
            isNormalUser = true;
            linger = true; # start user services on system start; the first start after `nixos-rebuild switch` still has to be done manually for some reason though
            openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys; # allows the ssh keys that can login as root to login as this user too
        };
        home-manager.users.n8n = { pkgs, ... }:
        let
            dir = config.users.users.n8n.home;
            data-dir = "${dir}/${config.users.users.n8n.name}-data"; # defines the path "/home/n8n/n8n-data" using evaluated home paths; could probably remove a lot of redundant n8n definitions
        in
        {
            home.stateVersion = "24.11";
            systemd.user.tmpfiles.rules =
            let
                folders = [
                    "${data-dir}"
                    #"${data-dir}/data-volume-name-one" 
                ];
                formatted_folders = map (folder: "d ${folder} - - - -") folders; # formats each path for systemd-tmpfiles so it gets created as a folder
            in formatted_folders;
    
            services.podman = {
                enable = true;
                containers = {
                    n8n-app = { # define a container; the service name is "podman-n8n-app.service" in case you need to make multiple containers depend on and run after each other
                        image = "docker.n8n.io/n8nio/n8n";
                        ports = [
                            "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678" # I'm using a self defined option to keep track of all ports and uids in a seperate file, these values just map to "127.0.0.1:30023:5678", a caddy does a reverse proxy there with the same option as the port.
                        ];
                        volumes = [
                            "${data-dir}:/home/node/.n8n" # the folder we created above
                        ];
                        userNS = "keep-id:uid=1000,gid=1000"; # n8n stores files as non-root inside the container, so they end up under some high uid outside and the user running these containers can’t read them; this maps uid 1000 inside the container to the uid of the user running podman. Podman takes a lot of time to generate the image for the first run though, so make sure systemd doesn’t time out
                        environment = {
                            # MYHORSE = "amazing";
                        };
                        # there's also an environmentfile option for secret management, which works with sops if you set the owner of the secret/secret template
                        extraPodmanArgs = [
                            "--pull=newer" # always pull newer images when starting, I could make this declaritive but I haven't found a good way to automagically update the container hashes in my nix config at the push of a button.
                        ];
                        # a few more options exist that I didn’t need here
                    };
                };
            };
        };
    }
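
    For what it’s worth, the user boilerplate above could probably collapse into a helper like this (untested sketch; mkServiceUser is a made-up name):

    let
        mkServiceUser = name: uid: {
            users.users.${name} = {
                inherit uid;
                isNormalUser = true;
                linger = true;
                # same sub{u,g}id math as above, derived from the uid
                subUidRanges = [{ startUid = 100000 + 65536 * (uid - 999); count = 65536; }];
                subGidRanges = [{ startGid = 100000 + 65536 * (uid - 999); count = 65536; }];
            };
        };
    in
        mkServiceUser "n8n" 1001 # made-up uid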