Immutable distros were originally very focused on servers, but more recently workstation distros have started gaining interest as the concept has matured.
With the advent of cloud computing, “immutable infrastructure” started becoming more and more popular. The concept started out as someone grabbing a normal Linux distribution, installing all the necessary bits for the server purpose they needed, and then baking that into an image. Now you could launch new copies of that machine whenever you felt like it, and they would behave exactly the same. If any of them started doing something wonky, you just destroyed it and launched a new copy. This was very useful for software developers and operations people, who could now more easily reason about how things behaved, and be sure that a difference in behaviour wasn’t because someone forgot to enable a setting or install a tool, or skipped a step in the setup.
On the software development side, more and more developers simultaneously started making use of functional programming methods, and along with those, immutable data structures. Fundamentally, instead of adding an item to a list, you make a new list with all the old items and the new one in it. You never change the data after its creation. Each “change” is a new copy, with the difference already built in.
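To make that concrete, here’s a tiny Python sketch of the idea (illustrative only): “appending” to an immutable list never touches the original, it produces a new one.

```python
# Immutable "append": build a new tuple instead of mutating the old one.
old = (1, 2, 3)
new = old + (4,)  # a fresh copy with the difference already built in

assert old == (1, 2, 3)  # the original is never changed after its creation
assert new == (1, 2, 3, 4)
```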
Then containers started becoming popular, which allowed software developers to build a container image on their local computer and ship that image to a server, where it behaved exactly as it did on their local machine. This also meant that the actual OS became less and less important, as everything needed by the container was already bundled inside it. The containers also worked as “immutable”: everything you installed or changed within a container was immediately lost when the container was destroyed, and a recreated container would be exactly as the image was built.
The advent of containerised workloads gave rise to a lot of different Linux distributions. Since the containers pretty much only needed the Linux kernel from the OS, it was pretty easy to make a container-centric operating system, and in turn lock down everything else, even completely omitting a package manager. Stuff like CoreOS, Flatcar, RancherOS, and many others were immutable Linuxes that only catered to containers. I don’t know the exact mechanism for all of these, but at least the original CoreOS and Flatcar made the actual system read-only, and on top of that had two main partitions: one would be the current system, and the other would be where updates were downloaded. Once an update was downloaded and ready, you just rebooted the machine, and it would be running off the updated partition. This also meant easy rollback if you got a broken update: you could just boot off the other, unupdated partition.
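As a rough sketch of that A/B scheme (the slot names are made up for illustration; the real systems use GPT partition priorities and an update engine):

```python
# Toy model of the A/B partition update scheme: write the update to the idle
# slot, flip which slot the bootloader points at, and keep the old slot
# around for rollback.
active, passive = "rootA", "rootB"

def write_update_to(slot):
    print(f"downloading new OS image onto {slot}")

def reboot_into(slot):
    print(f"bootloader set to {slot}, rebooting")

write_update_to(passive)           # the running system is never touched
active, passive = passive, active  # flip the slots
reboot_into(active)                # now running the updated system

# Broken update? Flip back and reboot; the old system is still intact.
active, passive = passive, active
reboot_into(active)
```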
Containers were, however, rather ill suited for desktop applications, as there was no good way to provide a GUI. You could serve up a web page, but native GUI apps were tricky.
That’s where Flatpak, Snaps and all that came in, essentially bringing the container mentality to normal desktop apps. This brought immutability to individual apps, as they bundle their own dependencies and therefore don’t have to rely on the correct versions being available on the machine.
The logical next step was of course to add immutability to workstation distributions. This is where Fedora Silverblue, NixOS, and many others really started taking off.
I believe Fedora Silverblue uses ostree to make the system “immutable”. Of course you can still make changes to your system, but the system is built to be completely aware of the state before and the state after; this is what’s called “atomic”. There’s no such thing as a partially installed package. There is only the state before installing something, and the state where the thing is fully installed. You can roll back to any of the previous states, to recover from a broken update or misconfiguration. This also makes trying out new things risk-free. Tried out a new desktop environment and it broke your system? Just roll back. Accidentally uninstalled a critical package? Just roll back. Want to try out a new display manager? Just apply the config, and roll back if you don’t like it.
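A toy model of what “atomic” means here (just illustrative; ostree actually stores complete filesystem trees in a content-addressed repository):

```python
# Every change produces a complete new state; rollback just points back at an
# older one. There is never a half-finished in-between state.
deployments = [frozenset({"kernel", "bash", "gnome-shell"})]
current = 0

def install(package):
    global current
    deployments.append(deployments[current] | {package})  # old state untouched
    current = len(deployments) - 1

def rollback():
    global current
    current = max(current - 1, 0)

install("plasma-desktop")  # try out a new desktop environment
rollback()                 # broke something? the previous state still exists
assert "plasma-desktop" not in deployments[current]
```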
SteamOS also uses the multiple-partition approach, and even allows you to turn off the immutability. Other distributions aren’t as lenient: there’s no way to turn off the immutability in NixOS or Fedora Silverblue.
ZFS doesn’t really support mismatched disks: in a RAIDZ vdev, every disk counts as the smallest one. In OP’s case it would behave as if they were 4x 2TB disks, making 4TB of raw storage unusable, and with 1 disk of parity that would yield 6TB of usable storage. In the future the 2x 2TB disks could be swapped with 4TB disks, and then ZFS would make use of all the storage, yielding 12TB of usable storage.
BTRFS handles mismatched disks just fine; however, its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but stores two copies of everything, so half the raw storage goes to redundancy. That would again yield a total of 6TB usable with the current disks.
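The quick math for both layouts, assuming OP’s disks are 2x 4TB plus 2x 2TB:

```python
disks = [4, 4, 2, 2]  # TB, assumed from the thread

# ZFS RAIDZ1: every disk counts as the smallest one, minus one disk of parity.
raidz1_usable = min(disks) * (len(disks) - 1)        # 2 * 3 = 6 TB
raw_unusable = sum(disks) - min(disks) * len(disks)  # 12 - 8 = 4 TB wasted

# BTRFS RAID1: two copies of everything, so roughly half the raw total.
btrfs_raid1_usable = sum(disks) / 2                  # 12 / 2 = 6 TB

print(raidz1_usable, raw_unusable, btrfs_raid1_usable)  # 6 4 6.0
```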
SSD longevity seems to be better than that of HDDs overall. The limiting factor is how many write cycles the SSD can handle, but in most cases the write endurance is so high that most home/NAS systems will never reach it.
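Some back-of-the-envelope numbers (the endurance rating and write volume here are assumptions; check your drive’s actual spec sheet):

```python
tbw_rating = 1200     # assumed rating of a 2TB consumer SSD, in terabytes written
daily_writes_gb = 50  # assumed write volume of a fairly busy home NAS

years_to_wear_out = tbw_rating * 1000 / daily_writes_gb / 365
print(f"~{years_to_wear_out:.0f} years")  # ~66 years at this write rate
```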
SSDs are however really bad for cold storage, as they will lose the charge stored in their cells if left unpowered too long. When the SSD is powered it will automatically refresh the cells in the background to ensure they don’t lose their charge.
Are you profiting from running systemd?
My home-assistant installation alone is too much for my Raspberry Pi 3. It depends entirely on how much data it’s processing and needs to keep in memory.
OctoPrint needs to respond in a timely manner, so you will want to have the system mostly idle (at least below 60 percent CPU at all times); preferably, OctoPrint should be the only thing running on the system, unless it’s rather powerful.
If I were you, I would install OctoPrint exclusively on your Raspberry Pi 3, and then buy a Raspberry Pi 4 for the other services.
I’m running Pi-hole and a WireGuard VPN on an old Raspberry Pi 2, which is perfectly fine if you are not expecting gigabit speeds on the VPN.
Docker for webdev? You know that Docker is server-side, right?
Sure… Which is why Valve has built Proton, which makes nearly all PC games run on Linux… Sure, the developers of the games themselves should have made the Linux port, but for many developers it’s cost prohibitive to support another platform with very few potential customers.
But the more players who run Linux (and Steam Deck by extension), the larger the incentive for developers to support Linux natively. And in turn more games will get made for Linux, which will draw in more people to switch to Linux.
So as long as my game runs, I don’t care whether it was the original developer, Valve, or an open-source developer who wrote the code that made it work. And luckily I’m one of those people who don’t mind having to tinker a bit to make things work (hence why I’m on Linux in the first place).
If we as gamers stubbornly refuse to switch to Linux until our games are natively ported, then developers might as well just develop their games for Windows, where the players are…
Ah, you’re on Windows, I completely missed that from your original post… Then it makes complete sense.
Can I ask why you say that Docker is “too much”?
Docker generally simplifies the installation and deployment of a great many applications, and makes upgrading them extremely easy.
And it has an extremely low overhead…
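For example, here’s roughly what deploy-and-upgrade looks like with the Docker SDK for Python (the image, name, and port are just examples; the equivalent CLI is a docker pull plus a docker run):

```python
import docker  # pip install docker

client = docker.from_env()

# Deploy: pull an image and start a container from it.
client.images.pull("nginx", tag="1.27")
client.containers.run("nginx:1.27", name="web", detach=True,
                      ports={"80/tcp": 8080})

# Upgrade: pull the newer tag and recreate the container. Everything the app
# needs lives in the image (or in explicitly mounted volumes), so this is safe.
client.images.pull("nginx", tag="1.28")
old = client.containers.get("web")
old.stop()
old.remove()
client.containers.run("nginx:1.28", name="web", detach=True,
                      ports={"80/tcp": 8080})
```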
I really don’t see much benefit to running two clusters.
I’m also running single clusters with multiple ingress controllers both at home and at work.
If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t.
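As a starting point, a “default deny all ingress” policy for a namespace looks something like this, sketched via the official Kubernetes Python client (the namespace name is just an example):

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or load_incluster_config() when running in-cluster

deny_all_ingress = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
        policy_types=["Ingress"],  # no ingress rules listed, so all ingress is denied
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="my-apps", body=deny_all_ingress
)
```

From there you explicitly allow only the traffic you actually need, e.g. from your ingress controller’s namespace to the app pods.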
There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.
You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.