Give Portainer a try. It’s actually pretty good for getting a bird’s-eye view, and lets you manage more than one Docker server.
It’s not perfect, of course.
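If you want to kick the tires, the server itself runs as a container. Roughly like this (a sketch; double-check the image tag and ports against the current Portainer docs before relying on it):

```
# Portainer CE server (sketch; verify against the current docs)
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```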
pfBlockerNG on pfSense is very powerful for DNS- and IP-level blocking.
Can you not just back up the PostgreSQL transaction (WAL) logs, with periodic full backups, purged in accordance with your needs? That’s a much safer way to approach DBs anyway.
(Exclude the online DB files from your file-system replication.)
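For reference, this is roughly what that looks like, as a minimal sketch assuming PostgreSQL 10+ and a /backup directory that already exists (the paths are placeholders):

```
# postgresql.conf: continuous WAL archiving
wal_level = replica        # the default since PostgreSQL 10
archive_mode = on
# the 'test' guard avoids silently overwriting an already-archived segment
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
```

```
# periodic full (base) backup, e.g. from cron; purge old ones per your retention needs
pg_basebackup -D /backup/base/$(date +%F) -Ft -z -X fetch
```

A restore is then the latest base backup plus replayed WAL, which is why the live data files themselves can stay out of the file-system replication.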
My concern (back then) with keeping the Greens spun up was that I’d lose their energy-saving potential without getting the benefits of a purpose-built NAS drive.
In my current NAS, I just have a pair of WD Red Plus drives. I don’t have an NVMe cache or anything, but that’s never been an issue given my limited needs.
I am starting to plan out my next NAS, though, as the current one (Synology DS716+) has been running for a long time. I figure I can get a couple more years out of it, but I want to have something planned in the wings just in case. (Seriously looking at a switch to TrueNAS but grappling with the price of hardware vs. an appliance…) My hope is that SSDs drop in price enough to make the leap when the time comes.
I had WD Greens in my first NAS (they were HDDs, though). This was ill-advised. Definitely better for power consumption, but they took forever to spin up on access, to the point where it seemed like the NAS was always on the fritz.
Now I swear by WD Red. Much, much better (in my use case).
(I’m not sure how things pan out in SSD land though. Right now it’s just too pricey for me to consider.)
Exactly. The best solution is one that is simple, covers almost all scenarios and generally doesn’t require rethinking when new things come along.
I do wish the Apple stuff played a bit more nicely - my wife uses it and it’s honestly the biggest headache of the design.
- OneDrive/Google Drive for immediate stuff.
- Other stuff (too big for cloud services) goes from local machines to the Synology, or is simply served from the Synology.
- Cloud Sync from OneDrive/Google Drive to the Synology. (Periodically verify that things are actually synced; this is very important! A sketch of what I mean is below.)
- Snapshots on the Synology for local ‘oops’ recovery.
- Synology Hyper Backup to Wasabi for catastrophic recovery. (I used to use Glacier for this, but it was a bit unwieldy for the amount of money saved; I don’t have that much data.)
I’m aware that the loopback from OneDrive/Google Drive to the Synology doubles network traffic in the background but, again, I don’t have that much data, and a consistent approach makes things easier/safer in the long run. And with more than one computer sharing a cloud drive link, the redundancy/complexity problem is further diminished (let the cloud drive experts deal with race conditions and synchronization/concurrency fun).
This works because every computer I have can plug into the process. Everything ends up on the Synology (directly or via OneDrive/Google Drive), and everything ends up off-site at Wasabi.
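On the periodic-verification point: here’s a minimal sketch of the kind of spot-check I mean, assuming both copies are reachable as local paths (say, the OneDrive folder on a PC and the Synology share mounted over SMB). The script name and paths are placeholders, not anything official:

```python
# verify_sync.py: spot-check that two directory trees hold identical files.
import hashlib
import sys
from pathlib import Path

def tree_digests(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for path in root.rglob("*"):
        if path.is_file():
            # Reads each file whole; fine for a spot-check, chunk it for huge files.
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests

def main(a: str, b: str) -> None:
    left, right = tree_digests(Path(a)), tree_digests(Path(b))
    missing = left.keys() ^ right.keys()   # present in one tree but not the other
    changed = sorted(p for p in left.keys() & right.keys() if left[p] != right[p])
    for p in sorted(missing):
        print(f"MISSING: {p}")
    for p in changed:
        print(f"DIFFERS: {p}")
    if not missing and not changed:
        print("Trees match.")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run it as `python verify_sync.py ~/OneDrive /mnt/synology/onedrive` (or whatever your paths are) and chase down anything it flags before trusting the chain.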
I very rarely need to touch the Wasabi stuff (except to test, or because of the occasional boneheaded mistake I make while configuring things).
It’s a good model (for me), adapts well to almost every situation, and lets me control my data.
Note that if you want actual virtualization, then perhaps look at Proxmox (I’m not sure whether it manages multiple hypervisors; I haven’t obtained something to test it on yet). Portainer is best for Docker management (it, and its client agents, run as Docker containers themselves). Don’t forget to enable WebSockets if you’re proxying it.
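For reference, here’s roughly what the agent side and the proxy tweak look like. This is a sketch: verify the agent invocation and ports against the current Portainer docs, and the upstream name in the nginx block is a placeholder.

```
# Portainer agent on each remote Docker host (sketch; check the current docs)
docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent

# nginx: the WebSocket upgrade headers the Portainer console needs when proxied
location / {
    proxy_pass         http://portainer:9000;   # "portainer" is a placeholder upstream
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
}
```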