I use Wasabi S3, but only for my most critical data. For full backups including large media I set up an offsite NAS.
Regarding tooling, I'm really happy with Restic (coming from Borg).
I’m currently having a good experience with MikroTik. I think their products provide a good combination of features and pricing. There are a “CRS317-1G-16S+” and a “CSS326-24G-2S+RM” in my rack, and I have my eyes on the “CSS610-8P-2S+IN” as an efficient little PoE switch.
I haven’t used Ubiquiti, so I can’t compare the two brands.
For APs I’m currently using TP-Link Omada with a self-hosted Omada Controller, and for routing, DNS, firewall and the like I use OPNsense.
If you try to spin up multiple services but get stuck on creating a directory, you’re moving too fast. I think you’ll need to start a bit slower and more structured.
Learn how to do basic tasks in the terminal and a bit about how Linux works in general. There is a learning curve, but it will be fun! Then move on to Docker and get one service up and running. Go on from there with everything you learned along the way and solve the other problems you’ll encounter - one at a time.
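To make that concrete, the first steps might look something like this (the paths and the image are just examples, pick whatever you like):

```shell
#!/bin/sh
# A few terminal basics to get comfortable with:
pwd                       # where am I?
ls -la                    # what's here, including hidden files?
mkdir -p ~/homelab/test   # create a directory (the task you got stuck on)
cd ~/homelab/test

# Then one containerized service as a throwaway exercise, e.g. a web server:
#   docker run -d --name test-web -p 8080:80 nginx:1.25
# Visit http://localhost:8080, then tear it down with:
#   docker rm -f test-web
```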
Do you want to build one yourself or are you mainly interested in off-the-shelf solutions? What’s your budget? Do you run your services as containers? Do you need hardware acceleration for streaming with Jellyfin/Plex?
I would like to have some redundancy from the start. Can I use the hard drives as they are, or will I have to do something to them besides adding more drives?
Why do you want redundancy? To keep your data available or to keep your data safe?
Does it have to be by monitoring emails or do you have control over the backup script? I’m using Uptime Kuma to monitor my backups via push monitors. My backup scripts call a webhook to indicate success or failure. If the webhook isn’t called for X hours, the backup is also marked as failed. Works really well.
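As a rough sketch, my wrapper scripts follow this pattern (the push URL and the backup command are placeholders, assuming a standard Uptime Kuma push monitor):

```shell
#!/bin/sh
# Placeholder — copy the real push URL from your monitor's settings page.
PUSH_URL="https://kuma.example.com/api/push/abc123"

# Compose the push URL for a given status ("up"/"down") and message.
build_push_url() {
    printf '%s?status=%s&msg=%s' "$PUSH_URL" "$1" "$2"
}

# Run the backup and report the result. If no ping arrives for X hours,
# Uptime Kuma flips the monitor to "down" on its own.
run_and_report() {
    if restic backup /data; then   # replace with your actual backup command
        curl -fsS -o /dev/null "$(build_push_url up OK)"
    else
        curl -fsS -o /dev/null "$(build_push_url down backup-failed)"
    fi
}
```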
Works fine for me with Firefox. Cookies? Browser extensions?
A great investment! Just a few nights ago my power died three times for a few seconds while my NAS was in a degraded state and resilvering. My UPS saved my ass.
Nice rack btw!
Since you’re already familiar with a Debian-based distro, switching to the OG Debian would be an option.
Yep, I couldn’t run half of the services in my homelab if they weren’t containerized. Running random, complex installation scripts and maintaining multiple services installed side-by-side would be a nightmare.
TIL that a Proxmox app exists. Thanks!
I’m not using Authelia myself but I don’t think you’d need to run beta releases to get security patches.
Interesting - I didn’t bother to set the X-Real-IP headers until now and this might speed up my instance too. Thanks!
Then I wondered: what if the program is “smart” and throttles requests by itself, without any warning to the admin, if it thinks that an IP address is sending too many requests?
The word you’re looking for is “Rate Limit(ing)” and according to the documentation you could also disable it completely.
But I guess the cleanest and most secure solution would be to just set the headers on the reverse proxy.
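For reference, on nginx that would look roughly like this inside the location block that proxies to the service (the backend address is a placeholder):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # placeholder backend address

    # Pass the real client address through to the service so it rate-limits
    # per client instead of seeing every request come from the proxy.
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host              $host;
}
```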
I upload encrypted daily snapshots to a bucket in the cloud using restic.
How do you upload a snapshot? I’m using TrueNAS, where I can make snapshots visible in an otherwise hidden .zfs directory. Do you just back up from there or something similar? Is there an upside to backing up a snapshot instead of just the current data?
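Here’s my guess at what that would look like on my box (pool, dataset and snapshot names are made up):

```shell
#!/bin/sh
# Hypothetical dataset mount point and snapshot name — adjust to your layout.
DATASET_MOUNT="/mnt/tank/data"
SNAPSHOT_NAME="daily-2024-01-01"

# ZFS exposes read-only snapshots under the hidden .zfs directory.
SNAPSHOT_PATH="$DATASET_MOUNT/.zfs/snapshot/$SNAPSHOT_NAME"

# Backing up the snapshot instead of the live dataset would give the backup
# tool a frozen, consistent view of the data, e.g.:
#   restic -r s3:s3.wasabisys.com/my-bucket backup "$SNAPSHOT_PATH"
echo "$SNAPSHOT_PATH"   # → /mnt/tank/data/.zfs/snapshot/daily-2024-01-01
```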
Do yourself a favor and do not use the latest tag. Always use the tag for the explicit version you want. Makes things a lot more stable.
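For example, in a Compose file (the version number is just an example - check what’s current before pinning):

```yaml
services:
  jellyfin:
    # Pinned to an explicit release instead of the moving "latest" tag,
    # so updates only happen when you change this line deliberately.
    image: jellyfin/jellyfin:10.8.13
```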
You also mentioned that Jellyfin prompted you with the installation wizard after the update. It seems like you fixed your problem, but for next time: this is not expected behaviour. A short downtime right after the new container starts is normal, but afterwards the server should run with the same configuration as before.