I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
Vyatta and Vyatta-based routers (EdgeRouter, etc.) I would say are good enough for the average consumer. If we're deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs. Tailscale, I think we're certainly past accepting a budget consumer router as acceptably meeting these and other needs.
Also, you don't need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on the network SSID. At home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.
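If anyone wants a starting point for that switching logic, here's a rough Python sketch of the idea. It assumes nmcli and wg-quick are available, and the SSID and profile names are placeholders, not my actual setup; hook it from a NetworkManager dispatcher script, a cron job, or whatever your platform offers.

```python
#!/usr/bin/env python3
# Rough sketch: toggle a WireGuard profile based on the current Wi-Fi SSID.
# Assumes nmcli and wg-quick exist on the system; the SSID and profile
# names below are placeholders.
import subprocess

HOME_SSID = "MyHomeWifi"   # placeholder
PROFILE = "wg-home"        # wg-quick profile that tunnels back via DDNS/port forward

def current_ssid():
    out = subprocess.run(
        ["nmcli", "-t", "-f", "ACTIVE,SSID", "dev", "wifi"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        active, _, ssid = line.partition(":")
        if active == "yes":
            return ssid
    return None

if current_ssid() == HOME_SSID:
    # On the home network: tear the tunnel down and route locally.
    subprocess.run(["wg-quick", "down", PROFILE], check=False)
else:
    # Away from home: bring the tunnel up over DDNS/port forwarding.
    subprocess.run(["wg-quick", "up", PROFILE], check=False)
```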
If you're really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself using an external VPS as a relay. That way you don't have to open anything directly, and internal traffic still routes when you don't have an internet connection at home. It's basically what Tailscale is, except in this case you control the keys and have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) does not have the same limitation. You should just deploy WireGuard yourself; it's not as scary as it sounds.
Fail2ban and containers can be tricky because, under the hood, Docker inserts its own chains ahead of host policies in iptables, so rules you add for host services never see published container traffic. The Docker documentation has a good write-up on how to solve it for their implementation (the DOCKER-USER chain):
https://docs.docker.com/engine/network/packet-filtering-firewalls/
For your use case specifically: if you're using VMs only, you could run it within any VM that is exposing traffic, but for containers you'll have to run fail2ban on the host itself. I'm not sure how LXC handles this, but I assume it's probably similar to Docker.
The simplest solution would be to just put something physically between your hypervisor and the Internet (a Raspberry Pi-based firewall, etc.).
+1 for cmk. Been using it at work for an entire data center + thousands of endpoints, and I also use it for my 3-server homelab. It scales beautifully at any size.
I've heard anecdotally that some 911 services were down in my area, but I can't speak to how widespread that was.
You would expose a single physical port to multiple VLANs (a trunk), and then bind multiple addresses to that one connected interface. Each service would then bind itself to the appropriate address, rather than "*".
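To make that last part concrete, here's a minimal Python sketch of the difference between binding to the wildcard and binding to one VLAN's address; the address and port are placeholders for whatever you assign per VLAN.

```python
#!/usr/bin/env python3
# Minimal sketch: bind a service to one VLAN's address instead of the wildcard.
# The address and port are placeholders.
import socket

VLAN10_ADDR = "192.168.10.5"   # address on the VLAN 10 subinterface (placeholder)

# sock.bind(("", 8080)) would answer on every address the host owns ("*").
# Binding to a specific address keeps the service reachable from that VLAN only.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((VLAN10_ADDR, 8080))
sock.listen()
print(f"Listening only on {VLAN10_ADDR}:8080")
```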
You should consider reversing the roles. There's no reason your homelab can't be the client and your VPS the server. Once the WireGuard virtual network exists, traffic doesn't really care which side was the client and which was the server. It saves you from opening a port to attackers on your home network.
There is also the argument that it's more complicated under the hood and harder to troubleshoot, particularly because of its inherent parallelism and dependency-tree design, whereas SysV init was inherently serial. It was much more straightforward to pick the order in which services started and shut down on a SysV init system.
For example, say I write a service and I want it to always be the first service stopped during a shutdown, with all other services waiting for it to stop before shutting down themselves. That was trivial to do on a SysV init system; it's basically impossible on systemd.
For those wondering, yes, I did run into this situation. My solution was clobbering the shutdown, poweroff, and restart binaries with scripts earlier in the PATH search that stop my service, verify that it has stopped, and then hook back into systemd to perform the power event.
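For the curious, the wrapper was roughly along these lines. This is a sketch, not the exact script; the service name and the path to the real binary are placeholders.

```python
#!/usr/bin/env python3
# Rough sketch of a `shutdown` wrapper placed earlier in the PATH.
# Stops one service first, verifies it stopped, then hands off to the real binary.
# The service name and binary path are placeholders.
import os
import subprocess
import sys
import time

SERVICE = "my-critical-service"        # placeholder
REAL_SHUTDOWN = "/usr/sbin/shutdown"   # absolute path avoids recursing into this wrapper

subprocess.run(["systemctl", "stop", SERVICE], check=True)

# Wait until systemd reports the unit inactive before continuing.
for _ in range(30):
    state = subprocess.run(["systemctl", "is-active", SERVICE],
                           capture_output=True, text=True).stdout.strip()
    if state in ("inactive", "failed"):
        break
    time.sleep(1)
else:
    sys.exit(f"{SERVICE} did not stop; aborting shutdown")

# Hand control back to the real shutdown with the original arguments.
os.execv(REAL_SHUTDOWN, [REAL_SHUTDOWN, *sys.argv[1:]])
```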
Sorry, I should have said "carbons and carbons-related QoL extensions".
Did you ever get carbons working properly? (As in, mobile and desktop clients of the same user both receiving messages and syncing read status between them.)
There are also full suites like Rancher, which will abstract away a lot of the complexity.
How has nobody in this thread said check_mk yet?
It's free and you host it yourself. It's built on Nagios, compatible with Nagios plugins, and supports SNMP or agent-based checks. It can email, SMS, Slack, or Discord you when something breaks, and you can write your own custom checks in any language that can output to a local console… I could never imagine even looking for something else.
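To give a sense of how simple a custom check can be: a Checkmk "local check" is just a script that prints one line per service in the form `<state> <service-name> <metrics or -> <summary>`. Here's a sketch in Python that you'd drop into the agent's local checks directory; the mount point, thresholds, and service name are made up for illustration.

```python
#!/usr/bin/env python3
# Sketch of a Checkmk local check: print one status line per service in the form
#   <state> <service-name> <metrics or -> <summary text>
# where state is 0=OK, 1=WARN, 2=CRIT, 3=UNKNOWN.
# The mount point, thresholds, and service name below are made up.
import shutil

MOUNT = "/srv/media"     # placeholder
WARN, CRIT = 80, 90      # percent used

usage = shutil.disk_usage(MOUNT)
pct = usage.used / usage.total * 100

if pct >= CRIT:
    state = 2
elif pct >= WARN:
    state = 1
else:
    state = 0

print(f"{state} Media_volume used_pct={pct:.1f} {MOUNT} is {pct:.1f}% full")
```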
Has XMPP figured out carbons between multiple clients yet? Also, are there any good mobile clients?
If one doesn't exist, it would seem to be a fairly straightforward (if a smidge tedious) thing to implement. Ever thought about learning web development?
A Raspberry Pi or Orange Pi could definitely run all of those things at very low power consumption.
AppArmor will complain and block the NFS mount unless you disable AppArmor for the container, and then in a lot of cases the container won't be able to stop itself properly. At least that was my experience.
Nobody should run k8s/k3s without understanding how they work lol, that’s a recipe for lost data.
Proxmox uses SCSI for disk images, which are single-access only.
SMB would add quite a lot of overhead, and it doesn't natively support Linux filesystem permissions. You'll also run into issues with any older programs that rely on file locks to operate. NFS would be a much more appropriate choice. That said, AppArmor in container images will usually prevent you from mounting remote NFS shares without jumping through hoops (hoops that are in your way for a reason). You'll be limited to doing that with virtual machines only, not OpenVZ/containerd.
Fun fact: it was literally the problem of sharing media storage between multiple workflows that got me to stop using virtual machines in Proxmox and start building custom Docker containers instead.
The best way to learn is by doing!