• 3 Posts
  • 28 Comments
Joined 1 year ago
Cake day: July 31st, 2023

  • Podman is not yet ready for mainstream, in my experience

    My experience varies wildly from yours, so please don’t take this bit as gospel.

    I have yet to find a container that doesn’t work perfectly well in podman. The options may not be exactly the same as docker’s, but most issues I’ve found with running containers boil down to things that would be equally a problem in docker. A sample:

    • “rootless” containers are hard to configure. The problems can almost always be fixed with “--privileged” or some combination of permission flags. This would be equally true for docker; the only meaningful difference is that podman tries to push everything toward rootless. You don’t have to go along with that.
    • network filesystems cause headaches, especially smbfs under an app that uses sqlite. I’ve had to use NFS, or an ext4 filesystem inside a network-mounted image, for some apps. This problem is identical for docker.
    • container networking, for specific cases, needs to be managed carefully. These cases are identical for docker.

    And that’s it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can’t do with docker. If you are having any trouble with running containers under podman, try the --privileged shortcut, see that it works, and then double back if you think you really need rootless.
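
    Roughly what that workflow looks like, as a sketch (the image, name, and port are just examples, and the generated unit is abbreviated from memory rather than exact podlet output):

      # run it once the normal way
      podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1

      # feed the same command to podlet to get a quadlet unit out of it
      podlet podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1

      # which prints something along these lines, ready to drop into
      # ~/.config/containers/systemd/uptime-kuma.container:
      #
      #   [Container]
      #   ContainerName=uptime-kuma
      #   Image=docker.io/louislam/uptime-kuma:1
      #   PublishPort=3001:3001

    From there it’s a systemctl --user daemon-reload and systemctl --user start uptime-kuma, like any other unit.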


  • I haven’t deployed Cloudflare but I’ve deployed Tailscale, which has many similarities to the CF tunnel.

    • Is the tunnel solution appropriate for Jellyfin?

    I assume you’re talking about speed/performance here. Most of the tunnel’s overhead is a one-time cost when the connection is established, and it’s not much. In the case of Tailscale there’s additional wireguard encryption overhead on active connections, but it remains fast enough for high-bandwidth video streams. (I download torrents over wireguard, and they come down much faster than realtime.) Cloudflare’s solution only adds TLS encryption to their edge, and everything these days uses TLS; you don’t have to sweat that performance-wise.

    (You might want to sweat a little over the fact that cloudflare terminates TLS itself, meaning your data is transiting its network without encryption. Depending on your use case that might be okay.)

    • I suppose it’s OK for vaultwarden as there isn’t much data being transferred?

    Performance wise, vaultwarden won’t care at all. But please note the above caveat about cloudflare and be sure you really want your vaultwarden TLS terminated by Cloudflare.

    • Would it be better to run nginx proxy manager for everything or can I run both of the solutions?

    There’s no conflict between the two technologies. A reverse proxy like nginx or caddy can run quite happily inside your network, fronting all of your homelab applications. Think of a reverse proxy as just a special website that branches out to every other website. With that model in mind, the tunnel provides access to the reverse proxy, and the reverse proxy provides access to everything else on its own. This is what I’m doing with tailscale and caddy.
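
    As a sketch of that model (hostnames and upstream ports here are made up, not anything specific to your setup), a Caddyfile fronting a couple of apps is about this much:

      # one reverse proxy, branching out to each app behind it
      jellyfin.example.com {
          reverse_proxy 127.0.0.1:8096
      }

      vault.example.com {
          reverse_proxy 127.0.0.1:8080
      }

    The tunnel then only needs to reach the proxy, and the proxy handles routing to everything behind it.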

    • General recs

    Consider tailscale? Especially if you’re using vaultwarden from outside your home network. There are ways to set it up like cloudflare, but the usual way is to install tailscale on the devices you are going to use to access your network. Either way it’s fully encrypted in transit through tailscale’s network.
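
    If you do go that way, the setup is small; this is just the generic pattern from Tailscale’s docs, with a made-up machine name:

      # on the server and on each device you'll connect from
      curl -fsSL https://tailscale.com/install.sh | sh
      sudo tailscale up

      # after that, every device on the tailnet can reach the server directly,
      # by its MagicDNS name (homelab here) or its 100.x.y.z tailnet address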





    1. Seems like a very reasonable objection to me. I’d guess that most of us Immich users are using it in the first place because it improves the privacy of our photos, and a third party seeing our location data certainly undermines that.
    2. I would have complained had I noticed, so you might be the first one to notice. Immich’s userbase isn’t huge right now, it’s definitely possible.
    3. Featurewise, I’d like: a) a clearly documented way to disable map data leaving my server; b) a set of well-integrated choices (maybe even just two, as long as one of them is something like openstreetmap); c) the current configurability to be well documented.
    4. I’d love it if all such outbound data streams were also documented. Many security- and privacy-focused products give you a “quiet” mode of some kind, where you can turn off everything that sends your data somewhere else. It’s a requirement in many enterprise installations.


  • xantoxis@lemmy.world to Selfhosted@lemmy.world: Nginx 502, ssh not working.

    Some troubleshooting thoughts:

    What do you mean when you say SSH is “down”:

    1. Connection refused (fail2ban’s activity could result in a connection refused, but a VPN should have avoided that problem, as you said).
    2. Connection timeout. Probably a failure at the port forwarding level.
    3. Connection succeeded but closed. This can happen for a few reasons, such as the system being in an early boot-up state; there’s usually a message in this case.
    4. Connection succeeded but auth rejected. This can happen if your OS failed to boot but came up in a fallback state of some kind.

    Knowing which one of these it is can give you a lot more information about what’s wrong:

    System can’t get past initial boot = Maybe your NAS is unplugged? Maybe your home DNS cache is down?

    Connection refused = either fail2ban, or possibly your home IP has moved and you’re trying to connect to somebody else’s computer (nginx is very popular, after all; it’s not impossible somebody else at your ISP has it running). This can also be a port forwarding failure, i.e. something’s wrong with your router.

    Connection succeeded + closed is similar to “can’t get past initial boot”.

    Auth rejected might give you a fallback option if you can figure out a default username/password, although you should hope that’s not the case because it means anyone else can also get in when your system is in fallback.
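
    Two quick commands make it easy to tell those cases apart from wherever you are (the hostname is a placeholder):

      # verbose ssh shows exactly where the attempt stalls or gets rejected
      ssh -v -o ConnectTimeout=10 you@your-home-host.example.com

      # a bare TCP check distinguishes "refused" from a silent timeout
      nc -vz -w 10 your-home-host.example.com 22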

    Very few of these things are actually fixable remotely, btw. I suggest having your sister unplug everything related to your setup, one device at a time. Internet router, raspberry pi, NAS, your VM host, etc. Make sure to give them a minute to cool down. Hardware, particularly cheap hardware, tends to fail when it gets hot, and this can take a while to happen, and, well, it’s been hot.

    Here are a few things with a high likelihood of failing when you’re away from home (quick checks for several of these are sketched after the list):

    • heat, as previously mentioned.
    • running out of disk space. Maybe you’re logging too much, throw some more disk in there and tune down the logging. This can definitely affect SSH, and definitely won’t be fixed by a reboot.
    • OOM failures (or other resource leaks). This isn’t likely to affect your bare metal ssh, but it could. Some things leak memory, and this can lead to cascading process destruction by the OS. In this scenario you’d probably be able to connect to things in the first few minutes after a reboot, though.
    • shitty cabling. Sometimes stuff just falls out of the socket, if it wasn’t plugged in perfectly to begin with. (Heat can also contribute to this one.)
    • reliance on a cloud service that’s currently down. (This can include: you didn’t pay the bill.) Hopefully your OS boot doesn’t fail due to a cloud service, but I’ve definitely seen setups that could.
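
    Once you can get any kind of shell again, a few quick checks that cover most of the list above:

      df -h                                                # out of disk space?
      free -m                                              # memory pressure right now?
      journalctl -k | grep -i -e oom -e "out of memory"    # did the OOM killer fire?
      sensors                                              # temperatures, if lm-sensors is installed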












  • Well, people use ansible for a wide variety of things, so there’s no straightforward answer. It’s a Python program; it can in theory do anything, and you’ll find people trying to do anything with it. That said, some common ways to replace it include:

    • You need terraform or pulumi or something for provisioning infrastructure anyway, so a ton of stuff can be done that way instead of using ansible. Infra tools aren’t really the same thing, but there are definitely a few neat tricks you can do with them that might save you from reaching for ansible.
    • Kubernetes + helm is a big bear to wrestle, but if your company is also a big bear, it’s worth doing. K8s will also solve a lot of the same problems as ansible in a more maintainable way.
    • Containerization of components is great even if you don’t use kubernetes.
    • If you’re working at the VM level instead of the container level, cloud-init can allow you to take your generic multipurpose image and make it configure itself into whatever you need at boot (a minimal sketch follows this list). Teams sometimes use ansible in the cloud-init architecture, but it’s usually doing only a tiny amount of localhost work and no dynamic inventory in that role, so it’s a lot nicer there.
    • Maybe just write a Python program or even a shell script? If your team has development skills at all, a simple bespoke tool to solve a specific problem can be way nicer.
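
    To make the cloud-init option concrete, here’s a minimal user-data sketch (the package and file are placeholders, not a recommendation):

      #cloud-config
      # the image configures itself at first boot; nothing has to reach in from outside
      packages:
        - nginx
      write_files:
        - path: /etc/myapp/config.yaml
          content: |
            environment: production
      runcmd:
        - systemctl enable --now nginx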

  • Really, all of these have solutions, but they’re constantly biting you, slowing down development, and requiring people to be trained and retrained on the gotchas. So it’s not that you can’t make it work; it’s that the cost of keeping it working eats away at all the productive things you could be doing, and that problem accelerates.

    The last bullet is perhaps unfair; any decent system would be a maintainable system, and any unmaintainable system becomes less maintainable the bigger your investment in it. Still, it’s why I urge teams to stop using it as soon as they can, because the problem only gets worse.


  • Sure, I mean, we could talk about

    • Dynamic inventory on AWS means the ansible interpreter will end up with three completely separate sets of hostnames for your architecture, not even including the actual DNS name. If you also need dynamic inventory on GCP, that’s another three completely different sets of hostnames, i.e. they are derived from different properties of the instances than the AWS names.
    • Btw, those names are exposed to the ansible runtime graph via different variables, i.e. ansible_inventory vs some other thing, based on who even fuckin knows, but sometimes the way you access the name will completely change from one role to the next.
    • ansible-vault’s semantics for when things can be decrypted and when they can’t lead to completely nonsensical solutions, like a yaml file with normal contents where individual strings are encrypted and base64-encoded inline within the yaml while others are not (there’s a sketch of what that looks like at the end of this list). This syntax doesn’t work everywhere. The opaque contents of the encrypted strings can sometimes be treated as traversable yaml and sometimes cannot be.
    • ansible uses the system python interpreter, so if you need it to do anything that uses a different Python interpreter (because that’s where your apps are installed), you have to force it to switch back and forth between interpreters. Also, the python setting in ansible is global to the interpreter meaning you could end up leaking the wrong interpreter into the role that follows the one you were trying to tweak, causing almost invisible problems.
    • ansible output and error reporting is just a goddamn mess. I mean look at this shit. Care to guess which one of those gives you a stream which is parseable as json? Just kidding, none of them do, because ansible always prefixes each line.
    • Tags are a joke. Do you want to run just part of a playbook? --start-at-task, or --tags. But oops, because not every single task in your playbook is idempotent, that will not work, ever, because something was supposed to happen earlier on that didn’t. So if you start at a particular task, or run only the tasks that have a particular tag, your playbook will fail. Or worse, it will work, but it will work completely differently than in production because of some value that leaked into the role you were skipping into.
    • Last but not least, using ansible in production means your engineers will keep building onto it, making it more and more complex, “just one more task bro”. The bigger it gets, the more fragile it gets, and the more all of these problems rear their heads.
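
    For anyone who hasn’t hit the inline-vault pattern from a few bullets up, it looks roughly like this (the ciphertext is a truncated placeholder; real ansible-vault encrypt_string output is many lines of hex):

      # group_vars/all.yml: plain values and encrypted values interleaved in one file
      db_user: app
      db_password: !vault |
                $ANSIBLE_VAULT;1.1;AES256
                3031323334353637...   # placeholder ciphertext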