  • That’s not the case, you just need to be able to make an outbound connection.

    The minutiae of how certbot works, or whether that specific person actually did it right or wrong, are kind of beside the point of my “intended to be funny but seemingly was not” comment about how sometimes the easiest solution to implement is the one you remember, even if it’s overkill for the immediate problem.



  • This is confusing to me, because the point of the request seems to be “get a certificate”, not “get a self-signed certificate generated by running the openssl command”. If you know how to get the result, it doesn’t really matter whether what you remembered offhand was the shitty way or the overkill way.

    Is it really more helpful to say “I remember how to do this, but let me look up a different way that doesn’t use the tools I’m familiar with”?


  • Do you think that, in this example, using certbot is fucking shit up, or breaking something?

    The thing about overkill is that it does work. If you’re accustomed to using a solution in a professional setting, it’s probably both overkill and also vastly more familiar than the bare minimum required for a class project that would be entirely unacceptable in a professional setting.

    In OP’s anecdote, they did get their certificates, so I don’t quite see how your “intentionally fucking things up” claim describes what happened.


  • I’ll be honest: I’ve had times where there’s the “simple” solution and “the solution I remember off the top of my head”, and 10 times out of 10 the one that happens is the one I remember, because I just did it last week.

    I have no desire to google the arguments for self-signing a cert with openssl, and I cannot remember which webserver wants the CA bundle and the public cert in the same file. If I had done it even kinda recently, I’d still remember what to poke in the certbot config.
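
    For the record, the openssl one-liner I’d have to go look up is roughly this (flags from memory, so double-check before trusting it):

        openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
          -keyout key.pem -out cert.pem -subj "/CN=localhost"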




  • ricecake@sh.itjust.works to linuxmemes@lemmy.world · systemdeez nuts · 3 months ago

    No, not everyone thinks it’s a bad thing. It is, however, infectious, which is a reason some people don’t like it.

    Knowing why people dislike something isn’t the same as thinking it’s the worst thing ever, and liking something doesn’t mean you can’t acknowledge its defects.

    I think it’s a net benefit, but that it would be better if they had limited the scope of the project a bit, rather than trying to put everything in the unit system.


  • ricecake@sh.itjust.works to linuxmemes@lemmy.world · systemdeez nuts · 3 months ago

    It’s that it also decided to take over log management, event management, networking, DNS resolution, etc, etc.

    If it were just an init system, it would be perfectly portable. People were able to write software that way with sysv for years.

    It’s that in order to do certain low-level tasks on a systemd system, you need to integrate with systemd, not just “be started by it”. Now if a distro wants that piece of software, it needs to use systemd, and other pieces of software that want to be on that distro need to implement systemd integration.
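
    As one small illustration of that coupling (a sketch; “myapp” is made up): a Type=notify unit isn’t considered started until the daemon itself speaks systemd’s readiness protocol, so the daemon has to link against libsystemd, or reimplement sd_notify over NOTIFY_SOCKET, rather than just daemonize the way it would under sysv.

        # myapp.service ("myapp" is a hypothetical daemon)
        [Service]
        Type=notify
        # the daemon must call sd_notify("READY=1") before systemd
        # considers the unit started
        ExecStart=/usr/local/bin/myapp
        # and must keep pinging the watchdog, or be killed and restarted
        WatchdogSec=30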

    A dependency isn’t infectious, but a dependency you can’t easily swap out is, particularly if it’s positioned near the base of a dependency tree.

    Almost all of my software can run on x86 or arm without any issues beyond changing compiler targets. It’s closer to how it’s tricky to port software between Mac and Linux, or Linux and BSD. Targeting one platform entails significant, potentially prohibitive, effort to support another, despite them all being ostensibly compatible unix like systems.


  • ricecake@sh.itjust.works to linuxmemes@lemmy.world · systemdeez nuts · 3 months ago

    It’s also “infectious” software. The way systemd positions itself on the system can make it more difficult for software to be written in an agnostic way. This doesn’t affect all software, and it’s more often a complaint from lower-level software, like desktop environments.
    https://catfox.life/2024/01/05/systemd-through-the-eyes-of-a-musl-distribution-maintainer/ This isn’t a terrible summary of some of the aspects of it.

    Another aspect is that when it was first developed, the lead on the project was exceptionally hostile to anyone who didn’t immediately agree that systemd definitely should take over most of the system, often criticizing people who pointed out bugs or questionable design decisions as being afraid of change or relics of the past.
    It’s more of a social reason, but if people feel like the developer of a tool they’re forced to use doesn’t even respect their concerns, they’re going to start rejecting the tool.


  • That’s totally fair. Most of my “caring about swap” time was when I was managing servers, where you wouldn’t have inactive apps to get swapped out, so swap usage was a sign that you needed to stand up a new server and put down the old one.

    Turns out I don’t monitor my home computer the way I monitor the work ones. :)


  • So, swap is when the computer writes in-use memory out to slow, long-term storage because of memory pressure.
    Cache is when the OS sticks bits of recently used files into otherwise-unused memory so they can be read back faster.

    Using swap is a sign you need more RAM. Using cache is harmless, and the OS will try to fill all free memory with cached files, because in the worst case there’s no cost: it just evicts them when the memory is needed.
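
    On Linux, free shows both side by side (the numbers here are just illustrative):

        $ free -h
                       total        used        free      shared  buff/cache   available
        Mem:            15Gi       4.2Gi       1.1Gi       310Mi        10Gi        10Gi
        Swap:          8.0Gi          0B       8.0Gi

    A big buff/cache number is normal and healthy; swap “used” steadily climbing is the one to worry about.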



  • It’s not a simple task, so I won’t try to list everything; a few specifics first, then general principles.

    First, some specifics:

    • disable remote root login via ssh.
    • disable password login, and only permit ssh keys.
    • run fail2ban to lock people out automatically.
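
    For reference, the sshd part is two lines in /etc/ssh/sshd_config, and fail2ban ships with an sshd jail out of the box (Debian/Ubuntu commands shown; adjust for your distro):

        # /etc/ssh/sshd_config
        PermitRootLogin no
        PasswordAuthentication no

        # then, in a shell:
        systemctl reload sshd
        apt install fail2ban
        systemctl enable --now fail2ban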

    Generally:

    • only expose things you must expose. It’s better to do things right and securely than easily. Exposing a web service requires you to expose port 443 (https); basically everything else is optional.
    • enable every security system that you don’t have a reason to disable. SELinux giving you problems? Don’t turn it off; learn how to write rules that let your application do the specific things it needs (sketch after this list). Likewise, only make firewall exceptions where needed rather than disabling the firewall.
    • give system users the minimum access they require to function.
    • set folder permissions as restrictively as possible. File ACLs help here, because they let you be much more nuanced.
    • automate updates. If you have to remember to do it, it won’t happen, and unautomated updates mean out-of-date software.
    • consider setting up a dedicated authentication service like Authelia or Keycloak. Applications tend to, frankly, suck at security; auth isn’t what they’re making, so it’s rarely as good as a dedicated service. There are other follow-on benefits too.
    • if it supports two-factor, enable it.
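
    To make a couple of those concrete (SELinux tooling from the RHEL family, updates from the Debian family; “myapp” is a placeholder):

        # SELinux: build an allow rule from the actual denials instead of disabling it
        ausearch -m avc -ts recent | audit2allow -M myapp
        semodule -i myapp.pp

        # firewall: open only what you need, e.g. just HTTPS
        firewall-cmd --permanent --add-service=https
        firewall-cmd --reload

        # automatic updates on Debian/Ubuntu
        apt install unattended-upgrades
        dpkg-reconfigure --priority=low unattended-upgrades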

    You mentioned using Cloudflare, which is good. You might also consider configuring your firewall to disallow outbound connections from the server to the rest of your local network. That way, if your server gets owned, it can’t be used to poke at other things on your network.
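
    On the server itself, that can be as simple as this iptables sketch (assumes your LAN is 192.168.1.0/24 with the gateway at 192.168.1.1; adjust to your network):

        iptables -A OUTPUT -d 192.168.1.1 -j ACCEPT    # still allow reaching the gateway
        iptables -A OUTPUT -d 192.168.1.0/24 -j DROP   # but nothing else on the LAN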



  • So, you’re going to run into some difficulties, because a lot of what you’re dealing with is, I think, specific to CasaOS, which makes it harder to know what’s actually happening.

    The way you’ve phrased the question makes it seem like you’re following a more conventional path.

    It sounds like maybe you’ve configured your public traffic to route to the Nginx Proxy Manager admin interface instead of to nginx itself.
    Instead of having your router send traffic on 80/443 to port 81, have it send that traffic to 80/443, which nginx itself should be listening on.
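
    If you’re running Nginx Proxy Manager under Docker, the usual compose mapping looks something like this (a sketch; match it to your actual setup):

        services:
          npm:
            image: jc21/nginx-proxy-manager:latest
            ports:
              - "80:80"     # HTTP, answered by nginx itself
              - "443:443"   # HTTPS, answered by nginx itself
              - "81:81"     # admin UI only; don't forward this from your router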

    Systems that promise to manage everything for you are great for getting started fast, but they have the unfortunate side effect of making it so you don’t actually know what it’s doing, or what you have running to manage everything. It can make asking for help a lot harder.


  • You’ll be fine enough as long as you enable MFA on your NAS, and ideally configure it so that anything “fun”, like administrative controls or remote access, is only available on the local network.

    Synology has sensible defaults for security, for the most part. Make sure you have automated updates enabled, even for minor updates, and ensure it’s configured to block multiple failed login attempts.

    You’re probably not going to get hackerman poking at your stuff, but you will get bots trying to ssh in and to log in to the WordPress admin console, even if you’re not using WordPress.

    A good rule of thumb for securing computers is to minimize access/privilege/connectivity.
    Lock everything down as far as you can, turn off everything that makes it possible to access it, and enable every tool for keeping people out or dissuading attackers.
    Now you can expose port 443 on your NAS publicly, and only that port, because you don’t need anything else.
    You can have your router forward only port 443 to your NAS.

    It feels silly to say, but sometimes people think “my firewall is getting in the way, I’ll turn it off”, or “this one user needs read access to one file, so I’ll give every user on the system read/write/execute on this folder and every subfolder”.

    So as long as you’re basically sensible and use the tools available, you should be fine.
    You’ll still poop a little the first time you see that 800 bots tried to break in. Just remember that they’re already trying that now; there’s just nothing listening to write down the attempts.

    However, the person who suggested putting Cloudflare in front of GitHub Pages and using something like Hugo gave a great example of “opening as few holes as possible” and “using the tools available”.
    It’s what I do for my static sites, like my recipes and stuff.
    You can get a GitHub action configured that’ll compile the site and deploy it whenever a commit happens, which is nice.
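
    A minimal workflow along those lines looks roughly like this (action versions from memory; check their docs for current ones):

        # .github/workflows/deploy.yml
        name: deploy
        on:
          push:
            branches: [main]
        jobs:
          build:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v4
              - uses: peaceiris/actions-hugo@v2
                with:
                  hugo-version: 'latest'
              - run: hugo --minify
              - uses: peaceiris/actions-gh-pages@v3
                with:
                  github_token: ${{ secrets.GITHUB_TOKEN }}
                  publish_dir: ./public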


  • Oh, the system is totally pushing everyone to try to be the worst person possible.
    However, they might not actually be out-competed if they’re not being as exploitative as possible. If they’re not charging as much as the market will tolerate, they’re being “inefficient”, but in a way that costs profit while attracting consumers.
    I can think of exactly one billionaire who might not be a problem, and that’s what they did: $1 for a year of access, sold to a few billion people, with something like 50 employees.

    It’s why the billionaires who shaft consumers and their workers are so gross. Reducing profit margins doesn’t impact efficiency; it only reduces the money going into their already overstuffed pockets.