I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works: although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • 520@kbin.social · 11 months ago

    It’s very, very useful.

    For one thing, it’s a ridiculously easy way to get cross-distro support working for whatever it is you’re doing, no matter the distro-specific dependency hell you’d otherwise have to crawl through to get it set up.
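
    To make that concrete, here’s a minimal sketch using the official nginx image purely as a stand-in for whatever service you actually want to run; the exact same commands work on Debian, Fedora, Arch, or anything else with Docker installed:

        # Pull the image and start the service in one go; the image carries
        # its own userland and dependencies, so the host distro doesn't matter.
        docker run -d --name web -p 8080:80 nginx

        # Confirm it's running and peek at its logs.
        docker ps
        docker logs web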

    For another, closely related reason, it’s an easy way to build for specific distros and distro versions, especially in an automated fashion. You don’t have to fuck around with dual booting or VMs; just use a Docker command to fire up the needed image and do what you gotta do.
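
    For example (the ./build.sh script below is just a placeholder for whatever your actual build step is), you can drop into a particular distro release, or run a one-off scripted build inside it, without touching a VM:

        # Interactive shell in a throwaway Ubuntu 22.04 container, with the
        # current directory mounted at /src.
        docker run --rm -it -v "$(pwd)":/src -w /src ubuntu:22.04 bash

        # Same idea non-interactively, e.g. for an automated build.
        docker run --rm -v "$(pwd)":/src -w /src debian:12 ./build.sh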

    Cleanup is ridiculously easy too. Completely uninstalling a service that runs in Docker is just a matter of removing its containers and the image they were created from.
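
    A minimal sketch, assuming a container named “web” created from the nginx image above:

        docker rm -f web      # stop and remove the container
        docker rmi nginx      # remove the image it was created from
        docker volume ls      # list any named volumes you may also want to remove

    One caveat: named volumes and bind-mounted host directories survive container removal, so check those separately if you want every trace gone.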

    A few security rules you should bear in mind (example commands follow the list):

    1. expose only what you need to. If what you’re doing doesn’t need a network port, don’t provide one. The same is true for files on your host OS, RAM, CPU allocation, etc.
    2. never use privileged mode. Ever. If you think you need privileged mode, you are doing something wrong. Privileged mode exposes everything on the host to the container and leaves your machine ripe for being compromised, as root if you’re running standard Docker.
    3. consider podman over docker. The former runs containers without a root daemon and can run them entirely as an unprivileged user.
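
    To make those rules concrete, here’s a hedged sketch; the image names, paths, and resource limits are made up, but every flag is standard:

        # Rule 1: expose only what's needed. Bind the port to localhost only,
        # mount the config read-only, and cap RAM/CPU.
        docker run -d --name web \
          -p 127.0.0.1:8080:80 \
          -v "$HOME/web-conf":/etc/nginx/conf.d:ro \
          --memory 256m --cpus 1 \
          nginx

        # Rule 2: if something claims to need --privileged, grant one specific
        # capability instead (the image name here is hypothetical).
        docker run -d --cap-add NET_ADMIN example/vpn

        # Rule 3: podman takes the same arguments but runs rootless by default.
        podman run -d --name web -p 127.0.0.1:8080:80 --memory 256m nginx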