• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: August 27th, 2023

  • The backup and easy setup on other servers is not necessarily super useful for a homelab, but it’s a huge selling point at the enterprise level. You can make a VM template of your host with Docker set up in it, containing your Compose definitions but no actual data (there’s a sketch below). Then spin up as many of those as you want and they’ll just download what they need to run the images. Copying VMs with all the images baked into them takes much longer.

    And regarding the memory footprint, you can get that even lower using Podman because it’s daemonless. It is a little more work to set up auto-start, because you have to hook containers into systemd yourself, but it’s still a great option; it also works on Windows and can parse Compose configs too. Just running Docker Desktop on Windows takes up about 1.5 GB of memory for me, but I still prefer it because it has some convenient features.
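
    As a rough illustration of the “definitions but no data” idea, the VM template would only carry something like the Compose file below; the service name, image, and paths are placeholders, not anything from the original comment:

    ```yaml
    # Hypothetical compose.yaml baked into the VM template: image references
    # and mount points only, no application data. Every clone pulls the
    # images itself on the first "docker compose up".
    services:
      web:
        image: nginx:1.27                        # example image; use whatever you actually run
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - ./site-data:/usr/share/nginx/html   # data stays outside the template
    ```

    And for the Podman auto-start point, one way to wire a container into systemd (assuming an already-created container named myapp, and a Podman version that still ships `podman generate systemd`; newer releases prefer Quadlet files) is roughly:

    ```sh
    # Sketch: generate and enable a user-level unit for the container "myapp".
    podman generate systemd --new --name myapp \
      > ~/.config/systemd/user/container-myapp.service
    systemctl --user daemon-reload
    systemctl --user enable --now container-myapp.service
    ```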

  • Yes, that’s correct. And containers are not really portable in the way you described. They do have a mini OS in them, but their state is not saved when they are “offline”. So you can think of it more as a template, called an image. You can save an image to a file, move that to a new PC, and load it back into Docker, but that’s usually unnecessary: as long as you have Internet access, you just need to know the name of the image and Docker will download it if it doesn’t already have it.

    For most popular programs you’ll find an image already published, so just follow the instructions on what settings to use for things like volume mounts and environment variables. That configuration can be saved into a Docker Compose file for easy reuse, instead of typing a really long command line to run your container. The Compose file is all you really need to move to the other PC, and it’ll just download your image and run everything as before.
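
    As a concrete sketch (the Nextcloud image and paths here are just examples, not something from the thread), the “save the image to a file” route versus the usual pull-by-name route looks roughly like this:

    ```sh
    # Manual route: export the image on the old PC, copy the tar over,
    # and import it on the new one.
    docker save -o nextcloud.tar nextcloud:latest
    docker load -i nextcloud.tar

    # Usual route: don't ship the image at all. Reference it by name and
    # Docker pulls it the first time you run it. This is the "really long
    # command line" that a Compose file replaces.
    docker run -d --name nextcloud -p 8080:80 \
      -v "$PWD/nextcloud:/var/www/html" nextcloud:latest
    ```

    And the same settings captured in a Compose file, so the long command never has to be retyped:

    ```yaml
    # compose.yaml (hypothetical; same settings as the docker run line above)
    services:
      nextcloud:
        image: nextcloud:latest
        ports:
          - "8080:80"
        volumes:
          - ./nextcloud:/var/www/html
    ```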


  • The data that your software interacts with is external to a container, but it can be mapped into the container’s file system so the software can work with it. For example, your Minecraft server relies on files on disk: the world state and the server config files (and plugins, if you have any). Those still live on your host system, not inside the container. To transfer this to another system, you take those files from your host and move them to the new host, then copy your Docker config files over too, so you can start a new container that points at your files on disk (there’s a Compose sketch below). It should then function the same as it did before, though possibly with a different IP address.

    An easy way to think of it is that the container’s own filesystem is disposable. Changes you make inside a container live only in a thin writable layer on top of the image, and that layer is thrown away when the container is removed and recreated, for example when you pull a newer image and bring the stack up again. If you go into a container, create some files, and then recreate it, those files are gone. Any data that needs to persist has to live on your host filesystem and be volume-mounted into the container.
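
    A sketch of what that separation looks like for a Minecraft server, assuming the popular itzg/minecraft-server community image (the host path is just an example):

    ```yaml
    # compose.yaml: the world, server config, and plugins live in
    # ./minecraft-data on the host. Migrating the server means copying this
    # file plus that directory to the new machine and running "docker compose up -d".
    services:
      minecraft:
        image: itzg/minecraft-server
        environment:
          EULA: "TRUE"                 # this image refuses to start until the EULA is accepted
        ports:
          - "25565:25565"
        volumes:
          - ./minecraft-data:/data     # host directory mapped into the container
        restart: unless-stopped
    ```

    And a quick way to see that the container’s own filesystem is throwaway while the mounted data is not:

    ```sh
    # Create a file inside the container's writable layer (not under /data)...
    docker compose exec minecraft touch /tmp/scratch-file
    # ...it survives a plain restart,
    docker compose restart minecraft
    docker compose exec minecraft ls /tmp/scratch-file
    # ...but is gone once the container is removed and recreated.
    docker compose up -d --force-recreate
    docker compose exec minecraft ls /tmp/scratch-file   # "No such file or directory"
    ```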