![](https://lemmy.max-p.me/pictrs/image/d3667ced-4ea5-4fbf-b229-461c68192570.jpeg)
![](https://lemmy.world/pictrs/image/4271bdc6-5114-4749-a5a9-afbc82a99c78.png)
Yeah steep is putting it mildly, it’s not worth it below a certain scale. What it excels at is highly dynamic environments where things get spun up and down on the regular and all with auto scaling.
Mine’s running on a single docker-compose.yml and it’s like 4 services: the backend, the frontend, the database and pictrs. That’s neither insane nor complicated, and it doesn’t ruin existing setups.
It’s probably one of the easiest services I’ve run in quite a while.
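For a sense of scale, a sketch of what a compose file like that looks like (image tags, ports and the password are placeholders, not my actual config):

```yaml
# hypothetical docker-compose.yml for a small Lemmy instance
services:
  lemmy:
    image: dessalines/lemmy
    depends_on: [postgres, pictrs]
  lemmy-ui:
    image: dessalines/lemmy-ui
    ports:
      - "1234:1234"
    depends_on: [lemmy]
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
  pictrs:
    image: asonix/pictrs
    volumes:
      - ./volumes/pictrs:/mnt
```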
Envoy proxy gang
Prometheus/VictoriaMetrics/Grafana are pretty good; I’ve had no issues with them, and there’s an exporter for damn near anything. Custom exporters are pretty easy to write too.
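As a sketch of how low the bar is: for one-off metrics you don’t even need a real HTTP exporter, node_exporter’s textfile collector will scrape any `.prom` file you drop in its directory. The directory and metric name here are made up for the example:

```shell
# Export a custom metric via node_exporter's textfile collector.
# Real setups point --collector.textfile.directory at something like
# /var/lib/node_exporter; /tmp is used here only so the sketch runs anywhere.
TEXTFILE_DIR="${TEXTFILE_DIR:-/tmp/textfile_collector}"
mkdir -p "$TEXTFILE_DIR"

# Write to a temp file, then rename: the collector never sees a half-written file.
printf 'backup_last_success_timestamp_seconds %s\n' "$(date +%s)" \
  > "$TEXTFILE_DIR/backup.prom.$$"
mv "$TEXTFILE_DIR/backup.prom.$$" "$TEXTFILE_DIR/backup.prom"
```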
```
25G  pictrs
13G  postgres
38G  total
```
Seems fairly reasonable to me
I think it can also get weird when you call other makefiles, like if you go `make -j64` at the top level and that thing goes on to call make on subprojects: that can be a looooot of threads if that `-j` gets passed down. So even on that 64 core machine, you could now have 4096 jobs going, and it surfaces bugs that might not have been a problem when we had 2-4 cores (oh no, make is running 16 jobs at once, the horror).
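The distinction is whether the sub-makes share the parent’s jobserver. A sketch (directory names hypothetical):

```make
# Top level invoked as: make -j64
SUBDIRS = libfoo libbar

good:
	# $(MAKE) inherits the parent's jobserver: 64 jobs total across the tree
	for d in $(SUBDIRS); do $(MAKE) -C $$d; done

bad:
	# a hardcoded "make -j64" gives every sub-make its own 64-job budget,
	# so N subprojects can pile up to N*64 jobs
	for d in $(SUBDIRS); do make -C $$d -j64; done
```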
I forgot about that, I should try it on my new laptop.
Yeah that’s what it does, that was a shitpost if it wasn’t obvious :p
Though I do use ZFS, where you configure the mountpoints in the filesystem itself. But that also ultimately generates systemd mount units under the hood. So I really only need one unit, for `/boot`.
You guys still use fstab? It’s systemd/Linux, you use mount units.
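For reference, a unit like that `/boot` one is only a few lines; the unit’s filename has to match the mount path, and the device path here is a placeholder:

```ini
# /etc/systemd/system/boot.mount
[Unit]
Description=Boot partition

[Mount]
What=/dev/disk/by-label/boot
Where=/boot
Type=ext4

[Install]
WantedBy=local-fs.target
```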
If you want FRP, why not just install FRP? It even has a LuCI app to control it, by the looks of it.
NGINX is also available, at a mere 1kb in size for the slim version; the full version is available too, as is HAProxy. Those will have you more than covered, and they support SSL.
Looks like there’s also acme.sh support, with a matching LuCI app that can handle your SSL certificate situation as well.
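Roughly, on the OpenWrt side (package names from memory, check `opkg list` for the exact ones your build ships):

```shell
opkg update
# FRP client plus its LuCI app; there are matching frps/luci-app-frps
# packages for the server side
opkg install frpc luci-app-frpc
# or a small reverse proxy with SSL support instead
opkg install nginx-ssl
# ACME certificates via acme.sh, with the LuCI app
opkg install acme luci-app-acme
```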
The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to continuously write to 24/7 but not at crazy high speeds, maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size as you can just redownload your steam games. A NAS drive will be a little bit more expensive because it’s assumed to be for backups and data storage.
That said in all cases if you use them with proper redundancy like RAIDZ or RAID1 (bleh) it’s kind of whatever, you just replace them as they die. They’ll all do the same, just not with quite the same performance profile.
Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.
I keep hearing good things about decommissioned HGST enterprise drives on eBay, they’re really cheap.
Wordpress or some of its alternatives would probably work well for this. Another alternative would be static site generators, where you pretty much just write the content in Markdown.
It’s also pretty simple, so it would be a great project to learn basic web development as well.
It could be a disk slowly failing but not throwing errors yet. Some drives really do their best to hide that they’re failing, so I would take even a passing SMART test with a grain of salt.
I would start by making sure you have good recent backups ASAP.
You can test the drive performance by shutting down all VMs and using tools like fio to do some disk benchmarking. It could be a VM causing it. If it’s an HDD in particular, the random reads and writes from VMs can really cause seek latency to shoot way up. Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.
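A fio invocation along these lines will give you random-read latency and IOPS numbers; the path, file size and runtime are just example values, point `--filename` at the storage you actually want to test:

```shell
# random 4k reads against a 1G scratch file, bypassing the page cache
fio --name=randread --filename=/tank/fio.scratch --size=1G \
    --rw=randread --bs=4k --iodepth=16 --direct=1 \
    --runtime=30 --time_based --ioengine=libaio
rm /tank/fio.scratch
```

Watch the `clat` percentiles in particular: on a struggling HDD the p99 latency blows up long before average throughput looks bad.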
There’s always the command escape hatch. Ultimately, the roles you’ll use probably do the same. Even a plugin would do the same: all the ZFS tooling eventually shells out to the zfs/zpool commands, and it’s probably the same with btrfs. Those are just very complex filesystems; it would be unreliable to reimplement them in Python.
We use tools to solve problems, not make it harder for no reason. That’s why command/shell actions exist: sometimes it’s just better to go that way.
You can always make your own plugin for it, but you’re still just writing extra code to eventually still shell out into the commands and parse their output.
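The escape hatch really is just a task like this (dataset name and options hypothetical; for what it’s worth, community.general ships a `zfs` module that does exactly this shelling-out for you):

```yaml
# Sketch of an Ansible task using the command escape hatch for ZFS.
- name: Create the backups dataset
  ansible.builtin.command: zfs create -o compression=zstd tank/backups
  register: zfs_create
  # zfs create errors out if the dataset already exists, so tolerate that case
  failed_when: zfs_create.rc != 0 and 'dataset already exists' not in zfs_create.stderr
  changed_when: zfs_create.rc == 0
```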
Even then, those requirements are easily satisfied by a Raspberry Pi and most other SBCs out there. Seems rather reasonable to dedicate one to HA. It’s not too crazy when you take into consideration how powerful cheapo hardware can be these days.
Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.
Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that’s about it?
I’m a DevOps engineer at work, managing 2k+ VMs that I can more than keep up with. I’d say it varies more with experience and how it’s set up than how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers, it matters very little since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.
You probably need the server to do relatively aggressive keepalive to keep the connection alive. You go through CGNAT, so if the server doesn’t talk over the VPN for, say, 30 seconds, the NAT may drop the mapping and now it’s gone. WireGuard doesn’t send any packets unless it’s actively talking to the other peer, so you need to enable keepalive so it sends something often enough that the connection doesn’t drop, and quickly brings it back up if it does.
Also make sure, if you don’t NAT the VPN, that everything has a route back to the VPN. If 192.168.1.34 (main location) talks to 192.168.2.69 (remote location) over a VPN on 192.168.3.0/24, without NAT, both ends need to know to route it through the VPN network. Your PiVPN probably does NAT, so it works one way but not the other. A traceroute from both ends should give you some insight.
That should absolutely work otherwise.
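The keepalive part is a single line on the peer that sits behind the CGNAT; 25 seconds is the commonly used value, and the keys, addresses and endpoint here are placeholders:

```ini
# wg0.conf on the peer behind CGNAT
[Interface]
PrivateKey = <private key>
Address = 192.168.3.2/24

[Peer]
PublicKey = <server public key>
Endpoint = vps.example.com:51820
AllowedIPs = 192.168.3.0/24, 192.168.1.0/24
# send a keepalive packet every 25s so the CGNAT mapping never expires
PersistentKeepalive = 25
```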
For the backup scenario in particular, it makes sense to pipe them right through to the destination, like `tar -zcvf - somefiles | ssh $homeserver dd of=backup.tar.gz`, or `mysqldump | gzip -c | ssh $homeserver dd of=backup.sql.gz`. Since it’s basically a download from your home server’s perspective it should be pretty fast, and you don’t need any temporary space on the VPS.
File caching might be a little tricky. You might be best off self-hosting some kind of object storage and putting varnish/NGINX/dedicated caching proxy software in front of it on your VPS, so it can cache responses but will ultimately forward to the home server over the VPN when it doesn’t have something cached.
If you use NextCloud for your photos and videos and stuff, it can use object storage instead of local filesystem, so it would work with that kind of setup.
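On the VPS side, that could look something like this with plain NGINX; the cache sizes, hostname, cert paths and the upstream address over the VPN are all placeholders:

```nginx
# /etc/nginx/conf.d/cache.conf (sketch)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=media:50m
                 max_size=20g inactive=7d use_temp_path=off;

server {
    listen 443 ssl;
    server_name media.example.com;
    ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

    location / {
        proxy_pass http://192.168.3.1:9000;  # object storage at home, over the VPN
        proxy_cache media;
        proxy_cache_valid 200 7d;
        # serve stale content if the VPN/home server is briefly down
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```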
Everyone ends up on MS Teams because they bundle it with Office365, so execs have the choice of “free” or another $12/mo/user for Slack. It immediately makes it a case of “justify how Slack is so much better we spend thousands on it when Microsoft gives us Teams for free”. Those execs don’t use chat software in the first place.
That’s why the EU forced them to unbundle Teams.
Plug the drive into the main computer, install Debian on it along with network config and SSH access, put the drive back into the server and power on.
I guess technically you can also make an ISO that will just auto-wipe the drive and install upon booting, but you still need a keyboard to get into the boot menu.
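The prep-on-another-machine route is roughly this (a very rough sketch: `/dev/sdX` is a placeholder for the server’s drive, and you’d still need to chroot in for the bootloader, network config and SSH keys):

```shell
# assumes the server's drive shows up as /dev/sdX on the main computer
mkfs.ext4 /dev/sdX2                        # root partition
mount /dev/sdX2 /mnt
debootstrap stable /mnt http://deb.debian.org/debian
# then: chroot /mnt, install grub + openssh-server, drop in the network
# config and authorized_keys, unmount, and move the drive back
```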