https://u.drkt.eu/PZJz6H.png I don’t know how to embed an image link
It’s not fundamentally different
I already saw copyparty, but it appears to be a pretty large codebase for something so simple. I don’t want to have to keep up with that, because there’s no way I’m reading and vetting all that code; it becomes a security problem.
It is still easier and infinitely more secure to grab a USB drive, a bicycle and just haul ass across town. Takes less time, too.
Sending is someone else’s problem.
It becomes my problem when I’m the one who wants the files, and no free service is going to accept an 80 GB file.
It is exactly my point that I should not have to deal with third parties or something as massive and monolithic as Nextcloud just to do the internet equivalent of smoke signals. It is insane. It’s like someone tells you they don’t want to bike to the grocer 5 minutes away because it’s currently raining and you recommend them a monster truck.
Why is it so hard to send large files?
Obviously I can just dump it on my server and people can download it from a browser, but how are they gonna send me anything? I’m not gonna put an upload form on my site; that’s a security nightmare waiting to happen. HTTP uploads have always been wonky for me, anyway.
Torrents are very finicky with 2-peer swarms.
instant.io (torrents…) has never worked right.
I can’t ask everyone to install a dedicated piece of software just to very occasionally send me large files.
For one, I don’t use software that updates constantly. If I had to log in to a container more than once a year to fix something, I’d figure out something else. My NAS is just hard drives on a Debian machine.
Everything I use runs either Debian or is some form of BSD
what?
The misunderstanding seems to be between software and hardware. It is good to reboot Windows and some other operating systems because they accumulate errors and quirks. It is not good to powercycle your hardware, though. It increases wear.
I’m not on an OS that needs to be rebooted, I count my uptime in months.
I don’t want you to pick up a new anxiety about rebooting your PC, though. Components are built to last, generally speaking. Even if you powercycled your PC 5 times daily you’d most likely upgrade your hardware long before it wears out.
Powercycling is not healthy lol
To me, the appeal is that my workflow depends less on my computer and more on my ability to connect to a server that handles everything for me. Workstation, laptop or phone? Doesn’t matter, just connect to the right IPs and get working. Linux is, of course, the holy grail of interoperability, and I’m all Linux. With a little bit of setup, I can make a lot of things talk to each other seamlessly. SMB on Windows is a nightmare, but on Linux, if I set up SSH keys then I can just open a file manager and type sftp://<hostname> and now I’m browsing that machine as if it were a local folder. I can do a lot of work from my genuinely-trash laptop because it’s the server that’s doing the heavy lifting.
TL;DR -
My workflow becomes “client agnostic” and I value that a lot
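For the curious, the key setup behind that sftp:// trick is a one-time thing. A minimal sketch, assuming OpenSSH on both ends (user and hostname are placeholders):

```
# One-time: generate a key pair on the client
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Copy the public key to the server for passwordless logins
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@myserver

# After that, GNOME Files / Dolphin will mount
#   sftp://myserver
# as a regular folder; from a terminal it's just:
sftp user@myserver
```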
I’m sure there’s ways to do it, but I can’t do it and it’s not something I’m keen to learn given that I’ve already kind of solved the problem :p
I think it’s great you brought up RAID, but I believe when Immich or any software messes things up, it’s not recoverable, right?
RAID is not a backup, no. It’s redundancy. It’ll keep your service up and running in the case of a disk failure and allow you to swap in a new disk with no data loss. I don’t know how Immich works, but I would put it in a container and drop a snapshot any time I update it, so if it breaks I can just revert.
I recommend it over a full disk backup because I can automate it. I can’t automate full disk backups as I can’t run dd reliably from a system that is itself already running.
It’s mostly just to ensure that config files and other stuff I’ve spent years building are available in the case of a total collapse, so I don’t have to rebuild from scratch. In the case of containers, those have snapshots. Anytime I’m working on one, I drop a snapshot first so I can revert if it breaks. That’s essentially a full disk backup, but it’s exclusive to containers.
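As a sketch of that snapshot-first habit, assuming the containers sit on ZFS (the dataset name is illustrative):

```
# Snapshot the container's dataset before updating anything
zfs snapshot tank/containers/immich@pre-update

# If the update breaks it, roll straight back
zfs rollback tank/containers/immich@pre-update
```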
edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID
My method requires that the drives be plugged in at all times, but it’s completely automatic.
I use rsync from a central ‘backups’ container that pulls folders from other containers and machines. These are organized in
/BACKUPS/(machine/container)_hostname/...
The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friend’s place across town.
For example, I back up my home folder on my desktop, which looks like this on the backup container:
/BACKUPS/Machine_Apollo/home/dork/
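Roughly sketched, the pull and the offsite push look like this (hostnames and flags simplified for illustration):

```
#!/bin/sh
# Pull each machine's folders into its own subfolder of /BACKUPS/
rsync -a --delete dork@apollo:/home/dork/ /BACKUPS/Machine_Apollo/home/dork/

# Then mirror the whole tree to the offsite container
# (note: --delete keeps the mirror exact, which is also why
# upstream corruption propagates downstream)
rsync -a --delete /BACKUPS/ offsite:/BACKUPS/
```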
This setup is not impervious to bitflips as far as I’m aware (it has never happened). If a bit flip happens upstream, it will be pushed to backups and become irrecoverable.
doesn’t this just mean the bots hammer your server looping forever?
Yes
How much processing do you do of those forms?
None
It costs me nothing to have bots spending bandwidth on me because I’m not on a metered connection and electricity is cheap enough that the tiny overhead of processing their requests might amount to a dollar or two per year.
I am currently watching several malicious crawlers be stuck in a 404 hole I created. Check it out yourself at https://drkt.eu/asdfasd
I respond to all 404s with a 200 and then serve them that page full of juicy bot targets. A lot of bots can’t get out of it, and I’m hoping that the drive-by bots that look for login pages simply mark it as a hit (because it responded with 200 instead of 404) so a real human has to go check and waste their time.
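On nginx (or the equivalent in your server of choice), the gist is roughly this. A simplified sketch, not a literal config; the trap path and file location are placeholders:

```
server {
    # Answer every would-be 404 with the trap page and a 200 status
    error_page 404 =200 /asdfasd;

    location = /asdfasd {
        # A static page stuffed with links that themselves 404,
        # so naive crawlers chase their own tails forever
        alias /var/www/trap.html;
    }
}
```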
I just cancelled my premium Tuta account because they wouldn’t stop trying to upsell, and now I have to look at their ads here!?
Well if they ever pull another “you must use snap or die”, you’ll have to imagine it. Thankfully, this exists https://github.com/acmesh-official/acme.sh
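For reference, getting a cert out of acme.sh is about two commands, straight from its README (email, domain and webroot are placeholders):

```
# Install acme.sh for the current user
curl https://get.acme.sh | sh -s email=my@example.com

# Issue a certificate using the webroot method
acme.sh --issue -d example.com -w /var/www/example
```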
Iteration one, the original https://drkt.eu/library/Museum/old_website_hw.jpg
Iteration two, taking it seriously https://drkt.eu/library/Museum/ye_olde_server-rack.jpg
Iteration three, evolved LACK rack https://drkt.eu/library/Museum/new_apartment.jpg
Bonus https://drkt.eu/library/Museum/backside_mess.jpg
'Artemis' Server
MOBO : GigaByte MB GA-Z170XP-SLI
CPU : Intel Core i5 6600K 4c/4t
RAM : 2x DDR4 8GB CL14 2133 Kingston HyperX
PSU : ## TO BE ADDED ##
Storage - SATA : SSD 2TB
        - SATA : HDD 4TB
        - SATA : SSD 1TB
'Deimos' Server
MOBO : ASRock H81M-ITX
CPU : Intel Pentium G3220 2c/2t
RAM : 2x DDR3 8GB C8 1600 Crucial Ballistix OC
PSU : ## TO BE ADDED ##
Storage - SATA : HDD 300GB
'Phobos' Server
MOBO : Intel H81 Express Chipset
CPU : Intel Core i3 4330T 2c/4t
RAM : 2x DDR3 4GB 1333
PSU : 65 watts AC/DC adapter
Storage - SATA : SSD 2TB
😏