I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.
They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically”
I suspect that the biggest difference would be running the Postgres DB on an SSD, where the fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, where it makes less of a difference).
Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch it’ll probably be fast enough.
These are all reckons without data to back them up, so maybe do some testing
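One quick way to sanity-check the DB-in-RAM point: compare the on-disk size of the database against the machine’s available memory. A rough sketch (the database name `immich` and the `postgres` user are assumptions - substitute your own):

```shell
# How big is the database on disk? 'immich' is an assumed DB name.
psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('immich'));"

# How much memory is available for the OS page cache?
free -h
```

If the database is much smaller than your available memory, repeated queries will mostly be served from cache and the SSD matters less; if it’s bigger, the SSD’s random-read latency will show up in query times.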
Taking donations for a specific purpose (developing jellyfin core) then spending it on something else (donations to other related projects) is something donors and tax authorities generally frown on
Pretty much - I try and time it so the dumps happen ~an hour before restic runs, but it’s not super critical
`pg_dumpall` on a schedule, then restic to back up the dumps. I’m running Zalando Postgres in kubernetes, so scheduled tasks and intercontainer networking are a bit simpler, but you should be able to run a sidecar container in your compose file
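For a compose setup, a rough sketch of the dump-then-restic flow (container name, paths, and the restic repo location are all illustrative assumptions - adjust to your setup):

```shell
#!/bin/sh
# Nightly dump of all databases, followed by a restic backup of the dumps.
# 'postgres' container name, /backups path, and repo path are illustrative.
set -eu

DUMP_DIR=/backups/postgres
mkdir -p "$DUMP_DIR"

# Dump every database from the running compose container.
docker exec postgres pg_dumpall -U postgres \
  | gzip > "$DUMP_DIR/dumpall-$(date +%F).sql.gz"

# Keep a week of local dumps so restic snapshots have something to diff.
find "$DUMP_DIR" -name 'dumpall-*.sql.gz' -mtime +7 -delete

# Back the dump directory up with restic (repo initialised beforehand).
restic -r /srv/restic-repo backup "$DUMP_DIR"
```

Run it from cron (e.g. `0 2 * * * /usr/local/bin/pg-backup.sh`), and if you want the gap mentioned above, schedule the restic run as a separate cron entry an hour later instead of calling it at the end of the script.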
If you figure it out, I know several companies that would be more than willing to drop 7 figures a year to license the tech from you
Yeah, they are mostly designed for classification and inference tasks; given a piece of input data, decide which of these categories it belongs to - the sort of things you are going to want to do in near real time, where it isn’t really practical to ship off to a data centre somewhere for processing.
Seems pretty reasonable. At the end of the day people have to eat, so projects like this either trundle on as hobby-and-spare-time projects for a few years until people get bored and burnt out, or you find a way to make working on the project a paid gig for the core people
“is not exactly tailored to my specific requirements, aesthetic preferences and built using technology I’m familiar with” = “sucks” apparently
Yeah, I’ve learnt over the years that having non-computer-based creative hobbies is really important. I did some leather working for a while - tools are cheap on AliExpress and it doesn’t take up a ton of space unless you go really deep. Spend a few hours on a weekend in the garage making a thing that is tangible, that I can hold, and that doesn’t require maintenance
Oh don’t get me wrong, 99% of the time I love my career and 15 years in I still get a kick out of crafting code to make the stupid little machines do what I want.
The other 1% of the time - a couple of days a year - I get home at the end of the day with a profound sense that these machines are driving me slowly mad
Things made out of wood don’t suddenly stop working cos you looked away for 15 seconds and Wood v2.1.4 is incompatible with Nails v4.0, but if you upgrade Nails you also have to upgrade Paint to v2.2 and they completely changed their API because the old API wasn’t sufficiently cool anymore
At some point every professional computer person - programmer, sysadmin, whatever - will seriously consider piling all their computers into a big pile, lighting them on fire, and moving to the country to start a new life making things with their hands
This is an “XY problem” - what are you actually trying to achieve?
Clearly you are concerned about… someone… knowing your home IP address - who, and why?
I have a machine at work (no screenshots sorry) that is using ~200GB of RAM as disk cache and still has over 100GB of free RAM - not “used for cache but can be freed if an application needs it”, actually genuinely unallocated.
As in, hardware RAID is a terrible idea and should never be used. Ever.
With hardware RAID, you are moving your single point of failure from your drive to your RAID controller - when the controller fails, and they fail more often than you would expect, you are fucked, your data is gone, nice try, play again some time. In theory you could swap the controller out, but in practice it’s a coin flip whether that will actually work unless you can find exactly the same model controller with exactly the same firmware manufactured on the same production line while the moon was in the same phase, and even then your odds are still only 2 in 3.
Do yourself a favour, look at an external disk shelf/DAS/drive enclosure that connects over SAS and do RAID in software. Hardware RAID made sense when CPUs were hewn from granite and had clock rates measured in tens of megahertz, so offloading things to dedicated silicon made things faster, but that’s not been the case this century.
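If you do go the software route, a minimal mdadm sketch looks like this (device names are assumptions - check `lsblk` first, these commands are destructive; the mdadm.conf path varies by distro):

```shell
# Create a RAID-5 array from three disks in the SAS enclosure.
# /dev/sd[bcd] are illustrative device names - verify yours with lsblk!
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the initial sync, then put a filesystem on the array.
cat /proc/mdstat
mkfs.ext4 /dev/md0

# Persist the array definition so it assembles on boot
# (path is /etc/mdadm/mdadm.conf on Debian-likes, /etc/mdadm.conf elsewhere).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The payoff is exactly the point above: if the host dies, you can plug the drives into any Linux box and run `mdadm --assemble --scan` - no hunting for an identical controller with identical firmware.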
It’s a really wicked problem to be sure. There is work underway in a bunch of places around different approaches to this; take a look at SBoM (software bill-of-materials) and reproducible builds. Doesn’t totally address the trust issue (the malicious xz releases had good gpg signatures from a trusted contributor), but makes it easier to spot binary tampering.
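As a concrete taste of the SBoM side, tools like syft can generate a bill-of-materials for a container image (the image tag and output file here are illustrative):

```shell
# Generate an SPDX-format SBoM for a container image.
# 'alpine:3.19' and 'sbom.json' are illustrative - use your own image.
syft alpine:3.19 -o spdx-json > sbom.json
```

That gives you a machine-readable inventory of what’s actually inside the artifact, which is the raw material for spotting when a shipped binary doesn’t match what the source says should be there.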
Also,

> Arch is the most stable

Are you high?