Oof, that might as well be a fork bomb then
I’m a little teapot 🫖
I’m pretty sure it’s “run as many jobs as there are cores” mode, though if you’re running it in a terminal I always find it best to use nproc minus 1 or 2 so the machine actually stays usable.
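For reference, that looks something like this with GNU make (assuming coreutils’ `nproc` is available):

```shell
# leave one core free so the machine stays responsive during a big build
# (nproc reports available cores; -j sets make's parallel job count)
make -j"$(( $(nproc) - 1 ))"
```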
Gentoo users would be really mad about this if they weren’t still building their web browsers and could get online
Thanks to PipeWire’s PulseAudio emulation, transitioning from one to the other is effectively seamless. Just install the PipeWire PulseAudio package (it’s tiny) after installing the rest of PipeWire, and apps that depend on Pulse just work.
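Rough sketch of what that looks like (package names here assume an Arch-like distro; Debian and Fedora ship similarly named packages):

```shell
# install PipeWire plus its PulseAudio replacement daemon
sudo pacman -S pipewire pipewire-pulse wireplumber
systemctl --user enable --now pipewire pipewire-pulse

# sanity check: pactl should report PipeWire answering as the Pulse server
pactl info | grep "Server Name"
```

If the swap worked, that last line reports something like “PulseAudio (on PipeWire …)”.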
That looks like it can support so much femboy junk
Ad money machine didn’t go brrrrrr
For RSS I like ReadYou, for feeds I like Mastodon with a variety of interests followed. There are a surprising number of orgs on Mastodon these days.
Depends on the SSD; the one I linked is fine for casual home server use. You’re unlikely to see enough of a write workload that endurance will be an issue. That’s an enterprise drive, btw; it certainly wasn’t cheap when it was brand new, and I doubt running a couple of VMs will wear it quickly. (I’ve had a few of those in service at home for 3-4y, no problems.)
Consumer drives have more issues: their write endurance is considerably lower than most enterprise parts’. You can blow through a cheap consumer SSD’s endurance in mere months with a hypervisor workload, so I’d strongly recommend using enterprise drives where possible.
It’s always worth taking a look at drive datasheets when you’re considering them and comparing the warranty lifespan to your expected usage too. The drive linked above has an expected endurance of like 2PB (~3 DWPD, or ~2TB/day, over 3y) so you shouldn’t have any problems there. See https://www.sandisk.com/content/dam/sandisk-main/en_us/assets/resources/enterprise/data-sheets/cloudspeed-eco-genII-sata-ssd-datasheet.pdf
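Back-of-envelope, using the numbers quoted above (2TB/day over a 3-year warranty window):

```shell
# rough total-bytes-written estimate from the warranty numbers above
tb_per_day=2
days=$(( 3 * 365 ))
echo "$(( tb_per_day * days )) TB written over the warranty period"   # ~2.2PB
```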
Older gen retired or old stock parts are basically the only way I buy home server storage now, the value for your money is tremendous and most drives are lightly used at most.
Edit: some select consumer SSDs can work fairly well with ZFS too, but they tend to be higher endurance parts with more baked in over provisioning. It was popular to use Samsung 850 or 860 Pros for a while due to their tremendous endurance (the 512GB 850s often had an endurance lifespan of like 10PB+ before failure thanks to good old high endurance MLC flash) but it’s a lot safer to just buy retired enterprise parts now that they’re available cheaply. There are some gotchas that come along with using high endurance consumer drives, like poor sync write performance due to lack of PLP, but you’ll still see far better performance than an HDD.
+1 automate your backup rolling, set up your monitoring and alerting, and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates, otherwise it basically runs itself.
That’s what I’d do here, used enterprise SSDs are dirt cheap on fleaBay
If I had to guess there was a code change in the PVE kernel or in their integrated ZFS module that led to a performance regression for your use case. I don’t really have any feedback there, PVE ships a modified version of an older kernel (6.2?) so something could have been backported into that tree that led to the regression. Same deal with ZFS, whichever version the PVE folks are shipping could have introduced a regression as well.
Your best bet is to identify which kernel version introduced the regression: do a binary search between the current kernel and the last known-good one to pin down exactly when the issue started, then open an issue with the PVE folks describing the regression.
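The bisect itself is mechanical. A toy sketch of the halving logic (version strings are placeholders; in practice each “test” step means installing that kernel build, pinning it with `proxmox-boot-tool kernel pin <version>`, rebooting, and re-running your workload):

```shell
# toy bisect over a sorted list of kernel builds (placeholder versions)
versions=(6.2.1 6.2.4 6.2.8 6.2.12 6.2.16)
first_bad=2                      # pretend the regression landed in 6.2.8
is_bad() { [ "$1" -ge "$first_bad" ]; }   # stand-in for boot + benchmark

lo=0; hi=$(( ${#versions[@]} - 1 ))
while [ $(( hi - lo )) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  echo "testing ${versions[mid]}"
  if is_bad "$mid"; then hi=$mid; else lo=$mid; fi
done
echo "regression first appears in ${versions[hi]}"
```

With N candidate builds between good and bad, this takes about log2(N) reboots instead of N.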
Or just throw a cheap SSD at the problem and move on, that’s what I’d do here. Something like this should outlast the machine you put it in.
Edit: the Samsung 863a also pops up cheaply from time to time; it has good endurance and PLP. Basically just search fleaBay for SATA drives with capacities of 400/480GB, 800/960GB, 1.6T/1.92T or 3.2T/3.84T and check their datasheets for endurance info and PLP capability. Anything in the 400/800/1600/3200GB sequence is a model with more overprovisioning and higher endurance (usually referred to as mixed use). Those often have 3 DWPD or 5 DWPD ratings and are a safe bet if you have a write-heavy workload.
iowait is indicative of storage not being able to keep up with the performance of the rest of the system. What hardware are you using for storage here?
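While you gather that, a quick way to see whether one device is the bottleneck (`iostat` is in the sysstat package; assuming it’s installed):

```shell
# extended per-device stats: 1-second samples, 5 reports
iostat -x 1 5
# watch r_await/w_await (I/O latency in ms) and %util;
# a disk pinned near 100 %util with double-digit await is saturated
```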
I’d go with cross-fit here personally, or rock climbing
I use Arch, btw.
Serious posting: I got tired of making backports for Debian and Ubuntu. I use Arch BTW.
But can they run Crysis?
Yeah, you’ll be fairly limited as far as GPU solutions go. I have a handful of half-height AMD cards kicking around that were originally shipped in t740s and similar, but they’re really only good for hardware transcoding or hanging extra monitors off the machine - it’s difficult to find a half-height board with a useful amount of VRAM for ML/AI tasks.
distcc, maybe Gluster. Or run a Docker Swarm setup on PVE or something.
Models like those are a little hard to exploit well because of the limited network bandwidth between them. Other mini PC models that have a PCIe slot are fun because you can jam high speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.
If you do want to bump your bandwidth per machine you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you’ll likely have to hang the module out the back through a serial port or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.
Edit: Oh, you’re talking about the larger desktop EliteDesk G1, not the USFF tiny machines. Yeah, you can jam whatever half-height cards into these you want - go wild.
Bus issues usually. Having a disk (or 4) drop out of a ZFS filesystem regularly isn’t a good time.
If you can find a combination of enclosure, driver/firmware and USB port that provides you with a reliable connection to the drive then USB is just another storage bus. It’s generally not recommended because that combination (enclosure, chipset, firmware, driver, port) is so variable from situation to situation but if you know how to address the pitfalls it can usually work fine.
OpenSSH backdoor via a trojaned release of liblzma
Looks like someone fucked up package dependencies somewhere.
I’m surprised they don’t have some basic automated testing running in a VM after new package releases, but I suppose they don’t need it if they can farm that duty out to their free userbase.