• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • From the “redirect the vents” side of things, I’ve been doing this manually for the last 7 years with no ill effects. Last year I added a Flair system and an Ecobee to balance the registers automatically. They have back-pressure detection to protect the HVAC system, so there are always enough vents open. At least in my scenario it’s been a game changer for the third floor of our townhouse. As we’ve headed into the warmer months our bedroom is actually cool in the evenings and the lower floors stay at normal temperatures. During the winter our living space on the second floor was cozy without blasting the bedrooms and making them too hot for sleeping. With the number of vents I had, it cost just over $1K, but that was way cheaper than having the house and system rezoned.

    I’m into smart home stuff, so now I’ve got room-level presence detection tied back to Flair through Home Assistant, and we only cool or heat occupied rooms. My wife is a very happy camper in her now temperature-controlled office, which is only targeted when she’s in it.
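
    For anyone curious, the occupancy piece is just a small Home Assistant automation. A minimal sketch (the entity IDs are placeholders for whatever your presence sensor and Flair vent are actually named; the Flair integration exposes vents as cover entities):

```yaml
# Hypothetical example: close the office vent once the room has been
# empty for 10 minutes. Entity IDs below are illustrative only.
automation:
  - alias: "Close office vent when unoccupied"
    trigger:
      - platform: state
        entity_id: binary_sensor.office_presence
        to: "off"
        for: "00:10:00"
    action:
      - service: cover.close_cover
        target:
          entity_id: cover.office_flair_vent
```

    A matching automation with `to: "on"` and `cover.open_cover` handles the other direction.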


  • Lots of good advice here. I’ve got a bunch of older WD Reds still in service (from before the SMR BS). I’ve also had good luck shucking drives from external enclosures, as well as with decommissioned enterprise drives. If you go that route, depending on your enclosure or power supply, you may run into issues with a live 3.3V SATA power pin causing drives to power-cycle. I’ve never had this issue on mine, but it can be fixed with a little Kapton tape over the pin or a modified SATA power adapter. Shucking or buying used enterprise drives is definitely the cheaper way to get capacity! I’m running at least a dozen shucked drives right now and they’ve been great for my needs.

    Also, if you reach the point of needing more ports than your motherboard has, do yourself a favor and get a quality HBA card flashed to IT mode to connect your drives. The cheapo 4-port cards I originally tried would randomly drop disks in Unraid from time to time. Once I got a good HBA it’s been smooth sailing. It needs to be in IT mode to prevent hardware RAID from kicking in, so that Unraid can see the individual identifiers of the disks. You can flash it yourself, or use an eBay seller like ThArtOfServer who will pre-flash them to IT mode.
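
    If you want to sanity-check that the card is actually passing disks through individually, a quick look at `/dev/disk/by-id` works on most Linux systems. A rough sketch (paths and output vary by distro and hardware):

```shell
#!/bin/sh
# Count the whole-disk identifiers visible to the OS. With an HBA in
# IT mode each physical disk gets its own entry here, which is what
# Unraid keys the array members to. Partition entries are filtered out.
if [ -d /dev/disk/by-id ]; then
  disk_ids=$(ls /dev/disk/by-id | grep -cv -- -part)
  echo "whole-disk identifiers visible: $disk_ids"
else
  disk_ids=0
  echo "no /dev/disk/by-id on this system"
fi
```

    If the count matches your physical drive count, the controller is passing everything through.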

    Finally, be aware that expanding your array is a slippery slope. You start with 3 or 4 drives, and next thing you know you have a rack and a 15+ drive array.



  • Great advice from everyone here. For the transcoding side of things, you want an 8th-gen or newer Intel chip so Quick Sync works with a good level of quality. I’ve been using a 10th-gen i5 for a couple of years now and it’s been great. It regularly handles multiple transcodes and has enough cores to do all the other server stuff without an issue. Note that Plex requires a Plex Pass for hardware transcoding, so if you don’t already have one you could also look at switching to Jellyfin.
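
    A quick way to check that the iGPU is actually exposed for Quick Sync on a Linux host (Plex and Jellyfin both use the render node under /dev/dri; the path below is the usual one for Intel graphics):

```shell
#!/bin/sh
# Check for the DRM render node that hardware transcoding needs.
# /dev/dri/renderD128 is the typical first render node on Intel iGPUs;
# if it's missing, the iGPU isn't enabled or wasn't passed through.
if [ -e /dev/dri/renderD128 ]; then
  qsv_status="render node present"
else
  qsv_status="no render node (iGPU missing or not passed through)"
fi
echo "$qsv_status"
```

    If you run Plex in a container, remember the render node also has to be passed into the container for this to show up inside it.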

    As mentioned elsewhere, an HBA is great once you start getting to large numbers of drives. I haven’t seen the random dropouts with it that I occasionally saw on the cheap SATA PCI cards. If you get one flashed to “IT mode” the drives appear normally to your OS, and you can then build software RAID however you want. If you don’t want to flash it yourself, I’ve had good luck with hardware from The Art of Server.

    I know some people like to use old “real” server hardware for reliability or ECC memory, but I’ve personally had good luck with quality consumer hardware and keeping everything running on a UPS. I’ve learned a lot from serverbuilds.net about how compatibility works between some of the consumer gear, and about making sense of the used enterprise gear that’s useful for this hobby. They also have good info on “budget” build-outs.

    Most of the drives in my rack have been running for years and were shucked from external drives to save money. I think the key to success here has been keeping them cool and on consistent UPS power. Some of mine are in a disk shelf, and some are in a Rosewill case with 12 hot-swap bays. The drives sit at 24–28 degrees Celsius.
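
    If you want to keep an eye on temps from the CLI, smartmontools can pull them per drive. A rough sketch (the device glob and attribute name cover the common SATA case; NVMe and some enterprise drives report temperature differently):

```shell
#!/bin/sh
# Print the temperature (SMART attribute 194, raw value in field 10)
# for every /dev/sdX disk. Assumes smartmontools is installed and the
# script runs with enough privilege to query SMART data.
report=""
for dev in /dev/sd?; do
  [ -e "$dev" ] || continue
  t=$(smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10; exit}')
  report="$report$dev: ${t:-n/a} C\n"
done
printf "%b" "${report:-no /dev/sd? disks found\n}"
```

    Dropping something like this into a cron job or a monitoring agent makes it easy to spot a bay with poor airflow.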

    Moving to a rack is a slippery slope… You start with one rack-mounted server, and soon you’re adding a disk shelf and setting up 10-gigabit networking between devices. Give yourself more drive bays than you need now, if you can, so you have room to expand and don’t have to completely rearrange the rack three years later.

    Also, if your budget can swing it, it’s nice to keep other older hardware around for testing. I keep my “critical” stuff on one server now, so that a reboot while tinkering doesn’t take down everything running the house. That one only gets rebooted or has major changes made when it’s not in use (and my wife isn’t watching Plex). Anything that doesn’t quite need to be up 24/7 gets tested on the other server, which is safe to reboot.


  • I’ve been using one for several years now with one of the documented switches that add multiple ports: https://docs.pikvm.org/ezcoo/#connections. I started with a DIY build and then moved to the v3 HAT from the Kickstarter. In total I’m at about $270 between the Kickstarter HAT and the ezcoo switch, plus the cost of a Pi (which I already had). I can reach 4 machines over my Tailnet and jump between them reliably. I can also control power on my primary server (the others are on a network-managed PDU and can be forcibly reset that way if needed).

    I had an old console from a previous job, but it was so old that it required an ancient version of Java to access the web interface. I’m sure there may be better options, but for my homelab setup the PiKVM has worked well at a price that fit my budget.