

I currently have a Raspberry Pi 3 B+ but I’m aware it’s a bit old and is ARM so I’m thinking of buying a Pi 5.
The Pi 5 lacks an H.264 hardware encoder/decoder, making it unsuitable for most streaming/transcoding purposes.


I can’t speak for client capabilities on Apple devices, but what’s your server hardware? CPU or GPU transcoding?
I have an AMD GPU in my server and have no issues transcoding AV1 and H265 for my less capable clients.
You can also set up Jellyfin in parallel to Plex and give it a whirl.
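In case it helps, here is a rough Docker sketch of running Jellyfin next to Plex (the paths and the `/dev/dri` passthrough for an AMD GPU are assumptions, adjust to your setup):

```shell
# Hypothetical sketch: Jellyfin alongside an existing Plex install.
# Jellyfin serves its web UI on 8096, Plex on 32400, so the two don't clash.
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  --device /dev/dri:/dev/dri \
  jellyfin/jellyfin
```

Pointing it read-only at the same media library lets you compare both servers without risking your Plex setup.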


Sir, this is a /c/selfhosted.


Do you mean Zigbee in general or the ZBT-2?


In addition to these guys knowing what they are doing and pushing firmware updates straight through Home Assistant, every purchase also supports the Open Home Foundation.
I’m pretty sure you can achieve similar performance with cheaper dongles.


When they first released ZHA, the interface was very barebones compared to Z2M. I saw the current Home Assistant interface in their stream on the ZBT-2 and it looks a lot more like a proper Zigbee interface now.
I don’t think there is going to be much of a performance difference between ZHA and Z2M, mostly just how you interact with it.


I have been waiting for them to release the Zigbee equivalent to their ZWA-2. Ordered one.
Does anybody use Zigbee directly in Home Assistant? I’m currently still on Zigbee2MQTT but I’m wondering if I should switch over to the Zigbee integration in Home Assistant.


Yes, but that doesn’t help you with the large providers (Gmail, Outlook, …) unfortunately.


I finally moved my mail server from Hetzner to my homelab.
Pretty smooth sailing so far. For now I’m using Scaleway for outgoing mail since I can’t set a PTR record here, but I might just try sending a few mails without one to see how other providers react.
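For reference, you can check whether an IP already has a PTR record with dig (the address below is a documentation placeholder, not a real one):

```shell
# Reverse DNS lookup for the sending IP (203.0.113.10 is a placeholder).
dig +short -x 203.0.113.10
# Empty output = no PTR record. Big providers also check that the PTR
# name resolves back to the same IP (forward-confirmed reverse DNS).
```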


Does it do ReplayGain? That’s the only feature I’m missing in Finamp on desktop right now.


Self-hosting is trivial and everyone can do it.
Exposing services to the internet is not.
Just like anyone practicing open-heart surgery on dummies is fine, anyone self-hosting inside their own network is fine. You can buy hardware right now that connects to power and Wi-Fi, and you are self-hosting.


Not sure if it counts as “budget friendly”, but the best and cheapest way right now to run decently sized models is a Strix Halo machine like the Bosgame M5 or the Framework Desktop.
Not only does it have 128GB of unified VRAM/RAM, it sips power at 10W idle and 120W under full load.
It can run models like gpt-oss-120b or glm-4.5-air (Q4/Q6) at full context length and even larger models like glm-4.6, qwen3-235b, or minimax-m2 at Q3 quantization.
Running these models is otherwise not currently possible without putting 128GB of RAM in a server mainboard or paying the Nvidia tax for an RTX 6000 Pro.
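For a sense of scale, serving one of those models with llama.cpp (my assumption for the runtime; the model filename and context size are placeholders) looks roughly like:

```shell
# Hypothetical sketch: serve a Q4 quant on a Strix Halo box with llama.cpp.
# -ngl 99 offloads all layers to the iGPU, -c sets the context window.
llama-server \
  -m ./gpt-oss-120b-Q4_K_M.gguf \
  -ngl 99 \
  -c 131072 \
  --host 0.0.0.0 --port 8080
```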


The bridge on your Matrix server acts as a normal (linked) Signal client that can encrypt/decrypt messages from your account.


Assuming you trust your server, no. I would not use it on a third-party Matrix server.


Sure, I got all my Signal/Telegram chats synced to my Matrix server.


That explains why my Matrix <-> Signal bridge was complaining about being disconnected.


Thanks, GitHub thinking it’s merged explains why I wasn’t able to find it.


Do you have a link for that PR?
If you don’t follow their tuning guide, Nextcloud does run very poorly on SQLite and without Redis/caching. Apache also performs significantly worse than nginx + php-fpm.
https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html
It does run very well with Postgres + Redis + php-fpm + OPcache and has been pretty much the center of my self-hosting endeavor since ownCloud times.
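The caching part of that guide boils down to a few config.php entries, which you can set via occ (a sketch assuming a standard install with Redis on localhost, run as the web server user from the Nextcloud directory):

```shell
# Hypothetical sketch of the memcache settings from the Nextcloud tuning guide.
php occ config:system:set memcache.local --value '\OC\Memcache\APCu'
php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
php occ config:system:set redis host --value 'localhost'
php occ config:system:set redis port --value 6379 --type integer
```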


mailcow-dockerized is great, really makes email setup so much easier.
Do you ever send mails to Gmail and Office365? Do you get through the spam filter without a PTR record?


Depends on what they settle on, especially for screen sharing. Many downscale content for people with weaker connections.