Eskating cyclist, gamer and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod titanfall 2 tho.

Introverted, yet I enjoy discussion to a fault.

  • 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • I have no clue what could be causing that. I’d start looking into each link in the chain and making sure it’s working.

    But any halfway reasonable config should be able to handle audio playback, no matter how lossless. Audio-only just doesn’t reach data rates that would choke up… anything.

    Essentially, benchmark file transfers, transcoding, etc. Make sure each step of how it works is in fact working. Check drive SMART health… Whatever you can think of.

    Also logs. No need to read through thousands of lines, but looking at the lines time stamped around when the issue occurs is always a good idea. FFMPEG logs, JF logs, client player logs, does SMB or whatever network drive protocol you’re using have logs? If it does, check em.
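A sketch of that checklist as shell commands — the device name, service name, and media path here are hypothetical placeholders, and each check is guarded so it's skipped where the tool or path is absent:

```shell
# All names below (sda, jellyfin, /mnt/media) are hypothetical examples.

# Drive SMART health (requires smartmontools):
command -v smartctl >/dev/null && smartctl -H /dev/sda

# Jellyfin service logs around the time the issue occurred,
# if it runs under systemd:
command -v journalctl >/dev/null && \
    journalctl -u jellyfin --since "20:00" --until "20:10" --no-pager | tail -n 50

# Benchmark raw read speed from the network share:
[ -f /mnt/media/test.flac ] && dd if=/mnt/media/test.flac of=/dev/null bs=1M

result="checks done"
echo "$result"
```

The point is to test each link in isolation: disk health, server logs, and raw share throughput, so you can tell which stage actually stalls.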












  • MentalEdge@sopuli.xyz to linuxmemes@lemmy.world · Steam on Linux · 3 months ago

    It’s not that hard, actually.

    Got it working with Armored Core VI on KDE. You just have to run the game in gamescope with some flags to enable HDR, and KDE will pick that up as long as your monitor supports HDR and it’s enabled.

    Forbidden West crashes when I enable HDR in the game settings, and Helldivers’ HDR is just so bad it’s not worth using.
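For reference, a minimal sketch of the gamescope approach described above, written as a Steam launch option. The exact flags and resolution are assumptions — check `gamescope --help` on your version:

```shell
# Hypothetical Steam launch options for an HDR game session.
# -f is fullscreen, -W/-H set the output resolution, and --hdr-enabled
# asks gamescope for an HDR swapchain, which KDE Plasma can then pick up.
gamescope -f -W 3840 -H 2160 --hdr-enabled -- %command%
```

`%command%` is Steam's placeholder for the game's own launch command.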



  • Theoretically a load average could be as high as it likes, it’s essentially just the length of the task queue, after all.

    Processes having to queue to get executed is no problem at all for lots of workloads. If you’re not running anything latency-sensitive, a huge load average isn’t a problem.

    Also it’s not really a matter of parallelization. Like I mentioned, ffmpeg impacted other processes even when restricted to running in a single thread.

    That’s because most other processes do their work in small chunks that complete within micro- or milliseconds. Send a network request, parse some data, decode an image, poll a HID device, etc.

    A transcode meanwhile can easily have a CPU running full tilt for well over a second, working on just that one thing. Most processes will show up and go “I need X amount of CPU time” while ffmpeg will show up and go “give me all available CPU time” which is something the scheduler can’t actually quantify.

    It’s like if someone showed up at a buffet and asked for all the food that no-one else is going to eat. How do you determine exactly how much that is, and thereby how much it is safe to give this person without giving away food someone else might’ve needed?

    You don’t. Without CPU headroom it becomes very difficult for the task scheduler to maintain low system latency. It’ll do a pretty good job, but inevitably some CPU time that should have gone to other stuff will go to the process asking for as much as it can get.


  • I think the difference is simply that most processes only have a certain amount that needs accomplishing in a given unit of time. As long as they can get enough CPU time, and do so soon enough after getting in line for it, they can maintain real-time execution.

    Very few workloads have that much to do for that long. But I would expect other similar workloads to present the same problem.

    There is a useful stat which Linux tracks in addition to a simple CPU usage percentage. The “load average” represents the average number of processes that have requested CPU time, but have to queue for it.

    As long as the number is lower than the available number of cores, this essentially means that whenever one process is done running a task, the next in line can get right on with theirs.

    If the load average is less than the number of cores available, that means the cores have idle time where they are essentially just waiting for a process to need them for something. Good for time-sensitive processes.

    If the load average is above the number of cores, that means some processes are having to wait for several cycles of other processes having their turn, before they can execute their tasks. Interestingly, the load average can go beyond this threshold way before the CPU hits 100% usage.

    I found that I can let my system get up to a load average of about 1.5 times the number of available cores before the lag becomes noticeable when playing on one of the servers I run.

    And whenever ffmpeg was running, the load average would spike to 10-20 times the number of cores. Not good.
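That 1.5× rule of thumb can be checked with a small sketch. It reads the standard `/proc/loadavg` interface; the 1.5 threshold is just the value from this comment, not a universal constant:

```shell
# Compare the 1-minute load average against the core count.
cores=$(nproc)
load=$(cut -d ' ' -f 1 /proc/loadavg)

echo "load average: $load, cores: $cores"

# awk exits 0 (true) when the load exceeds 1.5x the core count.
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l > 1.5 * c) }'; then
    verdict="high"
else
    verdict="ok"
fi
echo "verdict: $verdict"
```

A verdict of "high" here means processes are queueing well beyond what the cores can drain promptly, which is when latency-sensitive workloads start to suffer.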


  • I manage a machine that runs both media transcodes and some video game servers.

    The video game servers have to run in real-time, or very close to it. Otherwise players using them suffer noticeable lag.

    Achieving this while an ffmpeg process was running was completely impossible, no matter what I did to limit ffmpeg’s use of CPU time. Even when running it at the lowest priority it impacted the game server processes running at top priority. Even if I limited it to one thread, it was affecting things.

    I couldn’t understand the problem. There was enough CPU time to go around to do both things, and the transcode wasn’t even time sensitive, while the game server was, so why couldn’t the Linux kernel just figure it out and schedule things in a way that made sense?

    So, for the first time I read up on how computers actually handle processes, multi-tasking and CPU scheduling.

    As FFMPEG is an application that uses ALL available CPU time until a task is done, I came to the conclusion that due to how context switching works (CPU cores can only do one thing, they just switch out what they do really fast, but this too takes time) it was causing the system to fall behind on the video game processes when the system was operating with zero processing headroom. The scheduler wasn’t smart enough to maintain a real-time process in the face of FFMPEG, which would occupy ALL available cycles.

    I learned the solution was core pinning. Manually setting processes to run on certain cores of the CPU. I set FFMPEG to use only one core, since it doesn’t matter how fast it completes. And I set the game processes to use all but that one core, so they don’t accidentally end up queueing for CPU time on a core that doesn’t have the headroom to allow the task to run within a reasonable time range.

    This has completely solved the problem, as the game processes and FFMPEG no longer wait for CPU cycles in the same queue.
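A minimal sketch of that core-pinning setup using `taskset` from util-linux — the core ranges and the ffmpeg/game-server command lines are placeholders for whatever you actually run:

```shell
# Pin the transcode to core 0 only (placeholder ffmpeg invocation):
#   taskset -c 0 ffmpeg -i input.mkv -c:v libx264 output.mkv
# Run the latency-sensitive process on every other core (1-7 on an 8-core box):
#   taskset -c 1-7 ./game_server
# Re-pin an already-running process by PID:
#   taskset -cp 1-7 "$(pgrep -f game_server)"

# Safe, runnable demo of the same mechanism: pin a trivial command to core 0,
# then read back the current shell's allowed-CPU list.
taskset -c 0 true
affinity=$(taskset -cp $$ | awk -F': ' '{print $2}')
echo "current affinity: $affinity"
```

With the two workloads pinned to disjoint core sets, ffmpeg can saturate its core indefinitely without ever landing in the queue the game servers depend on.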




  • Consider using the KDE keyboard shortcut tools to set up a permanent paste keybind instead of using the history.

    For example, I have a keybind that sends a known mouse movement input, which I use to set that known mouse input to always correspond to ten centimeters of on-screen movement.

    Using a keybind would remove the need to ever select the right item from the history, and reduce the clutter in it for copy-pasting other things.