It looks like the !buildapc community isn’t super active, so I apologize for posting here. Mods, let me know if I should post there instead.

I built my first PC when I was, I think, 10-11 years old. Built my next PC after that and then sort of moved toward pre-made HP/Dell/etc. My last PC’s mobo just gave out and I’m looking to replace the whole thing. I’ve read over the last few years that prefabs from HP/Dell/etc. have gone to shit and don’t really work like they used to. Since I’m looking to expand comfortably, I’ve been thinking of giving building my own a shot again.

I remember, when I was a young lad, there were two big pain points when putting the rig together: motherboard alignment with the case (I shorted two mobos by having them touch the bare metal of the grounded case; not sure how that happened but it did) and CPU pin alignment so you don’t bend any pins when inserting the chip into the socket.

Since it’s been several decades since my last build, what are some things I should be aware of? Things I should avoid?

For example, I only recently learned what M.2 SSDs are. My desktop has (had) SATA 3.5" drives, only one of which is an SSD.

I’ll admit I am a bit overwhelmed by some of my choices. I’ve spent some time on pcpartpicker and the sheer number of options doesn’t help. Most of my time is spent in code development (primarily containers and Node). I am planning on installing Linux (Ubuntu, most likely) and I am hoping to tinker with some AI models, something I haven’t been able to do with my now-broken desktop due to its age. For ML/AI, I know I’ll need some sort of GPU, knowing only that NVIDIA cards require closed-source drivers. While I fully support FOSS, I’m not an OSS purist and fully accept that using closed-source drivers on Linux may not be avoidable. Happy to take recommendations on GPUs!

Since I also host a myriad of self-hosted apps on my desktop, I know I’ll need to beef up my RAM (I usually go for the max or at least plan for it).

My main requirements:

  • Intel i7 processor (I’ve tried i5s and they can’t keep up with what I code; I know i9s are the latest hotness but don’t think the price is worth it; I’ve also tried AMD processors before and had terrible luck. I’m willing to try them again but I’d need a GOOD recommendation)
  • At least 3 SATA ports so that I can carry my drives over
  • At least one M.2 port (I cannibalized a laptop I recycled recently and grabbed the 1TB M.2 card)
  • On-board Ethernet/NIC (on-board wifi/bluetooth not required, but won’t complain if they have them)
  • Support at least 32 GB of RAM
  • GPU that can support some sort of ML/AI with DisplayPort (preferred)

Nice to haves:

  • MoBo with front USB 3 ports but will accept USB 2 (C vs A doesn’t matter)
  • On-board sound (I typically use headphones or bluetooth headset so I don’t need anything fancy. I mostly listen to music when I code and occasionally do video calls.)

I threw together this list: https://pcpartpicker.com/list/n6wVRK

It didn’t matter to me if it was in stock; just wanted a place to start. Advice is very much appreciated!

EDIT: WOW!! I am shocked and humbled by the great advice I’ve gotten here. And you’ve given me a boost in confidence in doing this myself. Thank you all and I’ll keep replying as I can.

  • tabular@lemmy.world · 9 months ago

    The difference in responsiveness between a hard drive and an SSD is night and day. NVMe is even faster, but you won’t notice unless you move a hell of a lot of data around. Motherboards with at least one M.2 NVMe slot are common, so installing the OS on one is an option. Hard drives give you more storage per dollar, but unless capacity is a significant factor I suggest using SSDs (also quieter than a spinning disk!). More info on storage formats in this video.

    Recent generations of motherboards use DDR5 RAM, which was very expensive on release. I think the price has come down, but I’m not up to date on this generation. You may be able to save money building a DDR4 system, but you’ll be stuck on a less supported platform.

    AMD had like ~10 years of bad/power-hungry processors while Intel stagnated, re-releasing 4-core processors over and over. AMD made a big comeback with their Ryzen series, becoming the best bang for the buck and then even overtaking Intel. I think it’s pretty even now.

    If you don’t intend to game or run certain compute workloads, then you can avoid buying a GPU entirely. Integrated graphics have come quite far (still low end compared to a dedicated GPU). Crypto mining, Covid and now AI have made the GPU market expensive and boring. Nvidia has more higher-end cards, the mid range is way more expensive for both, and the low end sucks ass. On Linux, AMD GPU drivers come with the OS, but for Nvidia you have to get their proprietary drivers (Linux gaming has come a long way).

    • youmaynotknow@lemmy.ml · 9 months ago

      DDR5 has gone down dramatically compared to launch. You can get 64GB with a very fast bus for under 200 dollars now; at launch, 32GB would easily set you back 250+.

      AMD has made a killing with Ryzen. Never mind the new naming convention that Intel came up with, which makes it even more complicated to choose the right CPU for your use case. Ridiculous.

      As for Nvidia GPU drivers, at the end of the day they just work, regardless of their proprietary-driver philosophy (which, again, I agree sucks). But if the OP is going to be doing AI development, machine learning and all that cool stuff, he’d be better served by getting a few CUDA TPUs. You can get those anywhere from 25 dollars to less than 100, and they come in all types (USB, PCI, M.2). https://coral.ai/products/#prototyping-products

      I have 1 USB Coral running the AI on my Frigate docker for 16 cameras, and my CPU never reaches more than 12% while the TPU itself barely touches 20% utilization. Put 2 of those bad boys together, and the CPU would probably not even move from idle 🤣

      • Fubber Nuckin'@lemmy.world · 9 months ago

        Hold on a second, how come every time I look for TPUs I get a bunch of not-for-sale Nvidia and Google cards, but this just exists out there and I’ve never heard of it?

        • youmaynotknow@lemmy.ml · 8 months ago

          I only found out about those about 6 months ago, and it was by chance while going over the UnRaid forum for Frigate, so I decided to do some research. It took me almost 4 months to finally get my paws on one. They were seriously scarce back then, but have been available for a couple of months now; I only finally got mine at the end of November. They seem to follow an availability trend similar to Raspberry Pis.

      • CeeBee@lemmy.world · 9 months ago

        getting a few CUDA TPUs

        https://coral.ai/products/#prototyping-products

        Those aren’t “CUDA” anything. CUDA is a parallel processing framework by Nvidia and for Nvidia’s cards.

        Also, those devices are only good for inferencing smaller models for things like object detection. They aren’t good for developing AI models (in the sense of training). And they can’t run LLMs. Maybe you can run a smaller model under 4B, but those aren’t exactly great for accuracy.

        The best you could hope for is to run a very small instruct model trained on very specific data (like robotic actions) that doesn’t need accuracy in the sense of “knowledge accuracy”.

        And completely forget about any kind of generative image stuff.
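
        Just to give a sense of the scale these things operate at, here’s roughly what Edge TPU inference looks like through the tflite_runtime bindings (a sketch only: the model filename stands in for one of the precompiled detection models from coral.ai, and the delegate name assumes a Linux install of libedgetpu):

        # Rough sketch: running a small, Edge-TPU-compiled detection model with tflite_runtime.
        import numpy as np
        from tflite_runtime.interpreter import Interpreter, load_delegate

        interpreter = Interpreter(
            model_path="ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite",  # placeholder model name
            experimental_delegates=[load_delegate("libedgetpu.so.1")],            # Edge TPU delegate on Linux
        )
        interpreter.allocate_tensors()

        # Inputs are tiny (e.g. 300x300x3 uint8), which is part of why these models stay so small.
        in_detail = interpreter.get_input_details()[0]
        _, height, width, _ = in_detail["shape"]
        frame = np.zeros((1, height, width, 3), dtype=np.uint8)  # stand-in for a resized camera frame

        interpreter.set_tensor(in_detail["index"], frame)
        interpreter.invoke()
        boxes = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])  # output layout varies by model
        print(boxes.shape)

        That’s perfect for tagging objects in a camera feed, but it’s a completely different class of workload from training a model or running an LLM.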

        • youmaynotknow@lemmy.ml · 9 months ago

          Same reply. And you can add as many TPUs as you want to push it to whatever level you want. At 59 bucks a piece, they’ll blow any 4070 out of the water for the same cost or less. But to the OP: you don’t have to believe any of us. You’re in that field; I’m sure you can find the info on whether these would fit your needs or not.

          • CeeBee@lemmy.world · 9 months ago

            And you can add as many TPUs as you want to push it to whatever level you want

            No, you can’t. You’re going to be limited by the number of PCIe lanes. But putting that aside, those Coral TPUs don’t have any memory, which means for each operation you need to shuffle the relevant data over the bus to the device for processing and then back again. You’re going to be doing this thousands of times per second (likely much more), and I can tell you from personal experience that running AI like that is painfully slow (if you can even get it to work that way in the first place).

            You’re talking about the equivalent of buying hundreds of dollars of groceries, and then getting everything home 10km away by walking with whatever you can put in your pockets, and then doing multiple trips.

            What you’re suggesting can’t work.

            • youmaynotknow@lemmy.ml · 9 months ago

              Let’s get this out of the way: not a single consumer-grade board has more than 16 lanes on one PCIe slot. With the exception of 2 or 3 very expensive new boards out there, you’ll be hard pressed to find a board with 3 slots giving you a total max of 28 lanes (16+8+4). So, regardless of TPU or GPU, that’s going to be your limit.

              GPUs are designed as general-purpose processors that have to support millions of different applications and pieces of software. So while a GPU can run multiple functions at once, in order to do so it must access registers or shared memory to read and store the intermediate calculation results. And since the GPU performs tons of parallel calculations on its thousands of ALUs, it also expends large amounts of energy accessing memory, which in turn increases the footprint of the GPU.

              TPUs are application-specific integrated circuits (ASICs) designed specifically to handle the computational demands of machine learning and accelerate AI calculations and algorithms. They were created as a domain-specific architecture: instead of a general-purpose processor like a GPU or CPU, they were designed as a matrix processor specialized for neural-network workloads. Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.

              Get your facts straight and read more before you try to send others on wild goose chases. As I said, the OP already works in this field; it shouldn’t be hard for him to find the information and make an educated decision.

              • CeeBee@lemmy.world · 9 months ago

                A lot of what you said is true.

                Since the TPU is a matrix processor instead of a general purpose processor, it removes the memory access problem that slows down GPUs and CPUs and requires them to use more processing power.

                Just no. Flat out no. Just so much wrong. How does the TPU process data? How does the data get there? It needs to be shuttled back and forth over the bus. Doing this for a 1080p image worth of data several times a second is fine. An uncompressed 1080p image is about 8MB. Entirely manageable.

                Edit: it’s not even 1080p, because the image would get resized to the input size. So again, 300x300x3 for the best model I could find.

                /Edit

                Look at this repo. You need to convert the models using the TFLite framework (TensorFlow Lite), which is designed for resource-constrained edge devices. The max input resolution is 224x224x3; I would imagine it can’t handle anything larger.

                https://github.com/jveitchmichaelis/edgetpu-yolo/tree/main/data
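
                For reference, that conversion step looks roughly like this (a sketch only: the SavedModel path, input size and calibration data are placeholders, and the result still has to go through Google’s edgetpu_compiler before the Coral will accept it):

                # Rough sketch of quantizing a model with the TFLite converter for an Edge TPU.
                import numpy as np
                import tensorflow as tf

                def representative_data_gen():
                    # Placeholder calibration data; a real conversion needs samples from your dataset.
                    for _ in range(100):
                        yield [np.zeros((1, 224, 224, 3), dtype=np.float32)]

                converter = tf.lite.TFLiteConverter.from_saved_model("my_detector_savedmodel")  # placeholder path
                converter.optimizations = [tf.lite.Optimize.DEFAULT]
                converter.representative_dataset = representative_data_gen
                converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]  # full int8 for the Edge TPU
                converter.inference_input_type = tf.uint8
                converter.inference_output_type = tf.uint8

                with open("model_quant.tflite", "wb") as f:
                    f.write(converter.convert())
                # Then: edgetpu_compiler model_quant.tflite (a separate CLI tool from Google).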

                Now look at the official model zoo on the Google Coral website.

                https://coral.ai/models/

                Not a single model is larger than 40MB, whereas LLMs start at well over a gig for even the smaller (and inaccurate) models. The good ones start at about 4GB, and I frequently run models at about 20GB. The size in parameters really makes a huge difference.
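
                Quick back-of-envelope math on why that gap matters (a rough estimate: parameter count times bytes per parameter, ignoring activations and runtime overhead):

                # Rough model footprints in GB: parameter count * bytes per parameter.
                coral_zoo_max = 40e6 / 1e9        # ~0.04 GB, the biggest models in the Coral zoo
                llm_7b_fp16   = 7e9 * 2 / 1e9     # ~14 GB for a 7B-parameter LLM at 16-bit weights
                llm_7b_int4   = 7e9 * 0.5 / 1e9   # ~3.5 GB for the same model 4-bit quantized
                print(f"{coral_zoo_max:.2f} GB vs {llm_7b_int4:.1f}-{llm_7b_fp16:.0f} GB")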

                You likely/technically could run an LLM on a Coral, but you’re going to wait on the order of double-digit minutes for a basic response, if not way longer.

                It’s just not going to happen.

                • youmaynotknow@lemmy.ml · 9 months ago

                  OK man, don’t pop a vein over this. I’m a hobbyist with some experience, but a hobbyist nonetheless. I’m speaking from personal experience, nothing else. You may well be right (and thanks for the links, they’re really good for me to learn even more).

                  I guess, at the end of the day, the OP will need to make an informed decision on what will work for him while adhering to his budget.

                  I’m glad to be here, because I can help people (at least some times) and learn at the same time.

                  I just hope the OP ends up with something that’ll fit his needs and budget. I will be adding a K80 to my rig soon, only because I can let go of 50 bucks and want to test it until it burns.

                  I wish you all a very nice weekend. Keep tweaking, it’s too much fun.

                  • CeeBee@lemmy.world · 8 months ago

                    OK man, don’t pop a vein over this

                    That’s incredibly rude. At no point was I angry or enraged. What you’re trying to do is minimize my criticism of your last comment by intentionally making it seem like I was unreasonably angry.

                    I was going to continue with you in a friendly manner, but screw you. You’re an ass (and also entirely wrong).

    • CosmicTurtle@lemmy.world (OP) · 9 months ago

      I was really hoping to cannibalize the 32 GB of DDR3 RAM, but I couldn’t find a MoBo that supports it anymore. Then I saw DDR5 is the latest!

      I don’t really do any gaming. If I wasn’t going to tinker with AI, I’d just need a card for dual DisplayPort output. I can support HDMI but…I prefer DP

      • youmaynotknow@lemmy.ml · 9 months ago

        The 4070 TI will give you quite a few years out of it for sure. You could also completely forego the GPU and get a couple of CUDAs for a fraction of the cost. Just use the integrated graphics and you’re golden.

          • CeeBee@lemmy.world · 9 months ago

            Are CUDAs something that I can select within pcpartpicker?

            I’m not sure what they were trying to say, but there’s no such thing as “getting a couple of CUDA’s”.

            CUDA is a framework that runs on Nvidia hardware. It’s the hardware that has “CUDA cores”, which are large numbers of low-power processing units. AMD calls them “stream processors”.
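
            In practice, “having CUDA” just means an Nvidia card whose driver and toolkit a framework can see. For example, with PyTorch (assuming a CUDA-enabled build of torch is installed; this is only a sketch):

            # Quick check that a CUDA-capable Nvidia GPU is visible to the framework.
            import torch

            if torch.cuda.is_available():
                print("CUDA device:", torch.cuda.get_device_name(0))
            else:
                print("No CUDA device found; running on CPU.")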

          • youmaynotknow@lemmy.ml · 9 months ago

            I misspoke, and I apologize. I could not recall the term TPU, so I just went with the name of the protocol (CUDA). Nvidia has various TPU devices that use the CUDA protocol (like the K80, for example). TPUs (Tensor Processing Units) are coprocessors designed to run some GPU-intensive tasks without the expense of an actual GPU. They are not a one-to-one replacement, as they perform calculations in completely different ways.

            I believe you would be well served by researching a bit and then making an informed decision on what to get (TPU, GPU or both).

        • CeeBee@lemmy.world · 9 months ago

          You could also completely forego the GPU and get a couple of CUDAs for a fraction of the cost.

          What is this sentence? How do you “get a couple of CUDA’s”?

          • youmaynotknow@lemmy.ml · 9 months ago

            Dude, you KNOW I’m talking about TPUs. The name escaped my mind at the moment. Sorry if my English is not up to your royalty level. Are you really so bored that you have to make a party out of that? Ran out of credits on pornhub or something?