I’m looking to build a low-end Ollama LLM server to improve Home Assistant voice control, Immich image recognition, and a few other services. With the current cost of hardware components like memory, I’m looking to build something small but somewhat expandable.

I have an old micro-ATX form factor computer that I’m thinking will be a good candidate for an upgrade. I’d love recommendations on motherboard, processor, and video card combos that would likely be compatible and sufficient to run a decent server while keeping costs low: basically, the best bang for the buck. I have a couple of M.2 SSDs I can repurpose. I’d prefer a motherboard with 2.5Gbit Ethernet, but otherwise I’m open.

I’d also appreciate recommendations on sites that sell good-quality memory at reasonable prices and ship to the US. I’d be willing to look at lightly used components, too.

Any advice on any of these topics would be greatly appreciated. The advice I’ve found so far is all out of date: with crypto fading, video cards are no longer as expensive, but LLM data centers are eating up memory, reserving it before it’s even manufactured.

  • chrash0@lemmy.world · 15 hours ago

    honestly it’s hard to beat Macs these days in this space for two reasons:

    • unified memory means you don’t have to load up on RAM just to hold the model and then also shell out for a video card with barely enough VRAM to fit even a basic language model
    • their supply chain is solid and has mostly avoided the constraints that other OEMs and parts manufacturers are struggling with

    pricing is tough. sure, crypto is on its way out, but GPUs are still the platform of choice for most neural net workloads (outside of SoCs like Apple M-series). i built a PC in late 2024, and it’s easily worth twice what i paid for it.

      • chrash0@lemmy.world · 12 hours ago

        super fair. i am a Linux guy normally, i’m just being honest. i wish there were a better, more open alternative.

        if you want to go the Linux route it’s going to cost: get at least 32GB of RAM and at least a 4090 to run the kinds of models you’re asking about. it’s the way she goes
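For context, a rough sketch of the sizing math behind that “32GB of RAM / at least a 4090” call. The quantization widths and overhead factor below are ballpark assumptions about typical 4-bit quantized local models, not measurements:

```python
# Back-of-the-envelope memory estimate for a quantized local LLM.
# The constants are assumptions (typical 4-bit quantization width,
# ~20% extra for KV cache and runtime), not measured numbers.

def estimate_gb(params_billion: float, bytes_per_weight: float,
                overhead: float = 1.2) -> float:
    """Weight count times quantization width, plus an assumed
    ~20% for KV cache and runtime overhead."""
    return params_billion * bytes_per_weight * overhead

# 4-bit quantization is roughly 0.5 bytes per weight:
print(estimate_gb(8, 0.5))   # an 8B model: ~4.8 GB, fits mid-range GPUs
print(estimate_gb(70, 0.5))  # a 70B model: ~42 GB, exceeds a 4090's 24 GB
```

By this estimate an 8B model at 4-bit fits comfortably in consumer VRAM, while a 70B model overflows even a 4090, which is where unified memory (or CPU offload into system RAM) comes in.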

      • ryokimball@infosec.pub · 11 hours ago

        Apple silicon is more energy efficient, but the latest Intel and AMD CPUs deliver more processing power and can also share a significant amount of RAM with the GPU / AI components.

    • curbstickle@anarchist.nexus · 14 hours ago

      Going to second this; it’s all my M2 does right now. Putting together a solution for the office with some M4s.

      It’s a lot of bang for the buck specifically for LLM use, despite being horribly overpriced otherwise.

    • irmadlad@lemmy.world · edited, 13 hours ago

      > i built a PC in late 2024, and it’s easily worth twice what i paid for it.


      I wrote the vendor and asked him whether the decimal was in the right place or whether this was the model that was beta-testing alien technology. Has to be a misprint.