Something to handle code, text and math.

  • thingsiplay@lemmy.ml · 4 hours ago

    I run local LLMs with 8 GB VRAM and 32 GB system RAM, thanks to Vulkan support. My GPU is an RX 7600. I can run qwen/qwen3.6-35B-A3B-Q4_K_M.gguf and gemma-4-26B-A4B-it-Q4_K_M.gguf, for example. The GPU's VRAM gets filled first and the rest spills over into system RAM, which is slower, but at least bigger models fit and run. I just need to lower the context length, which has a big impact (my current custom value is 64k, for anyone who wants to know). There's a rough sketch of this setup at the end of this comment.

    But this is still highly limited and not competitive at all. I mostly play around with it and occasionally ask a question here or there, and that's it. So if you are serious about your system, you need something faster with more than just 8 GB of VRAM.
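
    For anyone who prefers code over a GUI client, the same two knobs look roughly like this with llama-cpp-python (a sketch with illustrative values, not my exact setup; the filename is just the model mentioned above):

      # Rough sketch -- values are illustrative, not a recommendation.
      from llama_cpp import Llama

      llm = Llama(
          model_path="qwen3.6-35B-A3B-Q4_K_M.gguf",  # quantized model file as above (hypothetical path)
          n_gpu_layers=20,   # however many layers fit into the 8 GB card; the rest stays in system RAM
          n_ctx=65536,       # 64k context -- lowering this is what frees up the most VRAM
      )

      out = llm("Explain in one sentence what offloading layers to the GPU does.", max_tokens=128)
      print(out["choices"][0]["text"])

    Setting n_gpu_layers=-1 would try to put every layer on the GPU, which is exactly what doesn't fit with 8 GB.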

    • Domi@lemmy.secnd.me · 3 hours ago

      As a side note, Qwen3.6-27B is much more capable than Qwen3.6-35B, even though it is much slower.

      https://huggingface.co/unsloth/Qwen3.6-27B-GGUF

      For coding tasks where you don't mind waiting, you should be able to just barely squeeze the 8-bit quantized version into 32 GB RAM + 8 GB VRAM (rough numbers at the end of this comment) and have a pretty competent local model. 4-bit quants work, but they have issues with complex tool calls.

      If you use the MTP branch of llama.cpp (and a suitable model) you can even double or triple your token generation speed: https://github.com/ggml-org/llama.cpp/pull/22673

      For easier tasks, disable reasoning for instant responses.
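
      Rough numbers behind the “barely squeeze in” claim above (back-of-the-envelope only; the real file size depends on the exact quant):

        # Assumption: Q8_0 GGUF quants come out to roughly 8.5 bits per weight,
        # plus a few GB for KV cache, compute buffers and the OS.
        params = 27e9                  # Qwen3.6-27B
        bits_per_weight = 8.5
        weights_gb = params * bits_per_weight / 8 / 1e9
        overhead_gb = 4                # rough allowance for context and buffers
        print(f"~{weights_gb:.1f} GB weights, ~{weights_gb + overhead_gb:.1f} GB total vs 40 GB (32 RAM + 8 VRAM)")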

      • thingsiplay@lemmy.ml · 2 hours ago

        I probably have to wait for my client (one for noobs) to support MTP. So until then I'll play around with what I have. I'm not that deep into AI anyway; I mostly play around and only use it occasionally for help. But thanks for the suggestion.

        I'm still experimenting and have only just started using custom settings. What makes these “bigger” models more usable is lowering the context length to free up some VRAM and, in exchange, loading more of the model itself into VRAM. For example, I'm trying this with a 31B unsloth Gemma 4 model at Q3_K_M and get 4 tok/sec. It's slow and doesn't have a huge context, but for occasional questions that's tolerable, given the hardware I have. (A rough estimate of why context length matters so much is at the end of this comment.)

        My main models are the previously mentioned 35B-A3B and 26B-A4B (models where only a few billion parameters are active out of a bigger pool) anyway, as they are pretty fast at 17 to 50 tok/sec, while the quality is acceptable and not really much different from the “bigger” models I can run.
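
        To put a number on the context/VRAM trade-off, this is the rough KV-cache estimate I go by (the layer and head counts below are placeholders, not the real values for these models -- check the GGUF metadata):

          # KV cache ~= 2 (K and V) * layers * kv_heads * head_dim * context_len * bytes_per_element
          n_layers, n_kv_heads, head_dim = 48, 8, 128   # placeholder model dimensions
          bytes_per_elem = 2                            # fp16 cache
          for ctx in (8_192, 32_768, 65_536):
              kv_gb = 2 * n_layers * n_kv_heads * head_dim * ctx * bytes_per_elem / 1e9
              print(f"{ctx:>6} tokens -> ~{kv_gb:.1f} GB KV cache")

        So going from 64k down to 8k context can free several GB, which buys a lot of extra layers on an 8 GB card.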

  • Eager Eagle@lemmy.world · 14 hours ago (edited)

    heavily depends on the model and quantization level (rough size estimator at the end of this comment)

    choose the model you want on this website and it’ll give you some specs likely to run it

    https://runthisllm.com/

    any/most distros will do, especially if you run it on Docker

    if you're going with Intel cards (best $ per GB of VRAM right now), you could get a decent machine for under $3k
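
    to put numbers on the quantization part, something like this gets you in the ballpark (the bits-per-weight figures are rule-of-thumb values; the site above is more precise):

      # Approximate effective bits per weight for common GGUF quants (rule-of-thumb values).
      BPW = {"Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5}

      def model_size_gb(params_billion: float, quant: str) -> float:
          """Rough size of the weights alone, excluding KV cache and buffers."""
          return params_billion * 1e9 * BPW[quant] / 8 / 1e9

      for quant in ("Q4_K_M", "Q8_0"):
          print(f"30B @ {quant}: ~{model_size_gb(30, quant):.0f} GB")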

  • monovergent@lemmy.ml · 12 hours ago (edited)

    A GPU with 16 GB of VRAM, models stored on an SSD, and the rest of the computer doesn't have to be crazy. Intel Arc is the best bang for the buck at the moment. You can get LLMs running on 8 GB cards or even on the CPU, but IMO such small models are more novelties than workhorses. I personally use Debian, but you'll be fine as long as your distro's repo has drivers recent enough for your GPU.

    For perspective, I’m using such a build to help with boilerplate code, single-use scripts that I don’t have the patience to trial-and-error (like ones that have to deal with directory structures and special characters), getting an idea of what’s what when decompiling and reverse engineering, brainstorming tip-of-the-tongue ideas, and upscaling images.

    • thingsiplay@lemmy.ml · 4 hours ago

      I'm on the low end with 8 GB VRAM, running models partially on the GPU and partially in system RAM. That makes it halfway usable. I'm not an AI guy at all and mostly just play around with it. Occasionally it's useful here and there for simple stuff like, as you suggest, brainstorming, or extracting text from images and translating them. I also used it to help with programming while I was offline for a month, asking questions and refactoring code and functions just to see what can be done.

      Anyone wanting to use it as a main tool and a replacement for ChatGPT and the like clearly needs stronger hardware. I wish I had 16 GB… this is extremely limiting. But token speed is at least often 17 tokens per second and sometimes over 50. That's about what I can do.

  • Ftumch@lemmy.today · 12 hours ago

    If you need or want to run an LLM on limited hardware, you may want to look into so-called bitnets with ternary connections (a toy sketch of the idea is at the end of this comment). These should be efficient enough to run an OK LLM on a CPU with 16 GB of RAM, if not less. Unfortunately they're barely out of the experimental stage, so you'll probably have to compile BitNet.cpp yourself or wait a few months until full support lands in Ollama.

    I haven’t run a bitnet myself yet, so I can’t personally vouch for their effectiveness or usefulness.
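
    For the curious, the core idea is just scaling each weight matrix and rounding every weight to -1, 0 or +1. A toy numpy sketch of BitNet-b1.58-style “absmean” quantization (not how BitNet.cpp actually packs or runs the weights):

      import numpy as np

      def ternary_quantize(w: np.ndarray):
          """Toy version: scale by the mean |w|, then round every weight to {-1, 0, +1}."""
          scale = np.mean(np.abs(w)) + 1e-8
          w_q = np.clip(np.round(w / scale), -1, 1)
          return w_q, scale            # approximate reconstruction: w_q * scale

      w = np.random.randn(4, 4).astype(np.float32)
      w_q, scale = ternary_quantize(w)
      print(w_q)                       # only -1, 0, +1 -> ~1.58 bits of information per weight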

  • meowmeow@quokk.au · 14 hours ago

    A budget build is going to run you $4k+ for something like qwen3-coder:30b, and you'll probably be annoyed at the speed if you're used to Codex or Claude.

      • meowmeow@quokk.au · 14 hours ago

        Fast is relative. I'm also commenting on the cost of the entire system, not just the GPU, FYI.

        • infinitevalence@discuss.online · 14 hours ago

          That's fair, but nearly any modern CPU, at least 32 GB of RAM, and a current GPU with 16 GB is plenty. No need for a $4k system when $1k-1.5k will do it.

          If you're willing to Frankenstein things, some of the used AI/ML/mining cards can be a decent value.

          • meowmeow@quokk.au · 12 hours ago

            Yes, but when you compare it to Codex and Claude, it's significantly slower, especially over time. Better crank that AC.

            I think in a few years we will have current cloud levels running pretty efficiently on current computers.