Do you use it to help with schoolwork or work? Maybe to help you with coding projects, or to teach you how to do something?

What are your preferred models and why?

  • calmluck9349@infosec.pub · 2 days ago

    I employ this technique to embellish my email communications, thereby enhancing their perceived authenticity and relatability. Admittedly, I am not particularly adept at articulating my thoughts in comprehensive, well-structured sentences. I tend to favor a more primal, straightforward cognitive style—what one might colloquially refer to as a “meat-and-potatoes” or “caveman” approach to thinking. Ha.

  • FrankLaskey@lemmy.ml · edited 2 days ago

    Mostly to help quickly pull together and summarize/organize information from web searches done via Open WebUI.

    Also to edit copy or brainstorm ideas for messaging, scripts, etc.

    Sometimes to have discussions around complex topics to make sure I understand them.

    My favorite model to run locally right now is easily Qwen3-30B-A3B. It can do reasoning or quicker non-reasoning responses, and it runs very well on my RTX 3090 with 24 GB of VRAM. Plus, because it has an MoE architecture with only about 3B parameters active during inference, it’s lightning fast.
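
    For anyone curious, here’s a minimal sketch of what querying a model like that looks like through Ollama’s HTTP API (assuming Ollama is serving on its default port and the qwen3:30b-a3b tag has been pulled; Qwen3’s /no_think soft switch is one documented way to skip the reasoning phase for a quick answer):

    ```python
    # Minimal sketch: query a local Qwen3-30B-A3B through Ollama's HTTP API.
    # Assumes Ollama is running on its default port (11434) and the model
    # has been pulled first, e.g. with `ollama pull qwen3:30b-a3b`.
    import requests

    def ask(prompt: str, think: bool = True) -> str:
        # Qwen3 supports a "/no_think" soft switch in the prompt that
        # skips the reasoning phase for a faster, direct answer.
        if not think:
            prompt += " /no_think"
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "qwen3:30b-a3b", "prompt": prompt, "stream": False},
            timeout=300,
        )
        r.raise_for_status()
        return r.json()["response"]

    print(ask("Why is the sky blue?", think=False))
    ```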

    • brendansimms@lemmy.world · 1 day ago

      Have you tried LM Studio as an interface? I haven’t tried Open WebUI yet, just LM Studio and text-generation-webui, so I’m not sure if I’m limiting myself by using LM Studio so much. (I’m very much a novice with this tech and don’t work in computer science, so I’m trying to balance ease of use with customization.)

      • FrankLaskey@lemmy.ml · 1 day ago

        It sounds like we’re on similar levels of technical proficiency. I’ve learned a lot by reading and going down rabbit holes on how LLMs work and how to troubleshoot and even optimize them to an extent, but I’m certainly not a computer engineer or programmer.

        I started with LM Studio before Ollama/Open WebUI, and it does have some good features and an overall decent UI. I switched because OWUI seems to have more extensibility with tools and functions, and I wanted something I could run as a server and use from my phone and laptop elsewhere. OWUI has been great for that, although setting up remote access to the server over the web took a lot of trial and error. The OWUI team also develops and updates the software very quickly, which is great.

        I’m not familiar with text-generation-webui, but at this point I’m not really wanting for much more out of a setup than my Docker stack with Ollama and OWUI.
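
        For anyone wanting to try a similar stack, here’s a sketch of a docker-compose file along the lines of the two projects’ documented examples (service names, ports, and volume names here are illustrative, not the only way to wire it up):

        ```yaml
        # Sketch of an Ollama + Open WebUI stack, following the projects'
        # documented compose examples. Adjust ports and volumes to taste;
        # GPU passthrough needs extra runtime config not shown here.
        services:
          ollama:
            image: ollama/ollama
            volumes:
              - ollama:/root/.ollama
          open-webui:
            image: ghcr.io/open-webui/open-webui:main
            ports:
              - "3000:8080"   # Open WebUI reachable at http://localhost:3000
            environment:
              - OLLAMA_BASE_URL=http://ollama:11434
            volumes:
              - open-webui:/app/backend/data
            depends_on:
              - ollama
        volumes:
          ollama:
          open-webui:
        ```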

        • brendansimms@lemmy.world · 1 day ago

          Thanks for the excellent response! I’m going to give Open WebUI a try and do some of that trial and error as well - best way to learn!

  • seathru@lemmy.sdf.org · 2 days ago

    I currently don’t, but I am Ollama-curious. I would like to feed it a bunch of technical manuals and then be able to ask it to recite specs or procedures (with optional links to its source info for sanity checking). Is this where I need to be looking/learning?

    • brendansimms@lemmy.world · 1 day ago

      You might want to look into RAG (retrieval-augmented generation) and “long-term memory” concepts. I’ve been playing around with creating a self-hosted LLM that has long-term memory (using pre-trained models), which is essentially what you’re describing. Also, GPU matters: I’m using an RTX 4070, and it’s noticeably slower than something like in-browser ChatGPT, but I know the 4070 is kinda pricey, so many home users might have earlier/slower GPUs.
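
      Here’s a minimal sketch of the RAG idea against a local Ollama server (the embedding model nomic-embed-text is assumed pulled, the chunks and source references are made up for illustration, and a real setup would chunk actual manuals and use a vector DB like Chroma or Qdrant instead of a Python list):

      ```python
      # Minimal RAG sketch: embed manual chunks, retrieve the closest ones,
      # and stuff them into the prompt so the answer can cite its source.
      # Assumes a local Ollama server with an embedding model pulled,
      # e.g. `ollama pull nomic-embed-text`. The data below is illustrative.
      import requests

      OLLAMA = "http://localhost:11434"

      def embed(text: str) -> list[float]:
          r = requests.post(f"{OLLAMA}/api/embeddings",
                            json={"model": "nomic-embed-text", "prompt": text})
          r.raise_for_status()
          return r.json()["embedding"]

      def cosine(a: list[float], b: list[float]) -> float:
          dot = sum(x * y for x, y in zip(a, b))
          return dot / ((sum(x * x for x in a) ** 0.5)
                        * (sum(x * x for x in b) ** 0.5))

      # (chunk, source) pairs; in practice, split real manuals into chunks.
      chunks = [
          ("Torque spec for the M8 head bolts is 25 Nm.", "engine-manual p.12"),
          ("Replace the coolant every 40,000 km.", "service-schedule p.3"),
      ]
      index = [(embed(text), text, src) for text, src in chunks]

      def answer(question: str, k: int = 1) -> str:
          q = embed(question)
          top = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)[:k]
          context = "\n".join(f"[{src}] {text}" for _, text, src in top)
          prompt = (f"Answer using only this context and cite the source:\n"
                    f"{context}\n\nQuestion: {question}")
          r = requests.post(f"{OLLAMA}/api/generate",
                            json={"model": "qwen3:30b-a3b",  # any local chat model
                                  "prompt": prompt, "stream": False})
          r.raise_for_status()
          return r.json()["response"]

      print(answer("What is the head bolt torque?"))
      ```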

      • Styxia@lemmy.world · 12 hours ago

        How have you been making those models? I have a 4070, and doing it locally has been a dependency hellscape; I’ve been tempted to rent cloud GPU time just to save the hassle.
