
Thanks for the excellent response! I’m going to give openwebui a try and do some of that trial and error as well - best way to learn!
Have you tried LM Studio as an interface? I haven't tried Open WebUI yet, just LM Studio and text-generation-webui, so I'm not sure if I'm limiting myself by using LM Studio so much. (I'm very much a novice to the tech and don't work in computer science, so I'm trying to balance ease of use with customization.)
You might want to look into RAG (retrieval-augmented generation) and "long-term memory" concepts. I've been playing around with building a self-hosted LLM that has long-term memory (using pre-trained models), which is essentially what you're describing. Also, GPU matters: I'm using an RTX 4070 and it's still noticeably slower than something like in-browser ChatGPT, and since the 4070 is fairly pricey, many home users will be on earlier/slower GPUs.
I'm downloading pre-trained models. I had a bunch of dependency issues getting text-generation-webui to work, and I probably installed some useless cruft in the process, but I did get it working. LM Studio is much simpler but less customizable (or I just don't know how to use everything in it). So yeah, I'm just downloading pre-trained models and running them in these UIs; right now I have deepseek-r1-distill-qwen-7b loaded in LM Studio. I also have the NVIDIA app installed and keep my GPU drivers up to date.
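One thing worth knowing if you want more flexibility without leaving LM Studio: it can serve whatever model you've loaded over a local OpenAI-compatible API, so you can script against it. A rough sketch of what a request looks like — the `localhost:1234` address is the usual default but an assumption here, so check what your LM Studio server actually reports when you start it:

```python
import json
import urllib.request

# Assumed default address for LM Studio's local server -- verify in the
# app when you enable the server feature.
URL = "http://localhost:1234/v1/chat/completions"

# OpenAI-style chat completion payload; the model name matches whatever
# you have loaded in LM Studio.
payload = {
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
    "temperature": 0.7,
}

def ask(url=URL):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Only works while LM Studio's local server is actually running:
# print(ask())
print(json.dumps(payload, indent=2))
```

This is also a nice middle ground for a novice: keep LM Studio's easy UI for loading models, and add customization on top through small scripts like this.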