

I will look into this one. First impressions look interesting, thanks for mentioning it!


I will check how that one works. I was not planning to have another machine just for dashboarding, but maybe there are ways to host this as a VM or LXC and do it that way.


Those are good questions. I would prefer to have this info somewhere in the Proxmox web interface itself. If that is not feasible for any number of reasons, the second-best option is a separate service that serves a dashboard page. I did forget about Grafana, but I can look into it.
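To be a bit more concrete about what I mean by "this info": something like the sketch below is the kind of data I would want surfaced, pulled straight from the Proxmox API, which a dashboard page or a Grafana data source could then display. The host address and API token are placeholders, not my actual setup, so treat it as a rough sketch.

```python
# Rough sketch: pull basic node metrics from the Proxmox API with an API token.
# Host address and token below are placeholders for illustration only.
import requests

PVE_HOST = "https://192.168.1.10:8006"                  # placeholder address
TOKEN = "PVEAPIToken=root@pam!dashboard=xxxxxxxx-xxxx"  # placeholder token

resp = requests.get(
    f"{PVE_HOST}/api2/json/nodes",
    headers={"Authorization": TOKEN},
    verify=False,  # typical self-signed Proxmox cert; use a proper CA in practice
    timeout=10,
)
resp.raise_for_status()

# Print CPU and memory usage per node; a dashboard would chart these instead.
for node in resp.json()["data"]:
    cpu = node.get("cpu", 0) * 100                     # fraction -> percent
    mem = node.get("mem", 0) / node.get("maxmem", 1) * 100
    print(f'{node["node"]}: cpu {cpu:.1f}%  mem {mem:.1f}%')
```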
I do not use online models in general, but my needs are also much smaller. The most I use my local Ollama model for is translations (roughly like the example at the end of this reply). I am always interested in seeing more focused models we can use on lower-end hardware.
Did you try this workflow with local models? If so, in your experience, which models work better for it?
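For reference, my translation use is nothing fancier than a call like this against the local Ollama API. The model name and the example text are just placeholders, so take it as a minimal sketch rather than my exact setup.

```python
# Minimal sketch: translate a piece of text with a local model via Ollama's HTTP API.
# Model name and example input are placeholders.
import requests

def translate(text: str, target_lang: str = "English") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # default Ollama endpoint
        json={
            "model": "llama3.1:8b",              # whatever small model fits the hardware
            "prompt": f"Translate the following text to {target_lang}. "
                      f"Reply with only the translation:\n\n{text}",
            "stream": False,                     # return the full answer as one JSON object
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

# Example usage with placeholder text.
print(translate("Obrigado pela sugestão, vou experimentar."))
```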


Fixed in the BIOS, but from what I see, the DBX part is still missing on some models. They are working on it, at least.
I tried a couple of times with gen AI and local Llama models, but somehow it does not work that well for me.
But at the same time I have a 9070 XT, so it is not exactly an optimal setup.
So it is not on this rack. OK, because for a second I was thinking you were somehow able to run AI tasks on some sort of small cluster.
Nowadays I have a 9070 XT in my system. I have just dabbled with this, but so far I have not been that successful. Maybe I will read more into it to understand it better.
I have a question about the AI usage here: how do you do it? Every time I see AI usage mentioned, some sort of 4090 or 5090 comes up, so I am curious what kind of AI workloads you can run on this.
I did know about Zabbix before, and I actually tried to install it using the Proxmox helper scripts page. Somehow, by the end, I got a blank page. Hence I made this post to look at more alternatives.
I do know Zabbix is very well regarded in this area; I just did not manage to install it successfully on my previous attempt.