Training is a huge power sink, but so is inference (i.e. generating the images). You are absolutely spinning up a bunch of silicon that's drawing hundreds of watts for each image that comes out, on top of the impact of training the model.
NewOldGuard@lemmy.ml to Linux@lemmy.ml • Bazzite has gained nearly 10k users in 3 months while other Fedora Atomic distros remain fairly stagnant • English • 6 points • 6 days ago

That's not the case for the newer open-source drivers from Nvidia. They're only compatible with the last few generations of cards, but they're performant, and the only feature they lack is CUDA, to my knowledge. Not talking about nouveau here.
It depends on the model, but I've seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that's not nothing, especially given that most ML image-generation workflows involve batch generation of 9 or 10 images. It's several orders of magnitude less energy-intensive than training and fine-tuning, but it is not nothing by any means.
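For a sense of scale, here's a minimal back-of-envelope sketch of what those figures add up to per batch. The per-image range (8.6 to 100 Wh) and the batch size of ~10 are taken from the comment above; everything else is plain arithmetic, not measured data.

```python
# Back-of-envelope energy math for batch image generation.
# Per-image figures and batch size come from the comment above;
# this is an illustrative calculation, not a benchmark.

WH_PER_IMAGE_LOW = 8.6     # Wh per image, low end cited in the comment
WH_PER_IMAGE_HIGH = 100.0  # Wh per image, high end cited in the comment
BATCH_SIZE = 10            # images per generation run, per typical workflows

for wh_per_image in (WH_PER_IMAGE_LOW, WH_PER_IMAGE_HIGH):
    batch_wh = wh_per_image * BATCH_SIZE
    print(f"{wh_per_image:5.1f} Wh/image -> {batch_wh:6.1f} Wh "
          f"({batch_wh / 1000:.2f} kWh) per batch of {BATCH_SIZE}")
```

At the high end that works out to roughly 1 kWh per batch of 10, about what a ~1,000 W appliance draws running for an hour; at the low end it's still around 86 Wh per batch.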