From Newsgroup: comp.ai
Greetings.
On 2024-11-28 13:05, Carl Fink wrote:
> I have a system with a very high-end CPU (for a workstation), a 12th
> generation Intel i9, but a low-end GPU (Radeon RX470 with 6 gb of RAM).
> Is it possible to do any sort of AI training on that setup, say if
> you're willing to process at 5% of an nVidia GPU's speed? I just want
> to play with it, see if it's worth investing more money, at this
> point. I don't mind, if, say, training a LORA takes an hour per photo,
> or even 6 hours per photo.
Yes, it's absolutely possible to train generative AI on a CPU rather
than a GPU, but whether this is feasible depends entirely on what sort
of model you are expecting to end up with, how much training time
you're willing to tolerate, and how much RAM you have available. (I
say CPU because most of the common training tooling targets NVIDIA's
CUDA, and ROCm support for an older Radeon like the RX 470 is patchy,
so in practice you'd probably be running on the CPU anyway.) Without
further details on your requirements it's difficult to make any
specific recommendations.

I've seen a few reports comparing CPU and GPU training of LoRA models
for Stable Diffusion, for example, that indicate that CPU training can
require four times more memory and/or forty times more time. If you
don't already have the requisite GPU hardware and don't want to buy it
yourself, it might be economical to rent the processing power from a
cloud service.
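
If you just want to confirm that your machine can handle this kind of
training at all before spending any money, something like the
plain-PyTorch sketch below might help. It is not Stable Diffusion; it
merely attaches a LoRA-style low-rank adapter to a frozen linear layer
and trains it on made-up data, falling back to the CPU when no usable
GPU is found (a ROCm build of PyTorch exposes AMD cards through
torch.cuda as well). The model, data, rank and learning rate are all
placeholders I've chosen for illustration, not anything from a real
recipe.

# Toy LoRA-style training loop in plain PyTorch (hypothetical model,
# data and hyperparameters; only meant to show the loop runs on a CPU).
import torch
import torch.nn as nn

# Use a GPU if PyTorch can see one (a ROCm build exposes AMD cards via
# torch.cuda too); otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update."""
    def __init__(self, base, rank=4, alpha=8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)   # freeze the original weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

model = LoRALinear(nn.Linear(256, 256)).to(device)
optim = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy training pairs; substitute your real data here.
x = torch.randn(64, 256, device=device)
y = torch.randn(64, 256, device=device)

for step in range(200):
    optim.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optim.step()

print(f"trained on {device}, final loss {loss.item():.4f}")

Timing a few hundred steps of something like this on your own box and
comparing against published GPU figures will give you a rough idea of
the slowdown you'd actually be facing.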
Regards,
Tristan
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Tristan Miller
Free Software developer, ferret herder, logologist
https://logological.org/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
--- Synchronet 3.21a-Linux NewsLink 1.2