PyTorch on AMD CPU and Nvidia RTX 3080

Hello

I have to train deep learning models on videos that require time-consuming augmentations.

I found that most of the training and testing time is spent on preprocessing on the CPU.

Currently, I have an Intel 10700K processor, and I use an Nvidia RTX 3080 GPU for training and testing with PyTorch.

My questions are:

If I switch to an AMD processor, does that affect PyTorch in any way?

Can I train on the Nvidia GPU while using an AMD processor for the video augmentations?

Many Thanks
Hussein

The main bottleneck here is actually Python itself and how it manages the CPU (the GIL, for example). If you can parallelize your calculations, I would advise you to implement multiprocessing for the CPU part. The 10700K is a rather fast 8-core CPU, so switching to AMD won't give you a significant speed improvement (unless you switch to something fast with a lot of cores, like a 5950X or a Threadripper).
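Just to illustrate what that could look like: below is a minimal sketch using PyTorch's built-in DataLoader workers, which is one common way to get multiprocessing for the data pipeline. The `VideoDataset` and `slow_augment` here are made-up placeholders standing in for whatever loading and augmentation you actually run, not anything from this thread.

```python
import torch
from torch.utils.data import Dataset, DataLoader


def slow_augment(clip):
    # Stand-in for the expensive CPU-side video augmentations.
    for _ in range(10):
        clip = torch.nn.functional.avg_pool3d(
            clip.unsqueeze(0), kernel_size=3, stride=1, padding=1
        ).squeeze(0)
    return clip


class VideoDataset(Dataset):
    """Hypothetical dataset: yields one augmented clip per index."""

    def __init__(self, num_clips=64):
        self.num_clips = num_clips

    def __len__(self):
        return self.num_clips

    def __getitem__(self, idx):
        clip = torch.randn(3, 16, 112, 112)  # fake clip: C x T x H x W
        return slow_augment(clip)


if __name__ == "__main__":
    # num_workers > 0 spawns separate worker processes, so the augmentations
    # run in parallel on CPU cores and sidestep the GIL while the GPU trains.
    loader = DataLoader(
        VideoDataset(),
        batch_size=4,
        num_workers=8,     # roughly one per physical core
        pin_memory=True,   # faster host-to-GPU transfer
    )
    for batch in loader:
        batch = batch.cuda(non_blocking=True)  # hand off to the RTX 3080
        # ... forward/backward pass here ...
```

With `num_workers=0` everything runs in the single main process; raising it is usually all it takes to keep the GPU fed.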

Thanks @my3bikaht for your help. Yes, I agree with you. After doing some experiments, I found that the CPU is fast and can handle the operations efficiently.
The bottleneck is using a single process to do all the work.
Using multiple processes decreases the running time and uses the resources effectively.

Thank you very much man