Using GPU which does not support CUDA

Hi, I am using a MacBook Pro with Intel Iris Pro graphics, which is not CUDA-compatible. Is there any way I can use my existing GPU to speed up PyTorch computation? Currently, NumPy seems slightly faster than PyTorch, as evidenced by these matrix multiplication results:

Matrix size = 10000x10000
Numpy time = 14.2417030334
Torch time = 14.5167078972
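
For context, a comparison of this sort can be reproduced with a minimal script like the one below (a simplified sketch, not the exact code behind the numbers above; timings will vary with the BLAS backend and thread count):

```python
# Sketch: time a single 10000x10000 float64 matrix multiplication
# in NumPy and in PyTorch, both on the CPU.
import time

import numpy as np
import torch

n = 10000
a_np = np.random.rand(n, n)
b_np = np.random.rand(n, n)

start = time.time()
a_np @ b_np
print("Numpy time =", time.time() - start)

# Reuse the same data so both libraries multiply identical matrices.
a_t = torch.from_numpy(a_np)
b_t = torch.from_numpy(b_np)

start = time.time()
a_t @ b_t
print("Torch time =", time.time() - start)
```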

Any help would be greatly appreciated!
– Sourya

No, there is currently no way to use a non-CUDA GPU with PyTorch.

Thanks for your answer.

I have a lot of existing NumPy code that I am considering porting to a different library that can use my non-CUDA GPU. Since PyTorch cannot use it, is there any way it can still be faster than NumPy for deep learning computations?

My code trains a neural network with convolutional and sparse MLP layers on datasets such as MNIST and CIFAR.

If you are doing deep learning, PyTorch will be way easier than trying to do everything in raw numpy. And unless you are using really tiny networks, you will want to use a GPU. There might be some deep learning library out there that supports non-CUDA GPUs, but currently NVIDIA GPUs are so dominant in deep learning that you should just get one or you will be fighting an uphill battle.
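
To give a sense of what "way easier" means: a small conv + MLP classifier like the one described above is only a few lines in PyTorch and runs fine on the CPU, with autograd handling all the gradient code you would otherwise write by hand in NumPy. This is just an illustrative sketch with made-up layer sizes, not the poster's actual network:

```python
# Hedged sketch: a small conv + MLP classifier for 28x28 MNIST-style images,
# running entirely on the CPU. Layer sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                             # -> 16x14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 128),                # MLP head
    nn.ReLU(),
    nn.Linear(128, 10),                          # 10 classes
)

x = torch.randn(64, 1, 28, 28)                   # dummy batch of images
y = torch.randint(0, 10, (64,))                  # dummy labels
loss = F.cross_entropy(model(x), y)
loss.backward()                                  # autograd computes all gradients
```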


DirectML is available for both TensorFlow and PyTorch; a web search will turn up the docs.
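
For reference, the PyTorch backend is exposed through the torch-directml package, which targets DirectX 12 GPUs on Windows and WSL. A minimal usage sketch, assuming the package is installed (pip install torch-directml):

```python
# Sketch: move a tensor onto a DirectML device and run an op there.
import torch
import torch_directml

dml = torch_directml.device()        # default DirectX 12 GPU
x = torch.randn(2, 2).to(dml)
y = torch.randn(2, 2).to(dml)
print(x @ y)                         # computed on the DirectML device
```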