I have a large dataset of images from which I am extracting features for machine learning. On my notebook, even with parallelism through joblib, it didn't work, so I turned to PyTorch and GPU processing. But when testing some code in IPython, I get this:
AssertionError: Torch not compiled with CUDA enabled
I installed PyTorch using the Mint CLI, and my notebook has an Intel HD Graphics 620. Is it possible to run GPU Torch functions on it? What could be wrong with this installation?
As far as I know, only CUDA is currently supported.
I believe @hughperkins did some work on OpenCL support, but I'm not sure if that's even compatible with an Intel on-board GPU.
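For reference, you can check from Python whether your installed build has CUDA support at all; on a CPU-only build (or without an NVIDIA GPU) the check returns False, and calling .cuda() raises exactly that assertion. A minimal sketch:

import torch

print(torch.cuda.is_available())  # False on a CPU-only build or without an NVIDIA GPU

x = torch.randn(4, 4)
if torch.cuda.is_available():
    x = x.cuda()  # move to the GPU only when one is actually usable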
However, you could try Google Colab with its free GPU support.
I tried it recently and got a K80 GPU. I believe your session will close/restart after 24 hours, so your experiment should be done by then.
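To see which GPU your Colab session was assigned, you can run the following in a cell (this assumes a GPU runtime is enabled in the notebook settings):

# Query the GPU assigned to the Colab session
!nvidia-smi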
Somehow I only managed to install PyTorch version 0.3.0 following these instructions, but I'm sure I was just too impatient to get the most recent version running.
Add these lines to the beginning of your script and you should be good to go:
# Install PyTorch and torchvision (CPU or CUDA 8.0 wheel, depending on the runtime)
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag

# Build the wheel platform tag for the running interpreter, e.g. 'cp36-cp36m'
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())

# Pick the CUDA 8.0 build when the runtime exposes nvidia-smi, otherwise the CPU build
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'

!pip3 install -q http://download.pytorch.org/whl/{accelerator}/torch-0.3.0.post4-{platform}-linux_x86_64.whl torchvision
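Once the install finishes, you can verify which build was picked up (a minimal check; 0.3.0.post4 is the version the wheel above provides):

import torch
print(torch.__version__)          # should report 0.3.0.post4
print(torch.cuda.is_available())  # True when Colab assigned a GPU to the session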
You can run PyTorch without GPU acceleration; the API could be useful for your problem even if you won't get any super-speedup. Even without the GPU it will certainly be faster than Python loops, provided PyTorch is compiled with Intel MKL support etc. (I'm just guessing here; I noticed that it uses MKL, but I haven't done any real pure-CPU benchmarks.)
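As a rough illustration (a toy sketch on a recent PyTorch, not a real benchmark; the "feature" and the dataset shape are made up), a single vectorized tensor op on the CPU already beats looping in Python:

import time
import torch

# Hypothetical dataset: 1000 small RGB images as one tensor
images = torch.rand(1000, 3, 64, 64)

# Per-image Python loop: one .mean() call per image
start = time.time()
loop_feats = torch.stack([img.mean() for img in images])
print('python loop: {:.4f}s'.format(time.time() - start))

# Single vectorized call over the whole batch
start = time.time()
vec_feats = images.view(images.size(0), -1).mean(1)
print('vectorized:  {:.4f}s'.format(time.time() - start))

# Both compute the same per-image mean intensity
print((loop_feats - vec_feats).abs().max())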