Using GPU for image feature extraction

I have a big dataset of images, from which I am extracting features for machine learning. In my notebook, even with parallelism through joblib…it didn't work. So I came across PyTorch and GPU processing. But when testing some code in IPython, I get this:

AssertionError: Torch not compiled with CUDA enabled

I installed PyTorch using the Mint CLI, and my notebook has an Intel HD Graphics 620. Is it possible to run GPU Torch functions on it? What could be wrong with this installation?

Thank you so much for any directions

CUDA works only on NVIDIA hardware, though there may be some libraries that translate CUDA code to run on CPU cores (not the iGPU).

I think you should run the code (containing CUDA ops) on an NVIDIA GPU.
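If your script has to run on machines without an NVIDIA GPU, you can also make it fall back to the CPU instead of crashing. A minimal sketch, assuming a recent PyTorch (0.4+); the tensor shape is just a placeholder:

import torch

# Use CUDA when it is available, otherwise fall back to the CPU
# (e.g. on a laptop with only an Intel iGPU).
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Running on:', device)

x = torch.randn(8, 3, 224, 224, device=device)  # dummy image batch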


Oh, man…thanks, I will follow this direction

There is no emulator for this, right? Only by using another computer altogether?

As far as I know, currently only CUDA is supported.
I believe @hughperkins did some work on OpenCL support, but I'm not sure if that's even compatible with an Intel on-board GPU.

However, you could try Google Colab with its free GPU support.
I tried it recently and got a K80 GPU. I believe your session will close/restart after 24 hours, so your experiment should be done by then.
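If you do try Colab, you can quickly verify that a GPU runtime is attached before installing anything. A small sketch (remember to select a GPU runtime first under Runtime -> Change runtime type):

import torch

# True only if the runtime has a CUDA device attached
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # e.g. 'Tesla K80' on the free tier at the time of writing
    print(torch.cuda.get_device_name(0))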

Somehow, I just managed to install PyTorch version 0.3.0 following some instructions, but I'm sure I was just too impatient to get the most recent version running. :wink:

Add these lines to the beginning of your script and you should be good to go:

# Install PyTorch and torchvision in Colab
from os import path
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag

# Build the wheel platform tag for the current Python interpreter,
# e.g. 'cp36-cp36m'
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())

# Pick the CUDA 8.0 wheel if a GPU runtime is attached, else the CPU wheel
accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'

!pip3 install -q http://download.pytorch.org/whl/{accelerator}/torch-0.3.0.post4-{platform}-linux_x86_64.whl torchvision

You can run PyTorch without GPU acceleration; the API could be useful for your problem even if you won't get any super-speedup. Even without the GPU it will certainly be faster than Python loops, if PyTorch is compiled with Intel MKL support etc. (I'm just guessing; I noticed that it uses MKL, but I haven't done any real pure-CPU benchmarks.)
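For example, a pretrained torchvision model works fine as a pure-CPU feature extractor. A minimal sketch, assuming a recent PyTorch/torchvision; the model choice and batch shape are placeholders, not taken from your setup:

import torch
import torchvision.models as models

# Pretrained ResNet-18 as a fixed feature extractor, running on the CPU.
model = models.resnet18(pretrained=True)
model.fc = torch.nn.Identity()  # drop the classifier head, keep the 512-d features
model.eval()

batch = torch.randn(16, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    features = model(batch)  # shape: (16, 512)
print(features.shape)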


You could be right about MKL, I will try that too.