Bug: [W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware

Yes, this would be expected, as the env variable doesn’t have any effect on the pre-built pip wheels, so you would need to build PyTorch from source with this env var.
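For reference, a minimal sketch of what that could look like, assuming the env var in question is USE_NNPACK (as used later in this thread) and an otherwise standard from-source build:

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
export USE_NNPACK=0          # build without NNPACK support
python setup.py develop      # or: python setup.py install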

Thanks @ptrblck for the clarification! Now I get what you mean.

One more question: I built from source following the steps at GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration. The installation finishes, but it seems CUDA is not “linked” to my installation:

import torch
torch.cuda.is_available()

gives False. Then, running anything that requires CUDA fails with “no gpu device available”.
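For what it’s worth, here is a quick check of what the build reports, run in the same environment the install went into (minimal sketch):

python -c "import torch; print(torch.__version__, torch.version.cuda)"
# torch.version.cuda prints None when the build was compiled without CUDA support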

I then saw your answer at Can I "link" pytorch to already installed CUDA - #7 by ptrblck. Is this related? How should I build from source to “link” CUDA to my build? For your reference, the commands I used to build were:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export USE_NNPACK=0; python setup.py develop

Thanks!

Your local CUDA toolkit should be detected automatically. If that’s not the case, set the location via the CUDA_HOME env var, e.g.:

CUDA_HOME=/usr/local/cuda

in case you are using the default location.
The install log would then also show the detected CUDA toolkit version as well as its location.
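For example, a minimal sketch assuming the toolkit sits in the default /usr/local/cuda location and a clean rebuild (adjust the path otherwise):

export CUDA_HOME=/usr/local/cuda
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py clean        # optional: remove stale build artifacts first
python setup.py develop      # the log should report the detected CUDA version and its location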

Hello, I created a venv, downloaded the latest YOLOv5 version from Ultralytics, and ran pip install -r requirements.txt. It runs OK when I run python detect.py with the parameters.