Bug: [W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware

Building from source and running the above command now raises this error:

raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 1.

Could the issue be due to incompatibility with the M1 ARM-based system?

The posted stack trace doesn’t include the error message, so you would need to search a little bit further up to see why the build is failing.

This worked for me, Thanks!!

Hey @ptrblck ,
I’m running into the same error, but mine looks different and I don’t really know if this is only one error message.
My OS:
Kali Linux

My error message:

/home/anaconda3/envs/speech/lib/python3.7/site-packages/torch/functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly  specified in a future PyTorch release. Use return_complex=False  to preserve the current behavior or return_complex=True to return  a complex output. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370152014/work/aten/src/ATen/native/SpectralOps.cpp:653.)
  normalized, onesided, return_complex)
/home/anaconda3/envs/speech/lib/python3.7/site-packages/torch/functional.py:516: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370152014/work/aten/src/ATen/native/SpectralOps.cpp:590.)
  normalized, onesided, return_complex)
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.

After this error message is displayed, it seems to enter an endless loop that prints a new row every second, so my program effectively crashes because of this error message.

Is someone able to help me out with this, or does anyone know what causes this error message?
Thanks in advance for any suggestions and help :)

It seems you are hitting the same issues, so you could also try to rebuild PyTorch without NNPACK or ignore the warning (do you know if this is a warning or an error?).

@ptrblck I think it’s more like an error because the program breaks down completely after this message.
Is this one error message? Or are there two?
Because of this:

/home/anaconda3/envs/speech/lib/python3.7/site-packages/torch/functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly  specified in a future PyTorch release. Use return_complex=False  to preserve the current behavior or return_complex=True to return  a complex output. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370152014/work/aten/src/ATen/native/SpectralOps.cpp:653.)
  normalized, onesided, return_complex)
/home/anaconda3/envs/speech/lib/python3.7/site-packages/torch/functional.py:516: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at  /opt/conda/conda-bld/pytorch_1607370152014/work/aten/src/ATen/native/SpectralOps.cpp:590.)
  normalized, onesided, return_complex)

@ptrblck are you still there?

The two new outputs are warnings, so you might want to check them and fix the usage in your code.
If the NNPACK issue is creating an error, please refer to the previous post and try to rebuild PyTorch without NNPACK.
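
For reference, a minimal sketch of how those two calls could be updated (assuming your code currently calls torch.stft without return_complex and still uses torch.rfft; the tensor and variable names here are just placeholders):

import torch

x = torch.randn(16000)

# stft: pass return_complex explicitly; view_as_real restores the old (real, imag) layout if needed
spec = torch.stft(x, n_fft=400, return_complex=True)
spec_real = torch.view_as_real(spec)

# rfft: use the torch.fft module instead of the deprecated torch.rfft
freq = torch.fft.rfft(x)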

Hey @ptrblck, is there any way to install PyTorch and then torchaudio right after? Because when I try to install torchaudio via conda or pip, the torch module I installed from source gets removed.

You could try to use --no-deps while installing torchaudio, but note that pip or conda would only downgrade PyTorch if torchaudio specifies a specific PyTorch version as a dependency, so your installation might break in case your source build is not compatible with the desired torchaudio version.

A proper approach would be to either build torchaudio also from source or to try to install the nightly releases, which should have relaxed requirements.
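
After installing, a quick sanity check (just a sketch; the exact version strings depend on your build) to confirm that the source-built torch was not replaced:

import torch
import torchaudio

print(torch.__version__)       # a source build typically shows a local suffix, e.g. ...a0+git<hash>
print(torchaudio.__version__)  # should be a torchaudio version compatible with the torch build above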

Hi @ptrblck, I tried both of these options to set USE_NNPACK=0, but still got the error message as in the title.

For the first option, I did

USE_NNPACK=0 pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html

to install the latest nightly package; for the second, I added

export USE_NNPACK=0;

right before my previous command.

Is this behavior expected? Thanks!

(This error comes right after the UserWarning that I should use torch.linalg.qr instead of torch.qr. Is it related to the error here?)

Yes, this would be expected, as the env variable doesn’t have any effect on the pre-built pip wheels, so you would need to build PyTorch from source with this env var.
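
If you want to verify what the currently installed binaries were built with, you can print the compile-time configuration (a small sketch; the flags shown depend on how the package was built):

import torch

# Shows the build settings of the installed PyTorch binary, including USE_NNPACK
print(torch.__config__.show())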

Thanks @ptrblck for the clarification! Now I get what you mean.

One more question: I built from source following the steps in the pytorch/pytorch GitHub repository. The installation finishes, but it seems CUDA is not “linked” to my installation:

import torch
torch.cuda.is_available()

gives False. Then, running something that requires CUDA will give no gpu device available.

I then saw your answer at Can I "link" pytorch to already installed CUDA - #7 by ptrblck. Is this related? How should I build from source to “link” CUDA to my build? For your reference, the commands I used to build were

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export USE_NNPACK=0; python setup.py develop

Thanks!

Your local CUDA toolkit should be detected automatically. If that’s not the case, set the location via the CUDA_HOME env var, e.g. to:

CUDA_HOME=/usr/local/cuda

in case you are using the default location.
The install log would then also show the detected CUDA toolkit version as well as its location.
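
After rebuilding, a minimal check (not specific to your setup) to confirm the CUDA toolkit was picked up:

import torch

print(torch.version.cuda)              # CUDA version the binary was compiled with (None for a CPU-only build)
print(torch.cuda.is_available())       # True if a usable GPU and driver are found at runtime
print(torch.backends.cudnn.version())  # detected cuDNN version, if any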

Hello, I created a venv, downloaded the latest YOLOv5 version from Ultralytics, and ran pip install -r requirements.txt. It runs OK when I run python detect.py with the parameters.

Hi,

I also had this problem when training YOLOv8 on a MacBook with an M1 Pro chip, but the training process worked fine. Does this mean I can ignore this warning?

Thanks for your reply,
Yucheng

Yes, you might ignore the warning as NNPACK does not seem to support macOS on ARM as seen here. I don’t know which code path or accelerator library will be picked instead on your Macbook, but if the training continues I assume a fallback is taken.


Thanks!
I didn’t use an accelerator, but simply let it train on my MacBook. If the training can run continuously, does that mean the warning doesn’t affect the results, so I can ignore it?

Yes, that’s what I would assume.
The warning is raised from init_nnpack via _nnpack_available which is used to determine the dispatching path for convolutions as seen here. Based on this logic the NNPACK path will be skipped after raising the warning once and the “slow” path will be picked.
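
As a small illustration (assuming a CPU run on hardware NNPACK doesn’t support), a convolution still executes after the warning is printed once, just via the fallback path:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)

# The first call may print the NNPACK warning once; later calls silently use the non-NNPACK path.
out = conv(x)
print(out.shape)  # torch.Size([1, 8, 32, 32])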


Oh, I see. This warning means that the “fast” NNPACK path can’t be used and only the “slow” path is available, so training may be slower, but it doesn’t affect the training results.
