M1 macOS 12.3 torchvision.ops.nms error

I installed the latest torch and torchvision nightlies per the online instructions, eager to test-drive M1 GPU support. A quick check with torch.backends.mps.is_available() returns True (yeah!). But when running a YOLOX model, the run crashes with the following error:

ox_files/detector.py", line 321, in _postprocess
    nms_out_index = torchvision.ops.nms(
   File "/usr/local/Caskroom/miniforge/base/envs/pt/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 40, in nms
   File "/usr/local/Caskroom/miniforge/base/envs/pt/lib/python3.8/site-packages/torchvision/extension.py", line 33, in _assert_has_ops
    raise RuntimeError(
2022-05-30 11:07:08 root  ERROR:  RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.

NB: The above code is CI/CD-tested and runs perfectly on M1 macOS (torch CPU), Intel macOS, Windows, and Linux, so it is not a code error.

My M1 MacBook Air specs:

  • macOS 12.3.1
  • python 3.8
  • platform() = macOS-12.3.1-arm64-arm-64bit
  • torch==1.13.0.dev20220529
  • torchvision==0.14.0a0+d592925

From the error messages, it seems that either

  1. torchvision.ops.nms is buggy, or
  2. the torch and torchvision versions are incompatible.

Does anyone know whether it is 1 (in which case I will postpone my M1 testing until a later build) or 2? If it is 2, how do I obtain or build a compatible torchvision version? (I would appreciate some detailed instructions/commands here.)
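As the error message suggests, the first thing to verify is that both nightlies come from the same day's build. A minimal sketch, using the version strings reported above (the date-suffix convention for torch nightlies is an observation, not an official guarantee):

```python
# Version strings as reported in this post; in a live session you would
# obtain them via torch.__version__ and torchvision.__version__.
torch_version = "1.13.0.dev20220529"   # value of torch.__version__
tv_version = "0.14.0a0+d592925"        # value of torchvision.__version__

# Nightly torch builds carry a YYYYMMDD date suffix after "dev";
# both packages should come from the same (or a very close) nightly.
nightly_date = torch_version.split("dev")[-1]
print(nightly_date)  # -> 20220529
```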


We very recently added the torchvision nightly to avoid this. Can you try to uninstall and reinstall both packages now?

Hi @albanD,

Thanks for the update. I have uninstalled and reinstalled the nightly builds (via pip). The C++ ops error is gone, but it has been replaced by a type error:

   File "/Users/dotw/src/ongtw/PeekingDuck/peekingduck/pipeline/nodes/model/yoloxv1/yolox_files/model.py", line 309, in forward
    outputs_tensor = self.decode_outputs(outputs_tensor, xin[0].type())
   File "/Users/dotw/src/ongtw/PeekingDuck/peekingduck/pipeline/nodes/model/yoloxv1/yolox_files/model.py", line 339, in decode_outputs
    grids_tensor = torch.cat(grids, dim=1).type(dtype)
2022-06-03 11:31:47 root  ERROR:  ValueError: invalid type: 'torch.mps.FloatTensor'

Looks like the torch.mps.FloatTensor type is not understood by torch.cat; perhaps some internals need updating?

Current torch versions:

  • torch==1.13.0.dev20220602
  • torchvision==0.14.0a0+f9f721d

Thanks for the report.

The type() method is indeed not supported. In general, we recommend not using it and instead specifying the device/dtype explicitly.
I opened an issue to track this: Add type() support for mps backend · Issue #78929 · pytorch/pytorch · GitHub
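As a sketch of that recommendation: instead of passing the string from xin[0].type() into .type(dtype), pass the source tensor's dtype and device to .to(), which works on any backend. The tensor shapes below are made up for illustration; decode_outputs here only stands in for the function in the traceback:

```python
import torch

x = torch.randn(2, 3)  # stands in for xin[0] in the YOLOX code

def decode_outputs(grids, src):
    # Old, MPS-breaking pattern (from the traceback):
    #   grids_tensor = torch.cat(grids, dim=1).type(dtype)  # dtype = src.type()
    # Portable replacement: specify dtype and device explicitly.
    return torch.cat(grids, dim=1).to(dtype=src.dtype, device=src.device)

grids = [torch.zeros(2, 3), torch.ones(2, 3)]
out = decode_outputs(grids, x)
print(out.shape)  # torch.Size([2, 6])
```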