RuntimeError: "argmax_cuda" not implemented for 'Bool'

I am getting this error when I run the following code:

score_flattened = score_.view(B * NP * SQ, B2 * NS * SQ)
target_flattened = target_.view(B * NP * SQ, B2 * NS * SQ)
target_flattened = target_flattened.argmax(dim=1)

This error only happens on Google Colab, not on my own laptop (which runs Windows 10).
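
For context, a stripped-down sketch of the failing pattern (shapes here are made up; the relevant part is that the target is a torch.bool tensor when argmax is called):

import torch

# Illustrative repro only: stand-ins for score_ / target_ from the snippet above
score_ = torch.randn(4, 6, device="cuda")
target_ = score_ > 0                  # comparison produces a torch.bool tensor
target_.argmax(dim=1)                 # RuntimeError: "argmax_cuda" not implemented for 'Bool'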

Which PyTorch version are you using locally and on Google Colab?
Could you update the Colab installation if it's not the latest release?

I updated the Colab version but still get the same issue. Never mind, I fixed it by recasting my tensor as a float (before it was uint8).
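
In code, that fix amounts to roughly the following (reusing the names from the snippet above):

target_flattened = target_flattened.float()          # was uint8/bool
target_flattened = target_flattened.argmax(dim=1)    # argmax now runs on CUDA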

I hit the same issue for boolean tensors, but not for byte/uint8 tensors.
This is my environment:

Collecting environment information...
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce GTX TITAN X
Nvidia driver version: 418.40.04
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy==1.17.3
[pip] torch==1.3.1
[pip] torchvision==0.4.2
[conda] blas                      1.0                         mkl  
[conda] mkl                       2019.4                      243  
[conda] mkl-service               2.3.0            py37he904b0f_0  
[conda] mkl_fft                   1.0.15           py37ha843d7b_0  
[conda] mkl_random                1.1.0            py37hd6b4f25_0  
[conda] pytorch                   1.3.1           py3.7_cuda10.0.130_cudnn7.6.3_0    pytorch
[conda] torchvision               0.4.2                py37_cu100    pytorch

I don't have the issue with the PyTorch 1.2 / CUDA 10.0 combination.
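
To illustrate the dtype difference reported here (a sketch, not actual output from this setup):

import torch

x = torch.zeros(3, 5, device="cuda")
mask = x > 0                          # torch.bool
# mask.argmax(dim=1)                  # bool: raises "argmax_cuda" not implemented for 'Bool'
mask.byte().argmax(dim=1)             # uint8/byte: reported to work on the same setup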

Recast your tensor to a floating-point dtype:

my_tensor = my_tensor.float()   # or my_tensor.double()
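
Any dtype that has a CUDA argmax kernel should do here; casting to .float() as above, or to .long() if you want to keep integer semantics, are both reasonable workarounds.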

