Moving tensor to cuda | with and without Apex

I cannot reproduce this issue in Colab using this code snippet:

!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
!git clone https://github.com/NVIDIA/apex
!pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
import torch
torch.randn(1, 1, 32000).to(device='cuda:0')

However, if you are fine with updating to the latest nightly binary, you could use the native amp implementation (torch.cuda.amp) instead, so you wouldn't need to install apex for mixed-precision training.
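
As a rough illustration, here is a minimal sketch of what native mixed-precision training with torch.cuda.amp looks like, assuming a nightly build that already ships autocast and GradScaler; the model, data, and optimizer are placeholders for your own setup:

import torch

# Placeholder model and optimizer; replace with your own.
model = torch.nn.Linear(32000, 10).to('cuda:0')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    data = torch.randn(1, 32000, device='cuda:0')
    target = torch.randint(0, 10, (1,), device='cuda:0')

    optimizer.zero_grad()
    # autocast runs the forward pass in mixed precision
    with torch.cuda.amp.autocast():
        output = model(data)
        loss = torch.nn.functional.cross_entropy(output, target)

    # GradScaler scales the loss to avoid gradient underflow in fp16
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()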