Moving a tensor to CUDA | with and without Apex

Putting a tensor on CUDA, with 1 GPU only:

import torch
torch.randn(1, 1, 32000).to(device='cuda:0')

In Google Colab, with Apex installed via:

!git clone https://github.com/NVIDIA/apex
!pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
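
(For context, once Apex is built, mixed precision is typically driven through apex.amp roughly as sketched below; the model and optimizer are placeholders for illustration and are not part of the original snippet.)

import torch
from apex import amp

# placeholder model and optimizer, only to make the sketch self-contained
model = torch.nn.Linear(10, 1).to('cuda:0')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level 'O1' patches common ops to run in mixed precision
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')

loss = model(torch.randn(4, 10, device='cuda:0')).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:  # scale the loss before backward
    scaled_loss.backward()
optimizer.step()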

CUDA Version 10.0.130
CUDNN 7.6.5
torch 1.4.0+cu100, installed via:
!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html

I am getting the following error:
RuntimeError: CUDA error: an illegal memory access was encountered

However, the code runs fine with:

CUDA Version 10.0.130
torch 1.4.0
CUDNN 7.6.5

I cannot reproduce this issue in Colab using this code snippet:

!pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
!git clone https://github.com/NVIDIA/apex
!pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
import torch
torch.randn(1, 1, 32000).to(device='cuda:0')

However, if you are fine with updating to the latest nightly binary, you could use the core amp implementation, so that you wouldn’t need to install apex to use mixed-precision training.
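
For reference, a minimal sketch of how the native amp utilities are typically used in later PyTorch releases (the model, optimizer, and data below are placeholders, not part of the original post, and autocast/GradScaler may not yet have existed in the nightly referenced here):

import torch
from torch.cuda.amp import autocast, GradScaler

# placeholder model and optimizer, only to make the sketch self-contained
model = torch.nn.Linear(10, 1).to('cuda:0')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = GradScaler()

data = torch.randn(4, 10, device='cuda:0')
optimizer.zero_grad()
with autocast():  # run the forward pass in mixed precision
    loss = model(data).mean()
scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)
scaler.update()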

I installed the nightly build:

!pip3 install torch_nightly -f https://download.pytorch.org/whl/nightly/cu100/torch_nightly.html --user

However, import torch.cuda.amp throws an error:
AttributeError: module 'torch.cuda' has no attribute 'amp'

How should I import this?