Porting a PyTorch VAE to CUDA

I am trying to port a PyTorch VAE implementation to CUDA in a Docker environment (nvidia-docker 2.0), but I'm getting errors after appending .cuda() to the encoder, decoder, and discriminator:

File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
File "/opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 53, in forward
    return F.linear(input, self.weight, self.bias)

Any help would be greatly appreciated!

Your data X and Y are not on the GPU. Moving the modules with .cuda() only transfers the parameters; the input tensors have to be moved to the same device as well, otherwise F.linear fails with a CPU/GPU tensor-type mismatch.
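A minimal sketch of the fix, assuming X is an input batch fed to a linear layer (the names and shapes here are hypothetical, not from the original code). The model's weights and the input must live on the same device:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is visible, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)   # same effect as model.cuda() on a GPU box

X = torch.randn(8, 4)                # tensors are created on the CPU by default
Y = model(X.to(device))              # move the batch onto the model's device
print(Y.shape)                       # torch.Size([8, 2])
```

The same pattern applies to targets and any other tensors used in the loss.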

I had the wrong syntax on the GPU -> host copy via .cpu(); with that fixed, the model is ported over and running on the GPU.
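For reference, the usual shape of that GPU -> host copy (a generic sketch, not the poster's actual code): .cpu() returns a CPU copy of the tensor, after which .numpy() works.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t = torch.randn(3, 3, device=device)

# GPU -> host: copy back to CPU memory first, then view as a NumPy array.
# Calling .numpy() directly on a CUDA tensor raises an error.
arr = t.cpu().numpy()
print(arr.shape)  # (3, 3)
```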

Thanks Simon!