Expected backend cpu but got backend gpu

Hi,

I am new to PyTorch. While implementing a VAE example, I got the error message “expected backend CPU but got backend GPU”. I build the network within a class (nn.Module), and I solved the error by passing the device as a parameter when instantiating the VAE class.

In the forward method, originally I use:

def forward(self, x):
    if x.dim() > 2:
        x = x.view(-1, 28 * 28)  # flatten images to vectors

    batch_size = x.size(0)
    means, log_var = self.encoder(x)
    std = log_var.mul(0.5).exp_()  # std = exp(0.5 * log_var)
    # reparameterization trick: z = mu + eps * std, with eps ~ N(0, I)
    eps = torch.randn([batch_size, self.latent_size])
    z = (eps * std) + means
    recon_x = self.decoder(z)

    return recon_x, means, log_var, z

After passing the device as a parameter, I modified eps and z as follows:

eps = torch.randn([batch_size, self.latent_size]).to(self.device)
z = ((eps * std) + means).to(self.device)
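
In case it helps, this is roughly how I pass the device at instantiation (a minimal sketch; the encoder/decoder definitions and the latent size are placeholders, not my full model):

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, latent_size, device):
        super().__init__()
        self.latent_size = latent_size
        self.device = device  # stored so forward() can move tensors to it
        # self.encoder = ...  (omitted)
        # self.decoder = ...  (omitted)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = VAE(latent_size=20, device=device).to(device)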

Although this way makes the code work correctly on GPU or CPU, I would like to know if it is correct practice or if there is a better way to do it.

Best.

Instead of passing the device argument to your model, you could also just use the current device of another tensor, e.g. x:

eps = torch.randn([batch_size, self.latent_size], device=x.device)
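
Applied to the forward method above, it could look like this (a sketch based on your code; note that the extra .to(self.device) on z also becomes unnecessary, since eps, std, and means already live on the same device):

def forward(self, x):
    if x.dim() > 2:
        x = x.view(-1, 28 * 28)

    batch_size = x.size(0)
    means, log_var = self.encoder(x)
    std = log_var.mul(0.5).exp_()
    # create eps directly on the same device as the input tensor
    eps = torch.randn([batch_size, self.latent_size], device=x.device)
    z = (eps * std) + means  # already on x.device, no extra .to() needed
    recon_x = self.decoder(z)

    return recon_x, means, log_var, z

This way the model has no hard-coded device and works unchanged on CPU or GPU.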

Thanks @ptrblck for the advice!