RuntimeError: expected device cpu but got device cuda:0

I’m getting this error with the following code snippet.
    def feature_map_running_mean(self, x):
        running_mean = torch.zeros(x.size(1))
        for i, im in enumerate(x, 1):
            running_mean += im.mean(dim=(1, 2))
        return running_mean
This method is called between the convolutional layers of VGG16. When I run it with a pre-trained VGG16 model, I get the above error. Can anyone please help me solve this problem?

I solved this problem. The issue was with the line

   running_mean = torch.zeros(x.size(1))

which I changed to

  running_mean = torch.zeros(x.size(1)).cuda()

I hope this helps others as well. :grin:
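Hard-coding `.cuda()` works, but it will break again if the model ever runs on the CPU. A more portable sketch (assuming `x` is a feature-map tensor of shape `(N, C, H, W)`; the standalone function below is an illustration, not the original method) is to create the accumulator on the same device as the input:

```python
import torch

def feature_map_running_mean(x):
    # Allocate the accumulator on the same device and dtype as the input,
    # so the function works for both CPU and CUDA tensors.
    running_mean = torch.zeros(x.size(1), device=x.device, dtype=x.dtype)
    for im in x:  # iterate over the batch dimension
        running_mean += im.mean(dim=(1, 2))  # per-channel spatial mean
    return running_mean
```

This way the same code runs unchanged whether the pre-trained VGG16 lives on `cpu` or `cuda:0`.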