Error in loss.backward() function

I am trying to implement WGAN-GP (WGAN with gradient penalty) in PyTorch, following the implementation given here. It uses PyTorch's autograd.grad function, so I had to uninstall PyTorch and reinstall it from source. Part of the code is attached below:

while j < d_iterations and i < len(dataloader):
    j += 1
    input, objects = next(data_iter)  # data_iter = iter(dataloader)
    i += 1

    batch_size = input.size(0)
    errD_real = self.modelD.forward(self.input)

    # train with fake
    self.noise.resize_(batch_size, self.nz, 1, 1).normal_(0, 1)
    fake = self.modelG.forward(self.noise)
    errD_fake = self.modelD.forward(fake.detach())

    # train with gradient penalty
    gradient_penalty = self.calc_gradient_penalty(self.modelD, self.input, fake, batch_size)

    errD = errD_fake - errD_real + gradient_penalty
    errD.backward()

def calc_gradient_penalty(self, netD, real_data, fake_data, batch_size):
    alpha = torch.rand(batch_size, 1, 1, 1)
    alpha = alpha.expand(real_data.size())
    alpha = alpha.cuda() if self.cuda else alpha

    interpolates = alpha * real_data + ((1 - alpha) * fake_data)

    if self.cuda:
        interpolates = interpolates.cuda()
    interpolates = Variable(interpolates, requires_grad=True)

    disc_interpolates = netD.forward(interpolates)

    # autograd.grad needs grad_outputs because disc_interpolates is not a scalar
    grad_outputs = torch.ones(disc_interpolates.size())
    if self.cuda:
        grad_outputs = grad_outputs.cuda()

    gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                              grad_outputs=grad_outputs, create_graph=True,
                              retain_graph=True, only_inputs=True)[0]

    gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * self.gp_lambda
    return gradient_penalty
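For reference, here is a minimal, self-contained sketch of the same penalty computation that runs on a current PyTorch build (no Variable wrapper needed). The tiny critic and the tensor shapes are made up purely for illustration; the key point is passing grad_outputs so autograd.grad works on a non-scalar critic output, and create_graph=True so the penalty itself can be backpropagated:

```python
import torch
import torch.nn as nn

def gradient_penalty(netD, real, fake, gp_lambda=10.0):
    batch_size = real.size(0)
    # one random mixing coefficient per sample, broadcast over the remaining dims
    alpha = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_out = netD(interpolates)
    # grad_outputs is required because d_out is a batch of scores, not a scalar
    grads = torch.autograd.grad(
        outputs=d_out, inputs=interpolates,
        grad_outputs=torch.ones_like(d_out),
        create_graph=True, retain_graph=True, only_inputs=True)[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean() * gp_lambda

# hypothetical critic and data, just to show the call pattern
netD = nn.Sequential(nn.Flatten(), nn.Linear(8, 1))
real = torch.randn(4, 2, 2, 2)
fake = torch.randn(4, 2, 2, 2)
gp = gradient_penalty(netD, real, fake)
gp.backward()  # double backward succeeds for ops that support create_graph
```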

However, when I run my code, I get an error at the errD.backward() call. The error looks like this:

Traceback (most recent call last):
  File "", line 59, in <module>
    loss_train = trainer.train(epoch, loader_train)
  File "/home/rahul/CANVAS/pytorchnet/", line 123, in train
  File "/home/rahul/anaconda2/lib/python2.7/site-packages/torch/autograd/", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/rahul/anaconda2/lib/python2.7/site-packages/torch/autograd/", line 98, in backward
    variables, grad_variables, retain_graph)
RuntimeError: Expected a Tensor of type CPUFloatType but found an undefined Tensor for argument #9 'save_mean'

I couldn’t find any reference to this error anywhere.

Any help would be much appreciated.