Backprop twice error after updating PyTorch

Hi, I updated my PyTorch version to the latest from source, and my WGAN backpropagation code now gives the error "Trying to backward through the graph a second time…"
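
As far as I understand, this error is normally raised when backward() runs through the same graph twice after its intermediate buffers have been freed. A minimal snippet that reproduces the same message (just an illustration, not my training code):

import torch
from torch import autograd

x = autograd.Variable(torch.randn(3), requires_grad=True)
y = (x ** 2).sum()
y.backward()  # first pass frees the graph's intermediate buffers
y.backward()  # RuntimeError: Trying to backward through the graph a second time

I can't see where my WGAN code would hit the same graph twice, though.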

Here is the code for updating the discriminator:

self.D.zero_grad()
d_real_pred = self.D(real_data)
d_real_err = torch.mean(d_real_pred) #want to push d_real as high as possible
d_real_err.backward(one_neg)

z_input = to_var(torch.randn(self.batch_size, 128))
d_fake_data = self.G(z_input).detach() # detach so the generator gets no gradients from the D update
d_fake_pred = self.D(d_fake_data)
d_fake_err = torch.mean(d_fake_pred) #want to push d_fake as low as possible
d_fake_err.backward(one)

gradient_penalty = self.calc_gradient_penalty(real_data.data, d_fake_data.data)
gradient_penalty.backward()

d_err = d_fake_err - d_real_err + gradient_penalty # for logging; the gradients were already accumulated by the three backward calls above
self.D_optimizer.step()
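
(one and one_neg are the usual WGAN gradient seeds, defined elsewhere in the class; something like:)

one = torch.FloatTensor([1])  # seed for backward(): +1 descends on the score
one_neg = one * -1            # -1 ascends on the score (used for the real batch)
if self.use_cuda:
    one, one_neg = one.cuda(), one_neg.cuda()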

For calculating the gradient penalty:

    def calc_gradient_penalty(self, real_data, fake_data):
        alpha = torch.rand(self.batch_size, 1, 1)
        alpha = alpha.expand_as(real_data)
        alpha = alpha.cuda() if self.use_cuda else alpha
        interpolates = alpha * real_data + ((1 - alpha) * fake_data)

        interpolates = interpolates.cuda() if self.use_cuda else interpolates
        interpolates = autograd.Variable(interpolates, requires_grad=True) # new leaf: we need dD/dx at the interpolates

        disc_interpolates = self.D(interpolates)

        grad_outputs = torch.ones(disc_interpolates.size())
        grad_outputs = grad_outputs.cuda() if self.use_cuda else grad_outputs

        gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates,
                                  grad_outputs=grad_outputs, create_graph=True,
                                  retain_graph=True, only_inputs=True)[0]

        gradient_penalty = self.lamda * ((gradients.norm(2, dim=1).norm(2, dim=1) - 1) ** 2).mean() # two stacked L2 norms collapse to one L2 norm per sample
        return gradient_penalty
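
The create_graph=True flag should be what makes gradient_penalty.backward() legal here, since it tells autograd to record the gradient computation itself. A standalone sketch of that double-backward pattern, separate from my model:

import torch
from torch import autograd

x = autograd.Variable(torch.randn(4, 3), requires_grad=True)
out = (x ** 2).sum()

# create_graph=True keeps the gradient computation differentiable
grads, = autograd.grad(out, x, create_graph=True)

penalty = ((grads.norm(2, dim=1) - 1) ** 2).mean()
penalty.backward()  # fine: this backward runs through the freshly built grad graph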

Any guidance on what might be causing this error?

Hi,

This part of the code looks ok.
Could you share what self.D is?
Also, where exactly is the error raised?

Hey, I think I have the same issue, where self.D is a discriminator net that looks something like this:

class DiscriminatorNet(torch.nn.Module):
    """
    A discriminative neural network
    """
    def __init__(self):
        super(DiscriminatorNet, self).__init__()
        n_out = 1
        
        self.hidden0 = nn.Sequential( 
            nn.Conv1d(in_channels=4,out_channels=100,kernel_size=1)
        )
        
        self.hidden1 = nn.Sequential(
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM),
            ResBlock(DIM)
        )
          
        self.out = nn.Sequential(
            nn.Linear(DIM * L, n_out),
        )

    def forward(self, x):
        # transpose x to match size of layers
        x = x.permute(0,2,1)
        x = self.hidden0(x)
        x = self.hidden1(x)
        x = x.view(-1, DIM * L)
        x = self.out(x)
        return x
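
ResBlock, DIM, and L are defined elsewhere; ResBlock is the standard residual block from the WGAN-GP language-model code, roughly:

class ResBlock(nn.Module):
    def __init__(self, dim):
        super(ResBlock, self).__init__()
        self.res_block = nn.Sequential(
            nn.ReLU(True),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
            nn.ReLU(True),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2),
        )

    def forward(self, input):
        # damped residual connection, as in the WGAN-GP reference code
        return input + 0.3 * self.res_block(input)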

And the error occurs at the line gradient_penalty.backward().