One of the variables needed for gradient computation has been modified by an inplace operation:

I’m training a GAN, and the training code is here:

            while images is not None:
                outputs = self.model['G'](images)            # generator forward pass
                self.model['D'].freeze(requires_grad=True)
                po = self.model['D'](outputs.detach())       # discriminator on the (detached) fake images
                pg = self.model['D'](images)                 # discriminator on the real images
                loss1 = self.criteron['adv'](po, False)      # adversarial loss for fake
                loss2 = self.criteron['adv'](pg, True)       # adversarial loss for real
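
After computing the two losses, the step backwards them and updates the discriminator, roughly like this (a rough sketch from memory; the summed backward and the optimizer key are just illustrative):

                self.optimizer['D'].zero_grad()
                loss_d = loss1 + loss2       # keeping only one of the two losses avoids the error
                loss_d.backward()            # the inplace-modification error is raised during backward
                self.optimizer['D'].step()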


But I got this error message, which confuses me:
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256]] is at version 4; expected version 3 instead.

I tried several things, and I removed the inplace flag from every ReLU.
The code works fine when only loss1 is calculated and backwarded, or only loss2, but it reports this error when I put both into the training code.
I also found that when I feed both the ground truth and the generator output into the discriminator, loss1 and loss2 can still be calculated; the critical line that makes loss1 unable to backward is this one:

pg=self.model['D'](images)

Removing this line, my loss1 works fine.
But my discriminator network just convs and maxpools the image, and there is no in-place op.
I also tried replacing this line with:

pg=self.model['D'](images.clone().detach())

but it still doesn’t work.
What’s wrong?

Here is a new thing I found:
I changed the order of these two lines,

pg=self.model['D'](images)
po=self.model['D'](outputs.detach())

and then loss1 works. But now loss2 can’t backward.
It seems the two conflict with each other.

Here is my discriminator network code:

import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_dim=3, inner_dim=64, norm=True):
        super(Discriminator, self).__init__()

        def getSpectralNormedModule(module, norm):
            # optionally wrap a module with spectral normalization
            if norm:
                return nn.utils.spectral_norm(module)
            return module

        self.conv1 = nn.Sequential(
            getSpectralNormedModule(nn.Conv2d(in_dim, inner_dim, 4, 2, 1), norm),
            nn.BatchNorm2d(inner_dim),
            nn.LeakyReLU(0.2)
        )
        self.conv2 = nn.Sequential(
            getSpectralNormedModule(nn.Conv2d(inner_dim, 2 * inner_dim, 4, 2, 1), norm),
            nn.BatchNorm2d(2 * inner_dim),
            nn.LeakyReLU(0.2)
        )
        self.conv3 = nn.Sequential(
            getSpectralNormedModule(nn.Conv2d(2 * inner_dim, 4 * inner_dim, 4, 2, 1), norm),
            nn.BatchNorm2d(4 * inner_dim),
            nn.LeakyReLU(0.2)
        )
        self.conv4 = nn.Sequential(
            getSpectralNormedModule(nn.Conv2d(4 * inner_dim, 4 * inner_dim, 4, 1, 1), norm),
            nn.BatchNorm2d(4 * inner_dim),
            nn.LeakyReLU(0.2)
        )
        self.conv5 = nn.Sequential(
            getSpectralNormedModule(nn.Conv2d(4 * inner_dim, 4 * inner_dim, 4, 1, 1), norm),
            nn.BatchNorm2d(4 * inner_dim),
            nn.LeakyReLU(0.2)
        )

    def forward(self, x):
        # plain feed-forward stack of convolutions, no in-place ops here
        x1 = self.conv1(x)
        x2 = self.conv2(x1)
        x3 = self.conv3(x2)
        x4 = self.conv4(x3)
        x5 = self.conv5(x4)
        return x5
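
For reference, the discriminator can be exercised on its own like this (a quick sanity-check sketch with a random input):

# Standalone sanity check of the discriminator (sketch)
import torch

D = Discriminator(in_dim=3, inner_dim=64, norm=True)
x = torch.randn(2, 3, 64, 64)   # random batch of two 64x64 RGB images
print(D(x).shape)               # should print torch.Size([2, 256, 6, 6])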

I still can’t figure out the key point. Can someone please help me? I’m new to GANs.

I guess you might be facing an issue similar to the ones described here and here.
Could you check if you are indeed trying to use stale gradients for already updated parameters?
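
For comparison, here is a minimal sketch of a standard GAN step in which each backward happens before any parameter update, so no stale graph is reused (everything below uses placeholder names and tiny placeholder modules, not your actual code):

import torch
import torch.nn as nn

# Tiny placeholder generator/discriminator just to make the ordering concrete.
G = nn.Conv2d(3, 3, 3, padding=1)
D = nn.Conv2d(3, 1, 3, padding=1)
bce = nn.BCEWithLogitsLoss()

def criterion(pred, is_real):
    # adversarial loss against an all-real / all-fake target
    target = torch.full_like(pred, 1.0 if is_real else 0.0)
    return bce(pred, target)

optim_D = torch.optim.Adam(D.parameters(), lr=1e-4)
optim_G = torch.optim.Adam(G.parameters(), lr=1e-4)
images = torch.randn(2, 3, 16, 16)   # stand-in for a real batch

# --- discriminator update: backward BEFORE stepping the optimizer ---
optim_D.zero_grad()
fake = G(images)
loss_D = criterion(D(images), True) + criterion(D(fake.detach()), False)
loss_D.backward()
optim_D.step()

# --- generator update: fresh forward pass through the already-updated D ---
optim_G.zero_grad()
loss_G = criterion(D(fake), True)
loss_G.backward()
optim_G.step()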