Problem with gradient computation related to inplace operation

I’m writing GAN training code that uses a classification result with an MSE loss. self.criterion_G_CNN is an nn.MSELoss instance, and self.cnn is a ResNet-101.
When I call backward(), I get the error below, but I can’t figure out where I performed an in-place operation.
Does anyone know where the in-place operation comes from?

File "/home/david/demptcnn/train.py", line 128, in train_adversarial
    loss_cls.backward()
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 144, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

            self.discriminator.zero_grad()
            self.generator.zero_grad()
            self.cnn.fc.zero_grad()

            mask = self.generator(images.detach())
            mask *= 0.01

            img_resized = func.upsample_bilinear(images, size=(224, 224))
            mask_resized = func.upsample_bilinear(mask, size=(224, 224))

            for param in self.cnn.parameters():
                param.requires_grad = True
            self.cnn.fc.zero_grad()

            cnn_out = self.cnn(img_resized.detach()+mask_resized)
            cnn_out = func.softmax(cnn_out)
            label_target = Variable(torch.ones((40, 10))).cuda()
            label_target *= 0.1
            loss_cls = self.criterion_G_CNN(cnn_out, label_target.detach())
            loss_cls.backward()
            self.optim_G.step()

label_target *= 0.1 is an in-place operation.
Change it to label_target = label_target * 0.1, which is out-of-place: it creates a new tensor instead of overwriting the one autograd may have saved.
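
In case it helps, here is a minimal standalone sketch of the same failure mode and the fix. It uses the current tensor API rather than the old Variable API from your snippet, and exp() as the example op (its backward saves its output, so overwriting that output in place triggers exactly this RuntimeError):

    import torch

    # In-place version: exp's backward needs the saved exp() output, but
    # *= overwrites that tensor, so autograd detects the modification.
    x = torch.ones(3, requires_grad=True)
    y = x.exp()
    y *= 0.1
    try:
        y.sum().backward()
    except RuntimeError as e:
        print(e)  # "... has been modified by an inplace operation"

    # Out-of-place version: y * 0.1 allocates a new tensor and leaves the
    # saved exp() output untouched, so backward() succeeds.
    x = torch.ones(3, requires_grad=True)
    y = x.exp()
    y = y * 0.1
    y.sum().backward()
    print(x.grad)  # 0.1 * e ~= 0.2718 for each element

Note that mask *= 0.01 earlier in your code is in-place in the same way; depending on what the generator's backward pass saves, it may need the same out-of-place rewrite.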
