RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 512]], which is output 0 of ViewBackward, is at version 4; expected version 0 instead

Dear all,
I’m getting the following error message: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 512]], which is output 0 of ViewBackward, is at version 4; expected version 0 instead.
I know that this question has been asked several times, but I tried to remove all in-place operations from my code and I still get the same error. Can someone give me a hint on how to solve this problem? I’m using python=3.6, torch=1.2.0, and torchvision=0.4.0. Here is the code I’m using:

```python
'''VGG11/13/16/19 in PyTorch.'''
import torch
import torch.nn as nn

cfg = {
    'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

class VGG(nn.Module):
    def __init__(self, vgg_name):
        super(VGG, self).__init__()
        self.features = self._make_layers(cfg[vgg_name])
        self.classifier = nn.Linear(512, 10)

    def forward(self, x):
        out = self.features(x)
        emb = out.view(out.size(0), -1)
        out = self.classifier(emb)
        return out, emb

    def _make_layers(self, cfg):
        layers = []
        in_channels = 3
        for x in cfg:
            if x == 'M':
                layers = layers + [nn.MaxPool2d(kernel_size=2, stride=2)]
            else:
                layers = layers + [nn.Conv2d(in_channels, x, kernel_size=3, padding=1),
                                   nn.BatchNorm2d(x),
                                   nn.ReLU(inplace=False)]
                in_channels = x
        layers = layers + [nn.AvgPool2d(kernel_size=1, stride=1)]
        return nn.Sequential(*layers)

    def get_embedding_dim(self):
        return 512

def test():
    net = VGG('VGG11')
    x = torch.randn(2, 3, 32, 32)
    y = net(x)
    print(y.size())
```

The posted code snippet does not reproduce the issue and will raise an error in:

```python
print(y.size())
# AttributeError: 'tuple' object has no attribute 'size'
```

since the forward method is returning a tuple.
Could you post a code snippet, which would reproduce this issue, please?
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier. :wink:
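To illustrate the unpacking issue, here is a minimal sketch using a small stand-in module with the same two-output signature as the posted `forward` (the `TwoOutput` name is just for illustration):

```python
import torch
import torch.nn as nn

class TwoOutput(nn.Module):
    """Stand-in with the same (logits, embedding) return signature."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(512, 10)

    def forward(self, x):
        emb = x.view(x.size(0), -1)      # flatten, as in the VGG forward
        return self.classifier(emb), emb

net = TwoOutput()
y = net(torch.randn(2, 512))
# y.size() would fail here: y is a tuple, not a tensor
out, emb = y                             # unpack the two outputs instead
print(out.size())                        # torch.Size([2, 10])
print(emb.size())                        # torch.Size([2, 512])
```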

Thanks for the answer. Actually, I found a hidden in-place operation in another part of the code, which I didn’t post. I didn’t realize that this was an in-place operation:

```python
feature[0] = feature[0].detach()
```
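In case it helps others: indexed assignment into a tensor (`t[0] = ...`) is an in-place operation and bumps autograd’s version counter, so a later `backward()` that needs the original values fails with exactly this error. A minimal sketch (the tensor names here are illustrative only, not from the code above):

```python
import torch

x = torch.randn(3, 4, requires_grad=True)
feature = x * 2
loss = (feature * feature).sum()   # mul saves `feature` for its backward

feature[0] = feature[0].detach()   # in-place: bumps feature's version counter

try:
    loss.backward()                # autograd detects the version mismatch
except RuntimeError as e:
    print(e)                       # "... modified by an inplace operation ..."
```

An out-of-place alternative that leaves the saved tensor untouched would be to build a new tensor instead, e.g. `feature = torch.cat([feature[:1].detach(), feature[1:]], dim=0)`.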