(Updating output of Conv2d) RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I'm trying to perform an unconventional convolution operation. First I apply a Conv2d, which gives me an output tensor of shape (batch_size, 29, 10, 10), so there are 29 channels in total (numbered 0 to 28). Now I want to perform a 1x1 convolution on parts of this output (3 channels at a time) as follows:

1x1 conv on channels (0,1,2), 1x1 conv on channels (0,3,4), 1x1 conv on channels (0,5,6),…, 1x1 conv on channels (0,27,28).
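For clarity, the 14 channel triplets follow this pattern (channel 0 paired with each consecutive pair of the remaining channels):

triplets = [(0, 2 * i + 1, 2 * i + 2) for i in range(14)]
# -> [(0, 1, 2), (0, 3, 4), (0, 5, 6), ..., (0, 27, 28)]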

I do this by creating a tensor x_in that extracts the required 3 channels from the input x. I then perform a 1x1 conv on x_in and gather all the 1x1 convolution outputs in the tensor x_out, which is returned from the function myconv1x1. However, I get the error mentioned in the title.

I believe PyTorch doesn't want me to update x_out in place once it has received a 1x1 conv output in the first for-loop iteration.
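To illustrate, here is a minimal standalone sketch (simplified, not my actual model) of the kind of in-place pattern that I believe raises this error, where a tensor is written into after a conv has already saved it for its backward pass:

import torch
import torch.nn as nn

src = torch.randn(1, 3, 4, 4, requires_grad=True)
conv = nn.Conv2d(3, 1, kernel_size=1)

buf = torch.zeros(1, 3, 4, 4)
buf[:] = src                  # buf now depends on src, so it is part of the graph
out1 = conv(buf)              # the conv saves buf for its backward pass
buf[:, 0] = src[:, 0]         # in-place write bumps buf's version counter
out2 = conv(buf)

(out1.sum() + out2.sum()).backward()  # RuntimeError: ... modified by an inplace operation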

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batchsize = 500

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batchsize,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batchsize,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.conv1 = nn.Conv2d(3, 10, 5) # (10,28,28)
        self.conv2 = nn.Conv2d(10, 20, 5) # (20,24,24)

        self.pool = nn.MaxPool2d(2, 2) # (20,12,12)

        self.conv3 = nn.Conv2d(20, 29, 3) # (29,10,10)

        # myconv1x1 -> (14,10,10) [1x1 conv on channels (0,1,2), (0,3,4), ..., (0,27,28)]

        self.fc1 = nn.Linear(14*10*10, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)


    def myconv1x1(self, x): # (29,10,10)
        x_in = torch.zeros((x.shape[0], 3, x.shape[2], x.shape[3])) # (3,10,10)
        x_out = torch.zeros(x.shape[0], int((x.shape[1]-1)/2), x.shape[2], x.shape[3]) # (14,10,10)
        x_out = x_out.to(device)
        for i in range(x_out.shape[1]): # 14 times
            x_in[:, 0, :, :] = x[:, 0, :, :]
            x_in[:, 1, :, :] = x[:, (i*2)+1, :, :]
            x_in[:, 2, :, :] = x[:, (i*2)+2, :, :]
            x_out[:, i, :, :] = torch.squeeze(nn.Conv2d(3, 1, kernel_size=1)(x_in))
        return x_out


    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.conv3(x)
        x = self.myconv1x1(x)
        x = x.view(-1, 14*10*10)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)

net = Net()
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(500):  # loop over the dataset multiple times
    # running_loss = 0.0
    print("Epoch", epoch)
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data[0].to(device), data[1].to(device)
        torch.autograd.set_detect_anomaly(True)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

Is there any way I can perform the task described in myconv1x1?
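One direction I was considering (I am not sure it is the right fix) is to register the fourteen 1x1 convs once in __init__ and build the result out of place with torch.cat instead of writing into a preallocated x_out:

# In __init__, so the 1x1 convs are registered and move with net.to(device):
#     self.convs1x1 = nn.ModuleList(nn.Conv2d(3, 1, kernel_size=1) for _ in range(14))

def myconv1x1(self, x):                               # x: (batch, 29, 10, 10)
    outs = []
    for i, conv in enumerate(self.convs1x1):
        x_in = x[:, [0, 2 * i + 1, 2 * i + 2], :, :]  # out-of-place channel selection
        outs.append(conv(x_in))                       # (batch, 1, 10, 10)
    return torch.cat(outs, dim=1)                     # (batch, 14, 10, 10)

Would something along these lines be the recommended way, or is there a better approach?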

@ptrblck Can you help with this one?