Activation gradient penalty

I’m trying to penalize the norm of activation gradients:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
        self.pool = nn.MaxPool2d(2, 2)
        self.relu = nn.ReLU()
        self.linear = nn.Linear(64 * 5 * 5, 10)

    def forward(self, input):
        conv1 = self.conv1(input)
        pool1 = self.pool(conv1)
        self.relu1 = self.relu(pool1)
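        # retain_grad() makes autograd keep this non-leaf activation's gradient in .grad after backward()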
        self.relu1.retain_grad()
        conv2 = self.conv2(self.relu1)
        pool2 = self.pool(conv2)
        relu2 = self.relu(pool2)
        self.relu2 = relu2.view(relu2.size(0), -1)
        self.relu2.retain_grad()
        return self.linear(self.relu2)

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for i in range(1000):
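    # `input` and `label` (a training batch and its targets) are assumed to be defined elsewhere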
    output = model(input)
    loss = nn.CrossEntropyLoss()(output, label)
    optimizer.zero_grad()
    loss.backward(retain_graph=True)  # retain the graph so the autograd.grad call below can reuse it

    grads = torch.autograd.grad(loss, [model.relu1, model.relu2], create_graph=True)

    grad_norm = 0
    for grad in grads:
        grad_norm += grad.pow(2).sum()

    grad_norm.backward(retain_graph=True)

    optimizer.step()

However, it does not produce the desired regularization effect. If I do the same thing for weight gradients, it works well. Am I doing this right? Specifically, what happens in the grad_norm.backward() step? I just want to make sure that only the weight gradients are updated, not the activation gradients. Currently, when I print the gradients of the weights and of the activations immediately before and after that line, both change, so I'm not sure what's going on.
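For comparison, this is the single-backward variant I have been considering: compute the activation gradients with create_graph=True, add their squared norm to the loss, and call backward() once on the combined objective (a minimal sketch assuming the same Net, optimizer, and input/label as above; lambda_reg is just a placeholder penalty weight):

lambda_reg = 0.01  # hypothetical penalty weight

for i in range(1000):
    output = model(input)
    loss = nn.CrossEntropyLoss()(output, label)

    # gradients of the loss w.r.t. the retained activations; create_graph=True
    # makes this gradient computation itself differentiable w.r.t. the weights
    grads = torch.autograd.grad(loss, [model.relu1, model.relu2], create_graph=True)
    grad_norm = sum(g.pow(2).sum() for g in grads)

    total_loss = loss + lambda_reg * grad_norm

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()

With this version there is only one backward() call, so the parameters' .grad holds exactly the gradient of loss + lambda_reg * grad_norm. The activations' .grad is still populated (because of retain_grad()), but optimizer.step() never reads it, so only the weights are updated.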

Hi @michaelklachko, if you were able to resolve this issue, could you take a look at "Optimizing a loss based on autograd.grad output"? I think I have a similar problem.