# Exclude some of the elements in a tensor from being optimized

```python
import torch
from torch import nn

class MyLayer(nn.Module):
    def __init__(self, num_units):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(num_units, num_units))
```

I want to optimize all of the weights in the tensor except for the ones on the diagonal (i.e. the diagonal weights should stay fixed). What is the simplest way to exclude the diagonal weights from being changed when I perform backpropagation (with `loss.backward()`)?

After `loss.backward()` and before `optimizer.step()`, you can do
`model.layer.weight.grad.diagonal().zero_()`.
Zeroing those gradient entries on every step leaves the corresponding values fixed with most optimizers. One caveat: an optimizer configured with weight decay will still modify the diagonal, since the decay term is added independently of the gradient.
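Put together, a minimal training loop could look like this (a sketch; the model, data, and `SGD` settings are just for illustration):

```python
import torch
from torch import nn

# Hypothetical minimal layer with a square weight matrix.
class MyLayer(nn.Module):
    def __init__(self, num_units):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(num_units, num_units))

    def forward(self, x):
        return x @ self.weight

model = MyLayer(4)
initial_diag = model.weight.detach().diagonal().clone()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(3):
    optimizer.zero_grad()
    loss = model(torch.rand(2, 4)).sum()
    loss.backward()
    # Zero the diagonal gradients in place before the update.
    model.weight.grad.diagonal().zero_()
    optimizer.step()

# With plain SGD (no momentum, no weight decay) the diagonal is unchanged.
assert torch.equal(model.weight.detach().diagonal(), initial_diag)
```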

Sophisticated people might mention backward hooks, but I am not one of them.
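For completeness, the hook variant would register a function on the parameter that masks the gradient as it is computed, so you don't need to remember the extra line between `backward()` and `step()` (a sketch, same toy setup as above):

```python
import torch
from torch import nn

# Hypothetical parameter for illustration.
weight = nn.Parameter(torch.rand(4, 4))

# The hook receives the gradient and may return a modified copy;
# here we return a clone with the diagonal zeroed.
weight.register_hook(lambda grad: grad.clone().fill_diagonal_(0.0))

loss = (weight ** 2).sum()
loss.backward()

# The diagonal of the gradient is now zero automatically.
assert torch.equal(weight.grad.diagonal(), torch.zeros(4))
```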

Best regards

Thomas
