I was wondering: is it possible to autograd only one row of a model's weights so that the other rows stay fixed?

You can zero the gradients on all other rows, so that those gradients aren't used when updating the weights:

```
from torch.autograd import Variable
import torch

weights = Variable(torch.randn(3, 3, 3), requires_grad=True)
# .grad is only populated after a backward pass, so call this
# after loss.backward():
weights.grad.data[1:].zero_()
```

Thanks for the quick reply!

I’m still wondering where to insert this line:

`weights.grad.data[1:].zero_()`

Should it be inserted after

`data, target = Variable(data), Variable(target)`

`optimizer.zero_grad()`

and before

`output = model(data)`

or after

`optimizer.step()`?

After you compute the gradients with `something.backward()` and before you step with the optimizer (`optimizer.step()`).
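Putting it together, here is a minimal sketch of a full training step with the zeroing call in that position. The model, data, and optimizer here are illustrative stand-ins (not from the thread), and it uses current PyTorch, where tensors carry `requires_grad` directly and the `Variable` wrapper is no longer needed:

```
import torch
import torch.nn as nn

# Toy setup; names are illustrative
model = nn.Linear(3, 3, bias=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

data = torch.randn(4, 3)
target = torch.randn(4, 3)

before = model.weight.detach().clone()

optimizer.zero_grad()
output = model(data)
loss = nn.functional.mse_loss(output, target)
loss.backward()                    # gradients are populated here
model.weight.grad[1:].zero_()      # zero gradients for every row except row 0
optimizer.step()                   # only row 0 of the weights changes

after = model.weight.detach()
```

After the step, `after[1:]` is identical to `before[1:]`, while row 0 has moved along its gradient.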


Thanks Richard, this works exactly as I needed!