I trained an MLP neural network and have all the weight values. Now I want to pin some weights to a fixed value (such as zero). When I retrain the MLP, I don't want those fixed values to change. What can I do?

You can always zero the gradient of the weight so that optimizers will not update it:

```
specific_weight.grad.data.zero_()
```
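To show where that call fits, here is a minimal sketch of a training step (the names `model`, `x`, and `target`, and the choice to freeze the first row of the weight matrix, are made up for illustration): zero the chosen entries of the gradient after `.backward()` and before `optimizer.step()`, so those weights never move.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model; we freeze row 0 of its weight matrix at zero.
model = nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

with torch.no_grad():
    model.weight[0].zero_()  # set the frozen weights to their fixed value

x = torch.randn(8, 4)
target = torch.randn(8, 3)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()
model.weight.grad[0].zero_()  # kill the gradient for the frozen row
optimizer.step()              # frozen row stays exactly zero
```

Note this keeps the weights fixed with plain SGD; with weight decay or momentum the optimizer could still move a weight whose gradient is zero, so you would re-zero the weights themselves after each `step()` in that case.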

First, thanks very much for the reply. I have a question: when I always zero some weights, does `backward()` still calculate the gradients of all the weights? I hope `backward()` does not compute gradients for the zeroed weights.

The snippet I posted above will zero the gradients of your weight. `.backward()` will still accumulate gradients in your `specific_weight` Variable; we're zeroing out the gradients after a call to `.backward()` so the optimizer doesn't use them.


Thanks! But I want to implement neuron pruning, so the weights connected to pruned neurons should not take part in gradient accumulation at all.
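Autograd always computes the gradient for the whole weight tensor, so you can't skip individual entries during `backward()`. A common workaround for pruning is to register a gradient hook that multiplies the gradient by a binary mask on every backward pass, so the pruned entries are zeroed automatically and the optimizer never updates them. A sketch (the layer, the mask, and the choice to prune neuron 1 are made up for illustration):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 3)

# Prune output neuron 1: zero its incoming weights and its bias.
weight_mask = torch.ones_like(layer.weight)
weight_mask[1] = 0.0
bias_mask = torch.ones_like(layer.bias)
bias_mask[1] = 0.0
with torch.no_grad():
    layer.weight.mul_(weight_mask)
    layer.bias.mul_(bias_mask)

# The hooks mask the gradients on every backward(), so no manual
# zeroing is needed inside the training loop.
layer.weight.register_hook(lambda grad: grad * weight_mask)
layer.bias.register_hook(lambda grad: grad * bias_mask)

optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
x = torch.randn(8, 4)
loss = layer(x).pow(2).mean()
loss.backward()
optimizer.step()  # pruned neuron's weights and bias remain zero
```

This doesn't save any computation in `backward()` (the full gradient is still computed), but it guarantees the pruned connections stay at zero throughout retraining with plain SGD.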

Thanks for your reply. I later understood what you meant. Thanks!