How can I fix a subset of the weights of a neural network during training?

I want to build a network in which some of the weights are always 0, while the other weights are optimized through the training process.

On StackOverflow, I found something similar for TensorFlow.

However, I want to implement this in PyTorch.

Can anyone please help me?


You could create a mask for the constant weights and zero out their gradients after calculating them with loss.backward() (and before optimizer.step()), so those weights never receive an update and stay at 0.
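A minimal sketch of this masking approach, using a hypothetical `nn.Linear` layer as an example (the layer shape, mask pattern, and training data here are placeholders):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Example layer; the mask marks which weights stay fixed at 0
# (1 = trainable, 0 = frozen at zero).
layer = nn.Linear(4, 3)
mask = torch.ones_like(layer.weight)
mask[0, :2] = 0.0  # these two weights should remain zero

# Zero the masked weights once at initialization.
with torch.no_grad():
    layer.weight.mul_(mask)

optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

x = torch.randn(8, 4)
target = torch.randn(8, 3)

for _ in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(layer(x), target)
    loss.backward()
    # Zero out the gradients of the frozen weights before the update...
    layer.weight.grad.mul_(mask)
    optimizer.step()
    # ...and re-apply the mask as a safeguard, since optimizer features
    # such as weight decay or momentum could still move masked entries.
    with torch.no_grad():
        layer.weight.mul_(mask)

print(layer.weight[0, :2])  # the frozen entries stay at zero
```

With plain SGD, zeroing the gradients alone is enough; the extra masking after `optimizer.step()` makes the sketch robust to optimizers whose updates are not purely gradient-driven.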