I am trying to zero out some filter weights of a PyTorch model before and after training. Once I have located the correct layers and filters, I replace that precise key in the OrderedDict returned by state_dict with torch.zeros of the correct size. By changing the value in the state_dict, am I actually changing the whole model, making it ready for training with my intended change? In other words, does the change also propagate to model.parameters() or anything used in train.py? If not, what is the best way of doing so?
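For concreteness, here is a minimal sketch of what I am doing (the toy model and the key name are just placeholders for my real network; I also added an explicit load_state_dict call, which I suspect might be needed):

```python
import torch
import torch.nn as nn

# Toy model standing in for my real network
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU())

# Grab the state_dict and replace the weight entry with zeros
sd = model.state_dict()
key = "0.weight"  # placeholder; in my code I locate the real key first
sd[key] = torch.zeros_like(sd[key])  # replaces the dict entry, not the tensor in place

# Is the line above alone enough, or do I need to load it back like this?
model.load_state_dict(sd)
```

My uncertainty is whether replacing the dict entry alone already affects the model, or whether the explicit load_state_dict (or an in-place modification of the parameter tensor) is required.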
Thanks a lot!