I am trying this, but it's not working. Is there another method for this?
Actually, I want to provide the weights myself and not allow the model to learn them through backpropagation.
import torch

state_dict = model.state_dict()
fbias = state_dict["features.0.bias"]
# make a random tensor with the same shape, dtype, and device as the original
state_dict["features.0.bias"] = torch.randn_like(fbias)
model.load_state_dict(state_dict)
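Note that reloading the state_dict only sets the values once; it does not stop backpropagation from updating them during training. For that, the usual approach is to turn off gradients on the parameter and hand the optimizer only the parameters that should still train. A minimal sketch, assuming a torchvision-style model such as VGG (which is where the features.0 naming comes from):

import torch
import torchvision

model = torchvision.models.vgg16()

# freeze the bias so backpropagation never updates it
model.features[0].bias.requires_grad = False

# give the optimizer only the parameters that should still train
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)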
An in-place copy is probably slightly nicer than reassigning the entry: if state_dict["features.0.bias"] shares storage with something else, the copy makes sure that sharing is preserved (for example, if features.0.bias is a view of another tensor).
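A sketch of what that in-place copy could look like, writing under torch.no_grad() so autograd does not record the operation (no load_state_dict call is needed afterwards):

with torch.no_grad():
    # copy_ fills the existing storage instead of rebinding the state_dict
    # entry, so any tensor sharing that storage sees the new values too
    model.features[0].bias.copy_(torch.randn_like(model.features[0].bias))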