Overwriting the weights of a pre-trained network like AlexNet

Hi Guys,
We can extract the weights of a pre-trained network in the following ways:

import torch
from torchvision import models

model = models.alexnet(pretrained=True)
param = model.parameters()

weights_conv1 = next(param)
bias_conv1 = next(param)


weights_conv1 = list(list(model.features.children())[0].parameters())[0]
bias_conv1 = list(list(model.features.children())[0].parameters())[1]


value = model.state_dict()
weights_conv1 = value['features.0.weight']
bias_conv1 = value['features.0.bias']


weights_conv1 = model.state_dict()['features.0.weight']
bias_conv1 = model.state_dict()['features.0.bias']
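Note that the key for the kernel is features.0.weight (singular), not features.0.weights. A minimal runnable sketch of the state_dict approach, using a small stand-in module for AlexNet's first conv layer (an assumption, so the example does not need to download the pretrained weights; the real alexnet exposes the same key names):

```python
import torch
import torch.nn as nn

# Stand-in for AlexNet's feature extractor (assumption: only the first
# conv layer is needed to show how the state_dict keys are built).
class TinyAlexNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
        )

model = TinyAlexNet()

# state_dict keys follow "<submodule path>.<parameter name>"
weights_conv1 = model.state_dict()['features.0.weight']
bias_conv1 = model.state_dict()['features.0.bias']

print(tuple(weights_conv1.shape))  # (64, 3, 11, 11)
print(tuple(bias_conv1.shape))     # (64,)
```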

My question here is: can we overwrite these weights/biases like this?

model.state_dict()['features.0.bias'] = Variable(torch.randn(64))


list(list(model.features.children())[0].parameters())[1] = Variable(torch.randn(64))

I am trying this, but it's not working. Is there any other method for this?
Actually, I want to provide the weights myself and not allow the model to learn them by back-propagation.


What you need to do is:

state_dict = model.state_dict()
fbias = state_dict["features.0.bias"]
state_dict["features.0.bias"] = Variable(fbias.data.new(64).normal_()) # make a random tensor of same type and device as original
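Note that rebinding a key in the Python dict returned by state_dict() does not by itself touch the module's parameter; one way to make sure the new values actually reach the model is to load the modified dict back. A sketch, using a hypothetical stand-in for alexnet's first conv layer so it runs without a download:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in module (assumption: same "features.0.*" key
# layout as alexnet, without needing torchvision's pretrained weights).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
        )

model = TinyNet()
state_dict = model.state_dict()
fbias = state_dict["features.0.bias"]

# rebind the dict entry to a random tensor of the same type and device
state_dict["features.0.bias"] = fbias.new(64).normal_()

# rebinding alone only changes the dict; load it back so the module's
# own bias parameter receives the new values
model.load_state_dict(state_dict)
```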

Which one is the best practice, the approach above or this one?
So far I observe that both behave the same, but I am afraid there is a catch.

state_dict = model.state_dict()
fbias = state_dict["features.0.bias"]

# approach 1: from above
state_dict["features.0.bias"] = Variable(fbias.data.new(64).normal_())

# approach 2: from https://stackoverflow.com/questions/49446785/how-can-i-update-the-parameters-of-a-neural-network-in-pytorch
state_dict["features.0.bias"].copy_(Variable(fbias.data.new(64).normal_()))

Thank you.

The copy is probably slightly nicer. If state_dict["features.0.bias"] is sharing storage with something else, then the copy makes sure that the sharing is preserved (for example, if features.0.bias is a view of another Tensor).
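A small runnable check of the difference (a sketch with a single stand-in conv layer rather than the full alexnet, so it runs without downloading weights): copy_ writes through to the module's parameter because the state-dict entry shares storage with it, while plain assignment only rebinds the dict key until the dict is loaded back. And to address the original goal of not letting back-propagation update hand-set values, you can turn off gradients for that parameter.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
state_dict = conv.state_dict()

# approach 2: copy_ mutates the tensor in place; the state-dict entry
# shares storage with conv.bias, so the module sees the new values
new_bias = torch.randn(64)
with torch.no_grad():
    state_dict["bias"].copy_(new_bias)
assert torch.equal(conv.bias.detach(), new_bias)

# approach 1: plain assignment only rebinds the dict key; conv.bias is
# untouched until conv.load_state_dict(state_dict) is called
state_dict["bias"] = torch.zeros(64)
assert not torch.equal(conv.bias.detach(), torch.zeros(64))

# keep back-propagation from ever updating the hand-set bias
conv.bias.requires_grad_(False)
```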