Hi Guys,

We can extract weights of a pre-trained network in the following ways:

```
from torchvision import models

model = models.alexnet(pretrained=True)
params = model.parameters()   # iterator over parameters in registration order
weights_conv1 = next(params)  # first conv layer's weight
bias_conv1 = next(params)     # first conv layer's bias
```

OR

```
conv1 = list(model.features.children())[0]
weights_conv1 = list(conv1.parameters())[0]
bias_conv1 = list(conv1.parameters())[1]
```

OR

```
state = model.state_dict()
weights_conv1 = state['features.0.weight']  # note: the key is 'weight', not 'weights'
bias_conv1 = state['features.0.bias']
```

OR

```
weights_conv1 = model.state_dict()['features.0.weight']
bias_conv1 = model.state_dict()['features.0.bias']
```
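As a quick sanity check on the key names, here is a minimal sketch using a small stand-in `Sequential` instead of the full AlexNet (so no pretrained download is needed); the parameter keys end in `.weight`, singular:

```
import torch
import torch.nn as nn

# Stand-in for model.features: a Sequential whose first child is a conv layer,
# so its keys mirror AlexNet's 'features.0.weight' / 'features.0.bias'.
features = nn.Sequential(nn.Conv2d(3, 64, kernel_size=11, stride=4))

state = features.state_dict()
print(sorted(state.keys()))        # ['0.bias', '0.weight']

weights_conv1 = state['0.weight']  # shape: (64, 3, 11, 11)
bias_conv1 = state['0.bias']       # shape: (64,)
```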

My question is: can we overwrite these weights/biases like this:

```
model.state_dict()['features.0.bias'] = Variable(torch.randn(64))
```

OR

```
list(list(model.features.children())[0].parameters())[1] = Variable(torch.randn(64))
```

I have tried both, but neither works. Is there another method for this?
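For context on why the first attempt fails, a minimal sketch (using a bare `Conv2d` as a stand-in module): each call to `state_dict()` builds a fresh dict, so rebinding one of its entries only mutates that throwaway dict, not the module's parameters.

```
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=11)  # stand-in for model.features[0]

snapshot = conv.state_dict()
snapshot['bias'] = torch.randn(64)  # rebinds the dict entry only

# The module itself is untouched by the assignment above.
print(torch.equal(conv.bias.data, snapshot['bias']))
```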

Essentially, I want to set the weights myself and not let the model learn them through backpropagation.
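For what it's worth, here is a minimal sketch of one common approach: copy the new values into the existing parameter tensors in place, then set `requires_grad = False` so backprop leaves them alone. This assumes a PyTorch version where tensors have autograd support directly (no `Variable` wrapper needed), and again uses a bare `Conv2d` as a stand-in for `model.features[0]`:

```
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, kernel_size=11)  # stand-in for model.features[0]

# Copy new values into the existing parameters in place,
# rather than rebinding entries of a state_dict() copy.
with torch.no_grad():
    conv1.weight.copy_(torch.randn(64, 3, 11, 11))
    conv1.bias.copy_(torch.randn(64))

# Stop backprop from updating these parameters.
conv1.weight.requires_grad = False
conv1.bias.requires_grad = False

# The input requires grad so the backward pass still has something to do.
x = torch.randn(1, 3, 32, 32, requires_grad=True)
conv1(x).sum().backward()
print(conv1.weight.grad)  # None -- no gradient accumulated on the frozen weight
```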