Hello everyone,

I want to change or reset the weights of specific layers to see the effect on the object classification accuracy of some models.

I basically want to test the fault resilience of certain object classification models and simulate a bit flip that changes the weights in one or multiple layers.
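
Since the goal is to simulate a bit flip rather than a full reset, one option (a minimal sketch of my own, not something I have in my code yet) is to reinterpret a weight tensor's float32 storage as int32 and XOR a single bit; `flip_bit`, `flat_index`, and `bit` below are illustrative names I made up:

```python
import torch

def flip_bit(weight, flat_index, bit):
    """Flip one bit of one float32 weight in place (bit 0 = mantissa LSB, bit 31 = sign)."""
    # Reinterpret the float32 storage as int32 so a single bit can be toggled.
    int_view = weight.data.view(-1).view(torch.int32)
    # Two's-complement mask so the sign bit (31) fits into int32 as well.
    mask = (1 << bit) if bit < 31 else -(1 << 31)
    int_view[flat_index] ^= mask

# Example: flipping bit 30 (top exponent bit) of 2.0 turns it into 0.0
w = torch.tensor([1.0, 2.0, 3.0])
flip_bit(w, 1, 30)  # w is now tensor([1., 0., 3.])
```

Because the int32 view shares storage with the float tensor, the XOR is visible in the weights immediately, and flipping the same bit twice restores the original value.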

Right now I have implemented it so that the weights of all layers get randomized after, let's say, 25 out of 50 epochs. Obviously the object classification accuracy drops significantly, from 95% to 30-35% (basically 1/3, which makes sense because that is the 'guessing' accuracy for my three classes), since I am resetting all weights of a pretrained model (AlexNet in my case).

```
def init_params(m):
    # Re-initialize only Linear and Conv2d layers with small random weights
    if type(m) == nn.Linear or type(m) == nn.Conv2d:
        m.weight.data = (torch.randn(m.weight.size()) * 0.01).to(device)
```

That's how I randomize the weights, and this is how I apply them to the model after 25 epochs:

Setting the random weights after a specific epoch:

```
if epoch == 25:
    alexnet.apply(init_params)  # torch.randn weight initialisation
```

From my understanding, you can use *state_dict* to inspect the model's learnable parameters. For my model, for example, that is:

```
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in alexnet.state_dict():
    print(param_tensor, "\t", alexnet.state_dict()[param_tensor].size())
```

```
Model's state_dict:
features.0.weight 	 torch.Size([64, 3, 11, 11])
features.0.bias 	 torch.Size([64])
features.3.weight 	 torch.Size([192, 64, 5, 5])
features.3.bias 	 torch.Size([192])
features.6.weight 	 torch.Size([384, 192, 3, 3])
features.6.bias 	 torch.Size([384])
features.8.weight 	 torch.Size([256, 384, 3, 3])
features.8.bias 	 torch.Size([256])
features.10.weight 	 torch.Size([256, 256, 3, 3])
features.10.bias 	 torch.Size([256])
classifier.1.weight 	 torch.Size([4096, 9216])
classifier.1.bias 	 torch.Size([4096])
classifier.4.weight 	 torch.Size([4096, 4096])
classifier.4.bias 	 torch.Size([4096])
classifier.6.weight 	 torch.Size([3, 4096])
classifier.6.bias 	 torch.Size([3])
```

And now I want to change the weights of 'features.0.weight' (i.e. the first conv2d layer) or any other single layer, while the rest remain unchanged.
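
One approach I could imagine (a sketch under the assumption that the parameter names match the torchvision AlexNet listing above) is to look the parameter up via `named_parameters()` and overwrite only that tensor; `reset_layer` is a hypothetical helper name:

```python
import torch
import torch.nn as nn

def reset_layer(model, param_name):
    """Randomize only the named parameter; every other parameter stays intact."""
    with torch.no_grad():
        # named_parameters() yields ('features.0.weight', tensor) style pairs
        param = dict(model.named_parameters())[param_name]
        param.copy_(torch.randn_like(param) * 0.01)

# e.g. reset_layer(alexnet, "features.0.weight")
```

Using `copy_` inside `torch.no_grad()` keeps autograd happy without touching `.data`, and since only the looked-up tensor is overwritten, all other layers keep their trained weights.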

Any help would be appreciated.