Resetting or changing the weights of specific layers to simulate bitflips

Hello everyone,

I want to change or reset the weights of specific layers to see the effect on the object classification accuracy of some models. Basically, I want to test the fault resilience of certain object classification models and simulate a bitflip that changes the weights in one or multiple layers.

Right now I have implemented it so that the weights of all layers get randomized after, let's say, 25 out of 50 epochs. Obviously the object classification accuracy drops significantly, from 95% to 30-35% (basically 1/3, which makes sense, because with three output classes that is the "guessing" accuracy), since I am resetting all weights of a pretrained model (AlexNet in my case).

    import torch
    import torch.nn as nn

    def init_params(m):
        if type(m) == nn.Linear or type(m) == nn.Conv2d:
            # random weight initialisation for Linear and Conv2d layers
            m.weight.data = (torch.randn(m.weight.size()) * 0.01).to(device)

That's how I randomize the weights, and this is how I apply them to the model after 25 epochs:

    # setting the random weights after a specific epoch
    if epoch == 25:
        alexnet.apply(init_params)  # torch.randn weight initialisation
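As an aside, since the end goal is a literal bitflip rather than re-randomising whole layers, I imagine something along these lines could work on recent PyTorch versions: reinterpret the float32 weights as int32 and XOR a single bit (untested sketch; the element index and the bit position are arbitrary):

    # untested sketch: flip one bit of one weight in the first conv layer
    bits = alexnet.features[0].weight.data.view(torch.int32)  # reinterprets the float32 bits, shares storage
    bits[0, 0, 0, 0] ^= (1 << 15)  # XOR toggles bit 15 of that single weight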

From my understanding, you can use state_dict to see the model's learnable parameters. For my model, for example, that is:

    # print the model's state_dict
    print("Model's state_dict:")
    for param_tensor in alexnet.state_dict():
        print(param_tensor, "\t", alexnet.state_dict()[param_tensor].size())

Model's state_dict:
features.0.weight torch.Size([64, 3, 11, 11])
features.0.bias torch.Size([64])
features.3.weight torch.Size([192, 64, 5, 5])
features.3.bias torch.Size([192])
features.6.weight torch.Size([384, 192, 3, 3])
features.6.bias torch.Size([384])
features.8.weight torch.Size([256, 384, 3, 3])
features.8.bias torch.Size([256])
features.10.weight torch.Size([256, 256, 3, 3])
features.10.bias torch.Size([256])
classifier.1.weight torch.Size([4096, 9216])
classifier.1.bias torch.Size([4096])
classifier.4.weight torch.Size([4096, 4096])
classifier.4.bias torch.Size([4096])
classifier.6.weight torch.Size([3, 4096])
classifier.6.bias torch.Size([3])

And now I want to change the weights of "features.0.weight" (i.e. the first Conv2d layer) or any other layer, while the rest remains the same.
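Something like this is what I have in mind, if it is possible to target just one submodule (untested sketch; going by the state_dict above, features[0] should be the first conv layer):

    # untested sketch: re-randomise only the first conv layer, leave the rest untouched
    alexnet.features[0].apply(init_params)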

Any help would be appreciated.

A small update: I found a way to manipulate the state_dict of the pretrained AlexNet.

I did it with:

    sd = alexnet.state_dict()
    sd['features.0.weight'][0:2].zero_()

My question is if there is a similar way to overwrite the tensor within the state_dict with random numbers instead of just zeros.

So basically something like:

    sd['features.0.weight'][0:2].randn_()

tensor.normal_ should work.
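For example, reusing the sd from the update above (a minimal sketch; the mean and std values are arbitrary):

    # fills the first two conv1 filters with draws from N(0, 0.01^2), in place;
    # the tensors returned by state_dict() share storage with the model,
    # so this changes the live weights just like zero_() did
    sd['features.0.weight'][0:2].normal_(mean=0.0, std=0.01)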
