Trying to perturb parameters of BNN

I am currently trying to perturb the parameters of my trained Bayesian neural net by a random amount using the following code:

import copy
import torch

copied_bnn = copy.deepcopy(bnn)

perturbation_scale = 100

for name, param in copied_bnn.named_parameters():
    if 'mu' in name:
        rho_name = name.replace('mu', 'rho')
        # Treat rho as a log-variance, so sigma = exp(rho / 2)
        sigma_param = torch.exp(bnn.state_dict()[rho_name] / 2.0)
        # Per-element Gaussian noise scaled by sigma
        perturbation = torch.normal(0.0, sigma_param * perturbation_scale)
        param.data += perturbation

The problem is that no matter how much I perturb the parameters, the evaluation on the test set stays exactly the same.

As a quick test, set the perturbation scale to float32 max and check whether your model starts outputting invalid values (NaN/inf). If it doesn't, you are probably not actually modifying the parameters the forward pass uses.
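
For example, after applying the extreme perturbation, one batch is enough to see whether the logits blow up (a minimal sketch, assuming the copied_bnn and test_loader from the post above):

import torch

# One batch is enough: if the huge perturbation reached the weights,
# the outputs should contain inf/NaN.
inputs, _ = next(iter(test_loader))
inputs = inputs.to(next(copied_bnn.parameters()).device)
with torch.no_grad():
    outputs = copied_bnn(inputs.view(inputs.size(0), -1))
print("any NaN:", torch.isnan(outputs).any().item())
print("any inf:", torch.isinf(outputs).any().item())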

Just tried this and the NN is still outputting the same test accuracy. Is it something to do with the computational graph being copied across as well? Here is the full code for the copy:

import copy
import torch

copied_bnn = copy.deepcopy(bnn)
copied_bnn = copied_bnn.to(torch.device(device))

perturbation_scale = 3.4e38  # roughly float32 max

for name, param in copied_bnn.named_parameters():
    if 'mu' in name:
        rho_name = name.replace('mu', 'rho')
        # Treat rho as a log-variance, so sigma = exp(rho / 2)
        sigma_param = torch.exp(bnn.state_dict()[rho_name] / 2.0)
        perturbation = torch.normal(0.0, sigma_param * perturbation_scale)
        with torch.no_grad():  # not strictly needed: writes to .data already bypass autograd
            param.data += perturbation

correct = 0
total = 0

copied_bnn.eval()  # switch off any train-time behaviour for evaluation
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        # view(1, -1) collapsed the whole batch into a single sample;
        # keep the batch dimension when flattening
        outputs = copied_bnn(inputs.view(inputs.size(0), -1))
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print(f"Test accuracy: {100 * correct / total:.2f}%")
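
One more thing worth ruling out: copy.deepcopy gives the copy its own parameter tensors, so nothing here should be shared with the original model's graph. A minimal sketch to confirm both that the copy is independent and that the perturbation actually landed (assuming the bnn and copied_bnn above):

name, p_copy = next((n, p) for n, p in copied_bnn.named_parameters() if 'mu' in n)
p_orig = bnn.state_dict()[name]

# deepcopy should give the copy its own storage
print("shared storage:", p_orig.data_ptr() == p_copy.data_ptr())
# 0.0 here would mean the perturbation loop never changed anything
print("max abs diff:", (p_copy.detach().cpu() - p_orig.detach().cpu()).abs().max().item())

If the diff prints 0.0, the 'mu' substring check may simply not be matching your parameter names, so the perturbation loop is being silently skipped.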