Hi,
I am not sure I follow what you want to do here.
As I understand it, you want this to not change the net's output at all.
But if you change the value of the activations, it will change the value of the next layer, because it is very unlikely that the next layer treats A and B in exactly the same way.
And if you change the value of the weights, the output will change as well, since it is very unlikely that the inputs to A and B are the same.
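For example (a small self-contained sketch with a made-up two-layer net and arbitrary indices for A and B, just to illustrate the point), halving A's weights and copying them into B changes the final output, because the second layer weighs the two units differently:

import torch
import torch.nn as nn

torch.manual_seed(0)
layer1 = nn.Linear(4, 3)
layer2 = nn.Linear(3, 2)
x = torch.randn(1, 4)

out_before = layer2(layer1(x))

A, B = 0, 1  # arbitrary "alive" and "dead" unit indices for this demo
with torch.no_grad():
    # halve unit A in the first layer and copy it into unit B
    layer1.weight[A] /= 2
    layer1.weight[B] = layer1.weight[A]
    layer1.bias[A] /= 2
    layer1.bias[B] = layer1.bias[A]

out_after = layer2(layer1(x))
print(torch.allclose(out_before, out_after))  # almost certainly False: the output changed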
But if what you want to do is just update the weights of a single layer (let's say a Linear), you can simply do:
with torch.no_grad():
    # halve the weights of the alive unit and copy them into the dead one
    layer.weight[alive_idx] /= 2
    layer.weight[dead_idx] = layer.weight[alive_idx]
    # do the same for the bias
    layer.bias[alive_idx] /= 2
    layer.bias[dead_idx] = layer.bias[alive_idx]
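The torch.no_grad() is important here: without it, these in-place modifications of leaf Parameters that require grad would raise an error, and you don't want autograd to record them anyway.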