Hi, during the training of a ResNet model I modify the BatchNorm affine parameters just before calling loss.backward(), as below:
i = 0
for layer in model.modules():
    if isinstance(layer, nn.BatchNorm2d):
        layer.bias = nn.Parameter(value_1)
        layer.weight = nn.Parameter(value_2)
        i += 1
I noticed that layer.bias and layer.weight do not get updated during back-propagation: they remain at 0 and 1 respectively, even though requires_grad is True and the appropriate gradient tensors are produced. Could these assignments be causing the problem? If so, is there an alternative approach?
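For reference, here is a minimal, self-contained sketch of what I am doing. The model, optimizer, and the constant tensors standing in for value_1 / value_2 are placeholders, not my actual setup; the point is that the reassignment happens after the optimizer was constructed:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.BatchNorm2d(4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Reassign the affine parameters AFTER the optimizer was built,
# mirroring the loop above (zeros/ones stand in for value_1 / value_2).
for layer in model.modules():
    if isinstance(layer, nn.BatchNorm2d):
        layer.bias = nn.Parameter(torch.zeros(layer.num_features))
        layer.weight = nn.Parameter(torch.ones(layer.num_features))

x = torch.randn(2, 3, 8, 8)
loss = model(x).sum()
opt.zero_grad()
loss.backward()
opt.step()

bn = model[1]
print(bn.weight.grad is not None)   # gradients ARE produced on the new params
print(bn.weight.data)               # ...yet they stay at their initial values
```

Running this reproduces what I see: the newly assigned parameters receive gradients but are never changed by opt.step(), presumably because the optimizer still holds references to the old Parameter objects from when it was constructed.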