I want to do the same with a specific layer of MobileNetV3, but I struggle to find the right module name to apply my defined constraints to.
As an example:
This is the first Inverted Residual block from the MobileNetV3, I only want to set constraints for the Conv2D layer.
(1): InvertedResidual(
  (block): Sequential(
    (0): ConvNormActivation(
      (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=16, bias=False)
      (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
    )
    (1): ConvNormActivation(
      (0): Conv2d(16, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm2d(16, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
So basically, the "model._modules['l3'].apply()" from your example should say something like
model._modules['conv2d layer in (block)sequential in (0)ConvNormActivation'].apply().
I tried to use:
for name, module in model.named_modules():
    print(name)
to find the module name. But using the module name "features.1.block.0.0" of the weights I want to constrain (weight name: "features.1.block.0.0.weight") throws a KeyError:
model._modules["features.1.block.0.0"].apply()
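For completeness, this is a minimal sketch of what I am attempting. I am assuming torchvision's mobilenet_v3_large here, and the WeightClipper class and its bounds are just placeholders modelled on the clipper from your example:

import torch
import torchvision

# Placeholder constraint, modelled on the clipper from your example
class WeightClipper:
    def __init__(self, w_min, w_max):
        self.w_min = w_min
        self.w_max = w_max

    def __call__(self, module):
        if hasattr(module, "weight") and module.weight is not None:
            with torch.no_grad():
                module.weight.clamp_(self.w_min, self.w_max)

model = torchvision.models.mobilenet_v3_large()

# This is the call that fails with a KeyError -- presumably because _modules
# only holds the direct children ("features", "avgpool", "classifier"),
# not dotted sub-paths?
model._modules["features.1.block.0.0"].apply(WeightClipper(-0.5, 0.5))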
I don't want to constrain all weights in the model to the same bounds, since the value ranges differ quite a bit between layers; with a single shared bound I would cut off some weights after training.
For reference, I am injecting bit flips into the weights and therefore want to constrain the weights to their per-layer maximum and minimum values after training. That way, if a bit flip occurs, the value does not go beyond the max/min bound, and I can rely on the inherent fault resilience of DNN models to small value changes.
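To make the goal concrete, this is roughly what I want to end up with after training. The layer names come from named_modules(), but the bounds are only placeholders, and I am not sure whether looping over named_modules() and clamping in place like this is the right way to reach the nested Conv2d modules -- that is exactly my question:

# Placeholder per-layer bounds (would be taken from the observed weight ranges)
layer_bounds = {
    "features.1.block.0.0": (-0.4, 0.4),   # depthwise 3x3 Conv2d
    "features.1.block.1.0": (-0.8, 0.8),   # pointwise 1x1 Conv2d
}

for name, module in model.named_modules():
    if name in layer_bounds:
        w_min, w_max = layer_bounds[name]
        with torch.no_grad():
            module.weight.clamp_(w_min, w_max)  # hard-clip weights into [w_min, w_max]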
Any help would be greatly appreciated!