Modifying a Conv2d's input channels works by replacing the layer with a new Conv2d, but not by setting model.layer.in_channels = 12

I have a model network and its parameters. I modified the parameters and then went to change the input channels of a Conv2d layer. At first I used model.layer.in_channels = 12 and checked with model.modules, which showed the Conv2d's input channels had been modified (to 12, as assigned). But calling model.load_state_dict(parameters) raised an error: the parameter's shape is (16, 12, 3, 3) while the model's shape is (16, 3, 3, 3). However, replacing the original layer with model.layer = nn.Conv2d(...) works fine.

Why is model.layer.in_channels = 12 not working, while the commented-out code is? Thanks for any explanation.

The code looks like:

m = Model()
print(m.features[0][0])            # original layer: Conv2d(3, 16, ...)
m.features[0][0].in_channels = 12  # only changes the attribute
print(m.features[0][0])            # now prints Conv2d(12, 16, ...)
print(m.modules)                   # prints the bound method, including the model repr
# Replacing the layer instead works fine:
# new_layer = nn.Conv2d(12, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
# m.features[0][0] = new_layer
m.load_state_dict(mv)              # mv is the modified state dict; this raises a size mismatch

The output looks like:

Conv2d(3, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Conv2d(12, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
<bound method Module.modules of MobileNetV3LargeEncoder(
  (features): Sequential(
    (0): ConvBNActivation(
      (0): Conv2d(12, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
...
ERROR: size mismatch for features.0.0.weight: copying a param with shape torch.Size([16, 12, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 3, 3, 3]).

Changing the internal in_channels attribute won't reinitialize the parameters: it is just metadata (used, e.g., by the module's __repr__), so the existing weight tensor keeps its torch.Size([16, 3, 3, 3]) shape, and that actual shape is what load_state_dict checks against. Creating a new module or reassigning new nn.Parameters should work.
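Here is a minimal sketch of both fixes, using a standalone Conv2d for illustration rather than your full model:

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1, bias=False)

# Assigning the attribute only updates bookkeeping metadata;
# the weight tensor keeps its original shape.
conv.in_channels = 12
print(conv)               # Conv2d(12, 16, ...) -- a misleading repr
print(conv.weight.shape)  # still torch.Size([16, 3, 3, 3])

# Fix 1: replace the whole module; the new layer allocates a matching weight
fixed = nn.Conv2d(12, 16, kernel_size=3, stride=2, padding=1, bias=False)
print(fixed.weight.shape)  # torch.Size([16, 12, 3, 3])

# Fix 2: keep the module but reassign the parameter with the right shape
conv.weight = nn.Parameter(torch.randn(16, 12, 3, 3))
print(conv.weight.shape)   # torch.Size([16, 12, 3, 3])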

Thank you very much!

"reassigning new nn.Parameters should work"

I used nn.Parameter() to reinitialize the parameters, and it works like this:

m.features[0][0].in_channels = 12                                  # update the metadata
m.features[0][0].weight = nn.Parameter(mv['features.0.0.weight'])  # swap in the (16, 12, 3, 3) weight
m.load_state_dict(mv)                                              # shapes now match
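For what it's worth, replacing the whole layer (the commented-out approach in your first snippet) is usually the cleaner fix: a fresh nn.Conv2d keeps in_channels and the weight shape consistent automatically, whereas setting the attribute and reassigning the weight must each be done by hand.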