Transfer learning from checkpoint - reset Optimizer exp_avg

I am trying to load a checkpoint and do transfer learning on image sets with a different number of input channels. The original model was trained on 2-channel images, but I need to train it on single-channel ones. I was able to swap out the input layer as follows:

checkpoint = torch.load(model_weights)
# re-initialize the input conv for 1 channel instead of 2
input_conv_layer = torch.nn.Conv3d(1, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1))
torch.nn.init.xavier_uniform_(input_conv_layer.weight)
# overwrite the pretrained 2-channel weight before loading the state dict
checkpoint['state_dict']['Fpn.C0.0.0.0.weight'] = input_conv_layer.weight
net.load_state_dict(checkpoint['state_dict'], strict=match_required)
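
A quick shape check confirms the swap (the expected shape is my own inference from the 2-to-1 channel change and the 3x3x3 kernel):

print(checkpoint['state_dict']['Fpn.C0.0.0.0.weight'].shape)  # torch.Size([32, 1, 3, 3, 3])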

However, when I begin training, the optimizer is trying to use the old state:

checkpoint['optimizer']['state'][0]['exp_avg'] and checkpoint['optimizer']['state'][0]['exp_avg_sq']

both still have 2 input channels. What is the correct way to update the optimizer state so that it works with the new input_conv_layer?
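
One idea I have considered, but have not verified, is to drop just the stale per-parameter entry from the saved optimizer state before loading it, on the assumption that Adam lazily re-creates zeroed moments for a parameter it has no saved state for (the index 0 is also my guess for where the input conv sits in the parameter list):

# guess: remove the stale moments so Adam re-initializes them on the first step
checkpoint['optimizer']['state'].pop(0)  # the exp_avg / exp_avg_sq with 2 input channels
optimizer = torch.optim.Adam(net.parameters(), lr=1e-5)
optimizer.load_state_dict(checkpoint['optimizer'])
# note: load_state_dict also restores the saved lr/betas over the ones passed above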

I don't mind resetting the optimizer, but it would be interesting to know how to keep optimizing without a full reset. If I simply define a new optimizer:

optimizer = torch.optim.Adam(net.parameters(), lr=1e-5)

the result is a fresh Optimizer object with empty state; it is not a dictionary and doesn't match what is stored in

checkpoint['optimizer']
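
If instead I load the saved state wholesale into the fresh optimizer, I believe this is where the stale shapes would bite, though I am unsure whether the mismatch surfaces at load time or only at the first step (a sketch, assuming the checkpoint layout above):

optimizer = torch.optim.Adam(net.parameters(), lr=1e-5)
print(optimizer.state_dict()['state'])              # {} - a fresh Adam has no per-parameter state yet
optimizer.load_state_dict(checkpoint['optimizer'])  # pulls the saved exp_avg / exp_avg_sq back in
# in the training loop, loss.backward(); optimizer.step() then hits the 2-vs-1 channel shape mismatch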

P.S.: What is the proper terminology for the two operations, checkpoint = torch.load(path) vs. net.load_state_dict(checkpoint['state_dict'])?