It seems all the answers I can find on this subject say not to transfer weights between models with different shapes, but I would like to try.
In my case the result will either be usable or not, but it feels worth trying, and in any event I might learn something.
Let’s say I have two RRDBNet models, one with 32 feature channels and the other with 64, all else identical:
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet

model32 = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=32, num_block=23, num_grow_ch=32)
model64 = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32)

loadnet32 = torch.load(model_path32, map_location=torch.device('cpu'))
loadnet64 = torch.load(model_path64, map_location=torch.device('cpu'))

model32.load_state_dict(loadnet32["params_ema"], strict=False)
model64.load_state_dict(loadnet64["params_ema"], strict=True)
for n, p in model64.named_parameters():
    # ...interpolate the weights down to model32's shapes, or take every other value?
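Not an authoritative answer, just a sketch of one way to attempt this: since only num_feat differs, the two state dicts share the same parameter names, and only the channel dimensions of some tensors differ. So you can build model32's state dict by shrinking model64's tensors along any mismatched dims, either keeping the first k entries ("slice") or every other entry ("stride"). The helper names `shrink_to` and `transfer` below are my own, and plain nn.Conv2d layers stand in for the RRDBNet models:

```python
import torch
import torch.nn as nn

def shrink_to(src: torch.Tensor, target_shape: torch.Size, mode: str = "slice") -> torch.Tensor:
    """Reduce src to target_shape along every mismatched dimension.
    mode="slice" keeps the first k entries per dim; mode="stride" keeps
    every other entry (only meaningful when the dim is exactly halved)."""
    out = src
    for dim, (s, t) in enumerate(zip(src.shape, target_shape)):
        if s == t:
            continue
        if mode == "stride" and s == 2 * t:
            idx = torch.arange(0, s, 2)   # every other channel
        else:
            idx = torch.arange(t)         # first t channels
        out = out.index_select(dim, idx)
    return out.clone()

def transfer(small_model: nn.Module, big_state: dict, mode: str = "slice") -> None:
    """Load big_state into small_model, shrinking mismatched tensors in place."""
    new_state = {}
    for name, p_small in small_model.state_dict().items():
        p_big = big_state.get(name)
        if p_big is None:
            new_state[name] = p_small     # no counterpart; keep existing init
        else:
            new_state[name] = shrink_to(p_big, p_small.shape, mode)
    small_model.load_state_dict(new_state)

# Stand-ins for the 64- and 32-feature models:
big = nn.Conv2d(3, 64, 3)
small = nn.Conv2d(3, 32, 3)
transfer(small, big.state_dict())
```

True interpolation (e.g. resampling along the output-channel dim) is also possible, but the channel order of a conv layer is arbitrary, so slicing or stride-sampling is arguably just as principled; either way, expect to fine-tune afterwards. Note that `shrink_to` handles every mismatched dim, which matters because a layer fed by a shrunken layer must also lose input channels, not just output channels.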