Accessing model's parameters by params key

Hello,
I created an optimizer like so

import torch
from torch.nn import Sequential, Linear, ReLU, Dropout

# in_features is defined elsewhere
model = Sequential(
        Linear(in_features, 64),
        ReLU(),
        Dropout(p=.2),
        Linear(64, 32),
        ReLU(),
        Dropout(p=.2),
        Linear(32, 3),
        ReLU(),
        Linear(3, 1))

opt = torch.optim.SGD(
    [
        {'params': model[0].parameters(), 'lr': 0.1},
        {'params': model[1:4].parameters(), 'lr': 0.01},
        {'params': model[4:].parameters(), 'lr': 0.001},
    ])

After serializing it, opt.state_dict() looks like this:

{'state': {},
 'param_groups': [{'lr': 0.1,
   'momentum': 0,
   'dampening': 0,
   'weight_decay': 0,
   'nesterov': False,
   'params': [139865343712712, 139865343712928]},
  {'lr': 0.01,
   'momentum': 0,
   'dampening': 0,
   'weight_decay': 0,
   'nesterov': False,
   'params': [139865343712640, 139865343713072]},
  {'lr': 0.001,
   'momentum': 0,
   'dampening': 0,
   'weight_decay': 0,
   'nesterov': False,
   'params': [139865343713144,
    139865343713216,
    139865343709616,
    139865343710048]}]}

Which arguments should I pass to the SGD constructor to recreate the optimizer, given the model and the JSON above? I think that if I could access the model’s parameters by the keys in params, such as 139865343712712, I could recreate the initial input, but I haven’t been able to do that.

I’m really having a hard time understanding the Optimizer class source code, so any help would be greatly appreciated.

Thanks

I’m not sure I understand the use case completely.
If you want to restore the optimizer, you could store its state_dict and load it afterwards.
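A minimal sketch of that workflow (the file name opt.pt is just a placeholder):

torch.save(opt.state_dict(), 'opt.pt')

# later, after recreating the optimizer with the same parameter groups:
opt.load_state_dict(torch.load('opt.pt'))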
To get the id of a parameter, you could use print(id(model.layer.weight)); however, I’m not sure what you would like to do with it.
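For example, assuming the values stored under 'params' really are Python id()s of the parameter tensors (they look like it in your output; if I remember correctly, more recent PyTorch versions store plain integer indices instead), you could build a lookup table from id to parameter:

# map the id of each parameter tensor back to the tensor itself
id_to_param = {id(p): p for p in model.parameters()}

# e.g. resolve the first saved param_group back to actual parameters
group = opt.state_dict()['param_groups'][0]
params = [id_to_param[pid] for pid in group['params']]

Keep in mind that id() values are only meaningful inside the process that produced them, so the lookup has to be built against the very model instance whose parameters were passed to the optimizer.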

Thanks for your answer. The context here is that I’m serializing the model and optimizer and deserializing them in another context where I don’t know the model’s architecture or the initial call to SGD’s constructor.

Since I’m already saving model.state_dict() entirely, it seems that I do not need to save opt.state_dict() in full, but only its param_groups.
For example, when there is only one param_group, SGD([*model.parameters()], **opt.state_dict()['param_groups'][0]) reconstructs the initial optimizer nicely, and I am trying to generalize this to more than one param_group.

A Sequential model does not have a layer attribute, and I’m not sure what you meant even after trying different calls. But yes, what I need is a function that takes the id of a parameter and returns that parameter. Then I would be able to call the optimizer’s constructor the same way I do initially, with a list of dictionaries.
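To make this concrete, here is a rough sketch of the generalization I have in mind (a sketch only; it assumes the saved ids can still be resolved against the model instance at hand, via a dict mapping id(p) to p):

# map parameter ids back to the parameter tensors themselves
id_to_param = {id(p): p for p in model.parameters()}

# rebuild one dict per saved param_group, swapping the stored ids
# back for the actual parameter tensors
groups = []
for g in opt.state_dict()['param_groups']:
    group = {k: v for k, v in g.items() if k != 'params'}
    group['params'] = [id_to_param[pid] for pid in g['params']]
    groups.append(group)

new_opt = torch.optim.SGD(groups)

Since every group dict carries its own lr, momentum, etc., I would expect SGD to accept the list without any further arguments, but I haven’t verified this across versions.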