Why are optimizer state_dict params integers and not tensors?

I'm trying to remap a saved optimizer state to a new param_group and to check that the tensor shapes line up between the two optimizers, but I realized that the params are integers, not tensors. How should I think of this? The docs say `optimizer.state_dict()["param_groups"][*]["params"]` should be tensors.

```python
>>> torch.optim.SGD(nn.Linear(3, 2).parameters(), 0.1).state_dict()["param_groups"]
[{'lr': 0.1,
  'momentum': 0,
  'dampening': 0,
  'weight_decay': 0,
  'nesterov': False,
  'maximize': False,
  'foreach': None,
  'differentiable': False,
  'params': [0, 1]}]  # <-- ints ¯\_(ツ)_/¯
```

What is the recommended way to validate, when restoring an optimizer, that the parameters in param_group_orig match param_group_new (e.g. with different layerwise LRs)? I guess the momentum terms are linked somewhere behind the scenes as well?

From a cursory glance at the base Optimizer class, I think the explanation is that these integers are indices forming a mapping, rather than the raw tensors themselves: pytorch/optimizer.py at 18587cb31f484e4a9c4d4653c654ff27c71e9e2a · pytorch/pytorch · GitHub
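A quick way to see the mapping: the same integer indices key the per-param entries in `state_dict()["state"]`, and they correspond to the order in which the params appear across the param_groups. A minimal sketch:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Take one step so the optimizer populates its state (momentum buffers).
model(torch.randn(4, 3)).sum().backward()
opt.step()

sd = opt.state_dict()
print(sd["param_groups"][0]["params"])  # [0, 1]
print(list(sd["state"].keys()))         # [0, 1] -- same indices key the per-param state

# Index i refers to the i-th tensor in the flattened order of all param_groups,
# so index 0 is the weight and index 1 is the bias here.
params = [p for g in opt.param_groups for p in g["params"]]
print(sd["state"][0]["momentum_buffer"].shape, params[0].shape)  # both (2, 3)
```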

This seems to make sense, since the linear layer you defined has exactly two param tensors: a weight and a bias.
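For the validation question, one approach is to walk the saved per-param state, map each integer index back to the corresponding tensor in the freshly built optimizer, and compare shapes before calling `load_state_dict`. This is a sketch, not an official API; `check_compatible` is a hypothetical helper, and it assumes per-param state tensors (like momentum buffers) should match the new params' shapes:

```python
import torch
import torch.nn as nn

def check_compatible(saved_sd, new_opt):
    """Hypothetical helper: verify that a saved optimizer state_dict's
    per-param state tensors match the shapes of the params in new_opt."""
    # Flatten the new optimizer's params in the same index order used by state_dict.
    new_params = [p for g in new_opt.param_groups for p in g["params"]]
    for idx, state in saved_sd["state"].items():
        for name, value in state.items():
            # Skip scalar bookkeeping entries (e.g. Adam's "step").
            if torch.is_tensor(value) and value.dim() > 0 \
                    and value.shape != new_params[idx].shape:
                raise ValueError(
                    f"param {idx}: saved {name} has shape {tuple(value.shape)}, "
                    f"expected {tuple(new_params[idx].shape)}"
                )

# Build and step an optimizer so it has momentum state to save.
model = nn.Linear(3, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
model(torch.randn(4, 3)).sum().backward()
opt.step()
saved = opt.state_dict()

# New optimizer over a same-shaped model, different hyperparams.
new_opt = torch.optim.SGD(nn.Linear(3, 2).parameters(), lr=0.05, momentum=0.9)
check_compatible(saved, new_opt)  # passes: shapes line up
new_opt.load_state_dict(saved)
```

Note that `load_state_dict` itself only checks that the group sizes match; it does not compare tensor shapes, which is why a pre-check like this can catch a mismatched remap early.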