Future of References to Module Parameters

In the following code snippet, the two assertions hold:

import torch
import torch.nn as nn

conv = nn.Conv2d(8, 8, 3)
d = {k: v for k, v in conv.named_parameters()}

# torch.__future__.set_overwrite_module_params_on_conversion(True)
conv.double()  # by default, conversion mutates the parameters in place
assert all(d[k] is v for k, v in conv.named_parameters())
assert all(d[k] is v for k, v in conv.named_parameters())

However, if the commented line is uncommented, they fail. Such a change would clearly have a significant impact on a lot of existing PyTorch code, e.g., any code that moves or converts a model after creating an optimizer. More generally, storing references to parameters is often much cheaper than recursively traversing a module, which gives rise to another use case:

import copy
import torch.nn as nn

conv = nn.Conv2d(8, 8, 3)

# Build a deeply nested tree of ~15k conv modules.
seq = nn.Sequential(*[copy.deepcopy(conv) for _ in range(5)])
for _ in range(5):
    seq = nn.Sequential(*[copy.deepcopy(seq) for _ in range(5)])

d = {k: v for k, v in seq.named_parameters()}

# Iterating over cached references vs. traversing the module tree:
%timeit list(d.items())
%timeit list(seq.named_parameters())

%timeit list(d.values())
%timeit list(seq.parameters())
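The optimizer case mentioned above relies on the same identity guarantee. A minimal sketch with a toy model (assuming the default, i.e., the flag left unset):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Converting after optimizer creation: with the default in-place
# behaviour, the optimizer still holds the live parameter objects.
model.double()
assert all(
    p is q for p, q in zip(opt.param_groups[0]["params"], model.parameters())
)
```

If the flag were enabled, the parameters would be replaced by new objects and the optimizer would be left stepping stale tensors.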

There doesn’t seem to be much public information on when this change will be rolled out, despite the fact that it has been merged into master for quite some time. Can the PyTorch team provide any insights into the rollout plan?

To the best of my knowledge, the advice has always been to move your model first, as the preservation of parameter identity during conversion has been more an implementation detail than part of the design.
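For reference, that ordering looks something like this (a sketch with a toy model):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)
model.to(torch.float64)  # move/convert the model first...
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # ...then build the optimizer

# The optimizer now references the final parameter objects regardless of
# whether conversion overwrites parameters or mutates them in place.
assert all(
    p is q for p, q in zip(opt.param_groups[0]["params"], model.parameters())
)
```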

It would seem that in most use cases, traversing the modules during setup is cheap compared to running or training the model.

Best regards


My apologies, I wasn’t entirely clear about my use case. I would like to be able to stably store references to parameters rather than recursively iterate over the entire set of parameters, and this will happen frequently during training, not just at initialization.
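Concretely, the pattern looks something like this (a sketch; the per-step computation and names are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(50)])

# Cache the references once; this stays valid only as long as
# conversions keep mutating parameters in place.
params = list(model.parameters())

def global_param_norm(ps):
    # Example of something computed every step from all parameters.
    return torch.sqrt(sum(p.pow(2).sum() for p in ps))

# Reuses the cached list instead of re-traversing the module tree
# on every call.
norm = global_param_norm(params)
```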