TorchRL duplicates model weights in LossModule's functional parameters

The TorchRL LossModule.convert_to_functional(...) method creates a deep copy of the parameters. If I understand correctly, this means the parameters are duplicated in memory, giving a larger memory footprint than necessary. Is my understanding correct? If so, why is this necessary? Is there any way for the LossModule to simply hold a single reference to the weights of, e.g., its actor and critic modules?
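
For concreteness, here is a minimal plain-PyTorch sketch of the distinction I mean (not TorchRL internals; the module and variable names are made up for illustration): after a deepcopy, the two sets of parameters no longer share storage, so every weight exists twice in memory.

import copy
from torch import nn

actor = nn.Linear(64, 64)

# Holding a reference: both names point to the same storage, no extra memory.
by_reference = actor
assert by_reference.weight.data_ptr() == actor.weight.data_ptr()

# Deep copy: every parameter gets its own storage, doubling the footprint,
# and updates to one copy are no longer seen by the other.
by_copy = copy.deepcopy(actor)
assert by_copy.weight.data_ptr() != actor.weight.data_ptr()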

This deepcopy'ing of parameters happens in rl/torchrl/objectives/common.py.

This is the specific snippet containing the deep copy that the question refers to:

# set the functional module: we need to convert the params to non-differentiable params
# otherwise they will appear twice in parameters
with params.apply(
    self._make_meta_params, device=torch.device("meta")
).to_module(module):
    # avoid buffers and params being exposed
    self.__dict__[module_name] = deepcopy(module)
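
For reference, here is a rough, self-contained sketch (plain PyTorch, simplified from the snippet above, not the actual TorchRL code path) of what I understand the "meta" device cast to do: a module whose parameters live on the meta device carries shapes and dtypes but no backing storage, so deep-copying it does not copy any real weight data.

import copy
import torch
from torch import nn

# A real module with real weight storage.
module = nn.Linear(1024, 1024)

# A structurally identical module built on the "meta" device: its parameters
# describe shape/dtype only, with no backing storage.
# (torch.device as a context manager requires PyTorch >= 2.0.)
with torch.device("meta"):
    meta_module = nn.Linear(1024, 1024)

copied = copy.deepcopy(meta_module)
print(module.weight.is_meta)  # False: the real weights live only here
print(copied.weight.is_meta)  # True: the copy holds no actual weight data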

Thanks for posting this!
Please see the answer on GitHub.