Adding the same parameter to the optimizer multiple times

Does it matter if you add the same parameter multiple times to the vector of parameters in an optimiser? Example:

#include <torch/torch.h>

// some_node is assumed to be a torch::nn module; its first parameter
// ends up in the optimizer's parameter list twice.
std::vector<torch::Tensor> differentiable_nodes;

differentiable_nodes.emplace_back(some_node.parameters()[0]);
differentiable_nodes.emplace_back(some_node.parameters()[0]);

torch::optim::SGD optimizer(differentiable_nodes, /*lr=*/0.001);

Does this in any way affect the training of these parameters? I want to avoid checking for duplicate nodes.

You should get a warning, as duplicated parameters will cause an error in the future:

import torch
import torch.nn as nn

x = nn.Parameter(torch.randn(1))
optimizer = torch.optim.SGD([x, x], lr=1.)
> UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information

I’m not getting this warning in LibTorch 1.8. Since it does give a warning in Python, I should probably avoid duplicate parameters anyway.

I’m not sure how these warnings are surfaced in libtorch, but I agree that you should avoid duplicates, as you would run into errors in future releases.
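If you want to avoid manual bookkeeping, you could filter the list once before constructing the optimizer. This is only a minimal sketch (the dedup_parameters helper is my own name, not part of libtorch); it assumes that two entries count as the same parameter when they share the same underlying TensorImpl:

#include <torch/torch.h>
#include <unordered_set>
#include <vector>

// Hypothetical helper: keep only the first occurrence of each tensor,
// identified by the address of its underlying TensorImpl.
std::vector<torch::Tensor> dedup_parameters(const std::vector<torch::Tensor>& params) {
  std::unordered_set<const void*> seen;
  std::vector<torch::Tensor> unique;
  unique.reserve(params.size());
  for (const auto& p : params) {
    // insert().second is true only the first time this TensorImpl is seen
    if (seen.insert(p.unsafeGetTensorImpl()).second) {
      unique.push_back(p);
    }
  }
  return unique;
}

Then you would construct the optimizer from the filtered list, e.g. torch::optim::SGD optimizer(dedup_parameters(differentiable_nodes), /*lr=*/0.001);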