In my code, I need to use a stack operation as follows:
mmtx1 = torch.stack([mtx1]*m)
where mtx1 is a torch.autograd.Variable with shape (n, d).
The loss is calculated using mmtx1. My question is: will this operation affect the autograd process, given that mtx1 is duplicated m times?
That depends on what you mean by “influence”. Autograd is certainly affected by stack in the sense that it becomes part of the computation graph, but the gradients will be correct: the gradient flowing into each of the m copies is accumulated (summed) back into mtx1.
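Here is a minimal sketch illustrating this, with hypothetical sizes n=3, d=2, m=4 (and a toy sum loss standing in for your actual loss). In current PyTorch a tensor with requires_grad=True plays the role of the torch.autograd.Variable in your question:

import torch

# Hypothetical sizes for illustration: n rows, d columns, m stacked copies.
n, d, m = 3, 2, 4

# Variable is deprecated in current PyTorch; a tensor with requires_grad=True
# plays the same role as the torch.autograd.Variable in the question.
mtx1 = torch.randn(n, d, requires_grad=True)

# Duplicate mtx1 m times along a new leading dimension -> shape (m, n, d).
mmtx1 = torch.stack([mtx1] * m)

# Toy loss built from the stacked tensor (stand-in for the real loss).
loss = mmtx1.sum()
loss.backward()

# Each of the m copies receives a gradient of ones, and autograd sums them
# back into mtx1, so mtx1.grad equals m * ones(n, d).
print(mtx1.grad)  # tensor filled with 4.0, shape (3, 2)

So the duplication does not break anything; it only means each copy's gradient contribution is added together when it reaches mtx1.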