Is there any way to create a new variable that is a manipulation of V's coordinates (including operations such as modulo or floor division) which retains the computational graph, so we can backprop to V via the new variable?

The entries are shuffled, so the total number of entries is not reduced (the mapping is bijective) - but the shapes of the two variables are not the same.

At the moment, this can be done by simply reassigning the variable's entries one by one (iteration). In the backprop stage, the inverse of this process is applied to the activations and the gradients.

Is there any way of saving a computational graph of this process?

By “saving a computational graph” do you mean so that you can differentiate through the manipulation? If so, then yes. Use view to change the shape and indexing to manipulate the ordering.

import torch
x = torch.arange(16.).view(4, 4).requires_grad_()  # float dtype, since autograd needs floating point
indices = torch.randperm(16)
y = x.view(-1)[indices].view(8, 2)  # bijection with a different shape
grad_output = torch.randn(8, 2)
y.backward(grad_output)
print(grad_output)
print(x.grad)

view() unfortunately isn’t complex enough for this task…
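Note that only the gather itself needs to be differentiable - the index tensor carries no gradient, so you can build it with any integer arithmetic you like (modulo, floor division, etc.) and then index with it. A minimal sketch, assuming a 4x4 input and a transpose-like bijection of flat indices (the specific index formula is just an illustration):

```python
import torch

x = torch.arange(16.).view(4, 4).requires_grad_()

# Build a permutation of flat indices using floor division and modulo.
# No gradient flows through these integer ops, and none is needed.
idx = torch.arange(16)
rows, cols = idx // 4, idx % 4
perm = cols * 4 + rows  # transpose-like bijection of flat indices

# Indexing with the precomputed permutation stays differentiable w.r.t. x.
y = x.view(-1)[perm].view(8, 2)
y.sum().backward()
print(x.grad)  # all ones: each entry of x is used exactly once
```

So even when the coordinate manipulation is too complicated for view() alone, computing the indices up front and gathering once keeps the graph intact.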

Also: is it possible for view() to have functionality similar to np.reshape with the order argument? I know we can use permute, but it would be more convenient, e.g. Equivalent of np.reshape() in pyTorch?
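There is no order argument on view(), but a Fortran-order reshape can be emulated with permute plus reshape. A sketch (reshape_fortran is a hypothetical helper name, not a PyTorch function): reverse the dimension order, reshape to the reversed target shape, then reverse back.

```python
import torch

def reshape_fortran(x, shape):
    # Emulate np.reshape(x, shape, order='F'):
    # reverse dims, reshape (row-major) to the reversed target shape, reverse back.
    rev_dims = tuple(reversed(range(x.dim())))
    out = x.permute(*rev_dims).reshape(*reversed(shape))
    return out.permute(*reversed(range(len(shape))))

a = torch.arange(6.).view(2, 3)
b = reshape_fortran(a, (3, 2))
print(b)
```

Both permute and reshape are differentiable, so this also plays nicely with autograd; the only cost is that reshape may copy when the permuted tensor is non-contiguous.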