Manipulating Variable Coordinates in a Computational Graph

Suppose I have a multidimensional variable V.

Is there any way of creating a new variable that is a manipulation of V’s coordinates (including operations such as modulo or floor division) and that retains the computational graph, so we can backprop to V via the new variable?

Those discrete operations are not differentiable…

It’s a manipulation of the coordinates.

They are shuffled, so that the total number of entries is not reduced (the mapping is bijective) - but the shapes of the two variables are not the same.

At the moment, this can be done by simply reassigning the variable’s entries one by one (iteration). In the backprop stage, the inverse of this process is applied to the activations and the gradients.
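A minimal sketch of that manual approach, written as a custom torch.autograd.Function (the class name Shuffle and the shapes are illustrative, not part of any library):

import torch

class Shuffle(torch.autograd.Function):
    # Forward: apply a fixed permutation to the flattened entries and reshape.
    # Backward: route the gradient back through the inverse permutation.
    @staticmethod
    def forward(ctx, x, indices, out_shape):
        ctx.indices = indices
        ctx.in_shape = x.shape
        return x.reshape(-1)[indices].reshape(out_shape)

    @staticmethod
    def backward(ctx, grad_out):
        inverse = torch.empty_like(ctx.indices)
        inverse[ctx.indices] = torch.arange(ctx.indices.numel())
        grad_x = grad_out.reshape(-1)[inverse].reshape(ctx.in_shape)
        return grad_x, None, None

x = torch.arange(16.).view(4, 4).requires_grad_()
y = Shuffle.apply(x, torch.randperm(16), (8, 2))
y.sum().backward()
print(x.grad)  # all ones: every entry of x contributes exactly once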

Is there any way of saving a computational graph of this process?

By “saving a computational graph” do you mean so that you can differentiate through the manipulation? If so, then yes. Use view to change the shape and indexing to manipulate the ordering.

import torch
x = torch.arange(16.).view(4, 4).requires_grad_()  # float dtype, so it can require grad
indices = torch.randperm(16)                       # random permutation of the 16 entries
y = x.view(-1)[indices].view(8, 2)                 # bijection with a different shape
grad_output = torch.randn(8, 2)
y.backward(grad_output)

print(grad_output)
print(x.grad)
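As a quick sanity check (using the snippet above), the gradient is just grad_output scattered back through the inverse of the permutation:

# each entry of grad_output lands at the position of x that produced
# the corresponding entry of y
assert torch.equal(x.grad.view(-1)[indices], grad_output.view(-1))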

view() unfortunately isn’t complex enough for this task…

Also: is it possible for view() to offer functionality similar to np.reshape with the order argument? I know we can use permute, but it would be more convenient, e.g. Equivalent of np.reshape() in pyTorch?
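For the 2-D case, a Fortran-order reshape can be emulated by transposing, reshaping to the reversed target shape, and transposing back (a sketch; PyTorch has no order argument):

import torch

a = torch.arange(6.).view(2, 3)
# equivalent of np.reshape(a.numpy(), (3, 2), order='F'):
b = a.t().contiguous().view(2, 3).t()
print(b)  # tensor([[0., 4.], [3., 2.], [1., 5.]])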

What do you mean by “view isn’t complex enough”?

You can also just use torch.take. The output will have the same shape as the indices.
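A minimal sketch of the torch.take route (the shapes are illustrative): torch.take treats its input as flattened, returns an output shaped like the index tensor, and is differentiable, so gradients flow back through the permutation.

import torch

x = torch.arange(16.).view(4, 4).requires_grad_()
indices = torch.randperm(16).view(8, 2)  # index tensor already in the target shape
y = torch.take(x, indices)               # y has shape (8, 2), same as indices
y.sum().backward()
print(x.grad)                            # all ones: gradients route back through the permutation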