# Manipulating Variable Coordinates in a Computational Graph

Suppose I have a multidimensional variable V.

Is there any way of creating a new variable which is a manipulation of V’s coordinates (including operations such as modulo or floor division) - which retains the computational graph, so we can backprop to V via the new variable?

those discrete operations are not differentiable…

It’s a manipulation of the coordinates.

They are shuffled, so the total number of entries is preserved (the map is bijective) - but the shapes of the two variables are not the same.

At the moment, this is done by simply reassigning the variable's entries one by one (iteration). In the backprop stage, the inverse of this process is applied to the activations and the gradients.
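For reference, the manual inversion described above amounts to building the inverse permutation of the coordinate shuffle - a minimal sketch (the names and the random permutation here are just illustrative):

```python
import torch

indices = torch.randperm(16)          # forward shuffle of the flat coordinates
inverse = torch.empty_like(indices)
inverse[indices] = torch.arange(16)   # inverse[indices[i]] == i

x = torch.randn(16)
assert torch.equal(x[indices][inverse], x)  # round trip restores the original order
```

In the manual scheme, `inverse` is what gets applied to the incoming gradients during backprop.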

Is there any way of saving a computational graph of this process?

By “saving a computational graph” do you mean so that you can differentiate through the manipulation? If so, then yes. Use `view` to change the shape and indexing to manipulate the ordering.

```python
import torch

x = torch.randn(4, 4, requires_grad=True)  # the original variable V
indices = torch.randperm(16)               # a bijective shuffle of the flat coordinates
y = x.view(-1)[indices].view(8, 2)         # bijection with a different shape
grad_output = torch.randn(8, 2)
y.backward(grad_output)                    # gradients flow back through the indexing to x
```
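The same works when the index permutation itself is computed with modulo or floor division: those operations run on an integer index tensor outside the graph, so they never need to be differentiable - only the gather through the indices is tracked by autograd. A minimal sketch, where the particular transpose-style index map is just an illustration:

```python
import torch

V = torch.randn(4, 6, requires_grad=True)
n = V.numel()

# Build a bijective coordinate shuffle using floor division and modulo.
# This particular map reorders the flat tensor column-major (illustrative only).
flat = torch.arange(n)
indices = (flat % 4) * 6 + flat // 4

W = V.view(-1)[indices].view(6, 4)  # new variable, different shape, same entries
W.sum().backward()                  # gradients flow back to V through the indexing
assert torch.equal(V.grad, torch.ones_like(V))
```

Since the map is bijective, every entry of `V` receives exactly one gradient contribution from `W`.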