Hi all,

I'm not sure I can formulate my problem in the right way, as I may not be seeing the actual problem.

Here is what I am trying to do:

- Network A generates N numbers as output, in an N×1 tensor t.
- I use the elements of t to create a matrix M (a new tensor), setting some of the matrix entries to elements of t.
- Then I generate a sampling grid g' using torch.matmul(M, g), where g is just the identity grid mapping (i, j) to (i, j).
- Then I use g' with torch.nn.functional.grid_sample() to resample an image.
- I use the resampled image and the input image to compute a loss.
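To make this concrete, here is a minimal, self-contained sketch of the setup. The shapes, the dummy image, and the use of F.affine_grid to produce the grid are my simplifications (in my real code M is assembled entry by entry from t, and g is built by hand), but the gradient flow I expect is the same:

```python
import torch
import torch.nn.functional as F

# Stand-in for Network A's output: N = 6 numbers, interpreted here as
# the entries of a 2x3 affine matrix M (hypothetical shapes).
t = torch.randn(6, requires_grad=True)

# Build M from t with a differentiable op (reshape), so autograd can
# trace M back to t.
M = t.reshape(1, 2, 3)

img = torch.rand(1, 1, 8, 8)                           # dummy input image
g = F.affine_grid(M, list(img.shape), align_corners=False)  # sampling grid g'
resampled = F.grid_sample(img, g, align_corners=False)

loss = (resampled - img).pow(2).mean()
loss.backward()

print(t.grad is not None)  # True -> gradients reached t in this sketch
```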

My naive expectation was that the backward pass starting at the loss would compute gradients for the entries of t, and that optimizer.step() would then update them.

However, that is not happening; t is not changing at all, while other parameters are.
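One thing I already suspect is the step where M is built from t: as far as I understand, any construction that goes through .item(), .detach(), NumPy, or torch.tensor(...) silently cuts the autograd graph, while torch.stack/torch.cat (or in-place assignment of t's elements into a fresh tensor) keeps it. A quick check I used to compare the two (hypothetical shapes):

```python
import torch

t = torch.randn(6, requires_grad=True)   # stand-in for Network A's output

# This cuts the graph: .item() returns a plain Python float, and
# torch.tensor() creates a brand-new leaf with no history.
M_bad = torch.tensor([t[0].item(), t[1].item(), 0.0])
print(M_bad.requires_grad)               # False -> backward() cannot reach t

# Building M from t with differentiable ops keeps the graph intact:
M_good = torch.stack([t[0], t[1], torch.zeros(())])
print(M_good.requires_grad)              # True

# In-place assignment of t's elements into a zeros tensor also works:
M2 = torch.zeros(3)
M2[0] = t[0]
print(M2.requires_grad)                  # True
```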

Can anyone help me find where I am making the mistake?

Thanks a lot!