I get an error when I try to replace some values of a tensor via indexing. I am simply using a torch.Tensor
(dtype=float64) and trying to set some values in it. I also tried the scatter_
function, but that did not work either.
A simplified version of my code:
import torch
# initialize tensor
tensor = torch.zeros((1, 400, 400)).double()
tensor.requires_grad_(True)
# create index ranges
x_range = torch.arange(150, 250).double()
x_range.requires_grad_(True)
y_range = torch.arange(150, 250).double()
y_range.requires_grad_(True)
# get indices of flattened tensor
x_range = x_range.long().repeat(100, 1)
y_range = y_range.long().repeat(100, 1)
y_range = y_range.t()
tensor_size = tensor.size()
indices = y_range.sub(1).mul(tensor_size[2]).add(x_range).view((1, -1))
# create patch
patch = torch.ones((1, 100, 100)).double()
# flatten tensor
tensor_flattened = tensor.contiguous().view((1, -1))
# set patch at indices of tensor_flattened and reshape tensor
tensor_flattened.scatter_(1, indices, patch.view(1, -1))
tensor = tensor_flattened.view(tensor_size)
# sum up for scalar output for calling backward()
tensor_sum = tensor.sum()
# calling backward()
tensor_sum.backward()
# alternative to avoid summing tensor:
tensor.backward(torch.ones_like(tensor))
The error traceback is:
File "/home/students/schock/deep_aam_pytorch/testing_tensor.py", line 32, in <module>
    tensor.backward(torch.ones_like(tensor))
File "/home/students/schock/miniconda3/envs/deep_aam/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/students/schock/miniconda3/envs/deep_aam/lib/python3.6/site-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: leaf variable has been moved into the graph interior

Process finished with exit code 1
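For reference, the failure seems reproducible with a much smaller snippet. My assumption is that any in-place operation on a leaf tensor (or a view of one) that requires grad trips the same autograd restriction:

```python
import torch

# Assumption: the error stems from mutating a leaf tensor that requires
# grad. Recent PyTorch versions reject the in-place op immediately with
# a RuntimeError instead of failing later at backward().
t = torch.zeros(3, dtype=torch.float64, requires_grad=True)  # leaf variable
try:
    t.add_(1.0)  # in-place modification of the leaf
except RuntimeError as e:
    print("RuntimeError:", e)
```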
Any idea where this comes from?
Thanks in advance!
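Edit: in case it is useful context, one alternative I have been considering (not sure it is the right pattern) is to make only the patch a leaf that requires grad and write it into a plain buffer with slice indexing, so no in-place operation ever touches a leaf that requires grad:

```python
import torch

# Sketch of a possible workaround (assumption: only the patch needs
# gradients). The 400x400 buffer stays a plain tensor without
# requires_grad, so the in-place slice assignment is applied to a
# non-requiring-grad leaf, which autograd allows.
patch = torch.ones((1, 100, 100), dtype=torch.float64, requires_grad=True)
canvas = torch.zeros((1, 400, 400), dtype=torch.float64)
canvas[:, 150:250, 150:250] = patch  # slice assignment instead of scatter_
canvas.sum().backward()
print(patch.grad.shape)  # gradients flow back to the patch
```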