I am testing the difference between `expand()` and `repeat()`. The backward pass raises an error, and I have trouble understanding the error message.
```python
import torch
import torch.nn as nn

a = nn.Parameter(torch.ones(3, 1))  # requires_grad=True by default for Parameter
b = a.expand(3, 4)
c = a.repeat(1, 4)
print(a.data_ptr(), b.data_ptr(), c.data_ptr())
with torch.no_grad():
    a[0, 0] = 3
print(b)
print(c)
d = torch.sum(3 * b)
d.backward()
print(a.grad)
```
This prints:

```
2152330348608 2152330348608 2152301237760
tensor([[3., 3., 3., 3.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]], grad_fn=<AsStridedBackward>)
tensor([[1., 1., 1., 1.],
        [1., 1., 1., 1.],
        [1., 1., 1., 1.]], grad_fn=<RepeatBackward>)
```
and then fails with:

```
Traceback (most recent call last):
  File "d:\Desktop\deepul-master\flow.py", line 232, in <module>
    d.backward()
  File "D:\App\Anaconda3\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "D:\App\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Index out of range
```
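For reference, when I drop the in-place write under `torch.no_grad()`, the same graph backpropagates without error, which suggests the failure is tied to mutating `a` after creating the expanded view `b`. A minimal sketch of the working case:

```python
import torch
import torch.nn as nn

a = nn.Parameter(torch.ones(3, 1))
b = a.expand(3, 4)   # view: shares storage with a
c = a.repeat(1, 4)   # copy: has its own storage

assert a.data_ptr() == b.data_ptr()  # expand() does not allocate
assert a.data_ptr() != c.data_ptr()  # repeat() does

# No in-place modification of a here, so backward succeeds.
d = torch.sum(3 * b)
d.backward()
# Each element of a appears in 4 columns of b, so its grad is 3 * 4 = 12.
print(a.grad)  # tensor([[12.], [12.], [12.]])
```

So the question remains: why does the in-place write under `no_grad()` turn `b`'s `grad_fn` into `AsStridedBackward` and then break `backward()` with this opaque `Index out of range` error?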