Same process on two tensors of the same size, but different results when running backward()

    aaa = torch.rfft(aa[0], 2, onesided=False)          # 2-D FFT; last dim of size 2 holds real/imag
    an, aan = aaa[:, :, :, 0], aaa[:, :, :, 1]          # real part, imaginary part
    an  = torch.reshape(an,  (1024, 16)).unsqueeze(1)
    aan = torch.reshape(aan, (1024, 16)).unsqueeze(1)

The code is simply like this. However, when I run `an.sum().backward()` everything works fine, but when I run `aan.sum().backward()` I get the error below:

    Traceback (most recent call last):
      File "/home/zhang/open-reid/examples/test.py", line 12, in <module>
        o1, o2 = cr(inputs,targets)
      File "/home/zhang/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 460, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/zhang/open-reid/reid/loss/triplet.py", line 99, in forward
        dist_an, dist_ap = CirMat(inputs)
      File "/home/zhang/open-reid/reid/loss/triplet.py", line 33, in CirMat
        aan.sum().backward()
      File "/home/zhang/anaconda2/lib/python2.7/site-packages/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/zhang/anaconda2/lib/python2.7/site-packages/torch/autograd/__init__.py", line 89, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: Tensor: invalid storage offset at /home/zhang/pytorch/aten/src/THC/generic/THCTensor.c:759

How can the same process, applied to two tensors of the same size that come from the same tensor, give such different results? What should I do to make this error go away?
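
For reference, here is a minimal, self-contained sketch of what I am running, in case it helps reproduce the problem outside my training code. The input shape and device are assumptions (a stand-in for `aa[0]`, which in my real code comes from the network and lives on the GPU); it uses the same `torch.rfft(input, signal_ndim, onesided=...)` call as above:

    import torch

    # Assumed stand-in for aa[0]: a 3-D tensor with 1024 * 16 elements that
    # requires grad. In my real code it comes from the network, on the GPU.
    x = torch.randn(64, 16, 16, device='cuda', requires_grad=True)

    aaa = torch.rfft(x, 2, onesided=False)            # shape (64, 16, 16, 2)
    an, aan = aaa[:, :, :, 0], aaa[:, :, :, 1]        # real part, imaginary part
    an  = torch.reshape(an,  (1024, 16)).unsqueeze(1)
    aan = torch.reshape(aan, (1024, 16)).unsqueeze(1)

    an.sum().backward()     # runs without error
    # aan.sum().backward()  # raises RuntimeError: invalid storage offset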