I am trying to do flip-augmented evaluation, so I need to flip my tensor. I find that
torch.flip is very helpful, but it generates a new tensor on every call, which wastes some memory. Is there an operation that flips a tensor the way
torch.flip does, but in place?
You can assign to tensor.data; that should behave like an in-place update.
Thanks, do you mean I could save redundant memory like this?
a = torch.randn(2, 3)
a.data = torch.flip(a, (1,))
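A small sketch to check this (assuming a recent PyTorch; the variable names are just for illustration). Comparing data_ptr() before and after shows that assigning to .data rebinds the tensor to the new storage torch.flip allocated, so no memory is saved; copying the flipped result back with copy_ does reuse the original storage, although flip still allocates a temporary internally:

```python
import torch

# Case 1: assigning to .data rebinds 'a' to flip's new allocation.
a = torch.randn(2, 3)
ptr_a = a.data_ptr()
a.data = torch.flip(a, (1,))
print(a.data_ptr() == ptr_a)  # False: 'a' now points at new storage

# Case 2: copy_ writes the flipped values into b's existing storage.
b = torch.randn(2, 3)
ptr_b = b.data_ptr()
b.copy_(torch.flip(b, (1,)))
print(b.data_ptr() == ptr_b)  # True: same storage reused
```

Note that a truly allocation-free flip is hard to get from the public API, since flip is not a view operation; copy_ at least keeps the destination storage stable.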