About error "more than one element of the written-to tensor refers to a single memory location."

I seem to run into this error every now and then, most of the time unexpectedly. I have tried to explain it, or to find an explanation, in terms of tensor storage models, but I still do not have a reasonable explanation.

RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.

For simplicity, I use these two similar statements in Python (C++ gives the same result). One succeeds, the other raises the RuntimeError.

Success example

import torch

a = torch.ones(3, 3)
(a.mean(1, True) + a).expand_as(a).mul_(a)

Runtime Error

import torch

a = torch.ones(3, 3)
(a.mean(1, True)).expand_as(a).mul_(a)

RuntimeError: unsupported operation: ... ...

Hi,

The error happens because you’re trying to do an in-place operation on a Tensor that has overlapping memory (created by the expand).

In the first case this does not happen, because the + a actually creates a new Tensor of the same size as a, so the expand_as() does nothing in this case.
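
To make that concrete, a small sketch comparing the two expressions (assuming the 3x3 example from the question):

import torch

a = torch.ones(3, 3)

s = a.mean(1, True) + a            # broadcasting already gives a new (3, 3) tensor
print(s.shape)                     # torch.Size([3, 3])
s.expand_as(a).mul_(a)             # expand_as is a no-op here, so the in-place mul_ is fine

e = a.mean(1, True).expand_as(a)   # (3, 3) view backed by only 3 stored values
# e.mul_(a)                        # would raise the RuntimeError above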


Got it… so the + a created a tensor with the same shape as the target (3x3), while the other one relies on expand_as().clone() to create a tensor of the same shape. Thanks for the prompt help.
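
For reference, the clone() fix suggested by the error message, sketched on the same 3x3 example:

import torch

a = torch.ones(3, 3)

# clone() materializes the expanded view into its own contiguous storage,
# so every element of the written-to tensor has a distinct memory location.
result = a.mean(1, True).expand_as(a).clone().mul_(a)
print(result)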


Yes.
expand() is very efficient because instead of allocating new memory, it only plays with the stride of the Tensor to make it the right size. But as you saw here, it has limitations.
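
For illustration, a small sketch inspecting the strides and the underlying storage of the expanded view (again on the 3x3 example):

import torch

a = torch.ones(3, 3)
m = a.mean(1, True)                   # shape (3, 1), 3 stored values
e = m.expand_as(a)                    # shape (3, 3), still only 3 stored values

print(m.stride())                     # (1, 1)
print(e.stride())                     # (1, 0)  <- stride 0 in the expanded dim
print(e.data_ptr() == m.data_ptr())   # True: no new memory was allocated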
