I have a shared tensor. Does converting it to numpy allocate new memory or does the buffer remain the same?

`my_tensor = my_tensor.numpy()`

For a CPU tensor, the numpy array will share the same memory as the tensor, as seen here:

```
import torch

x = torch.tensor([1.])
print(x)
# tensor([1.])
y = x.numpy()  # no copy: y is a view of the same buffer
print(y)
# [1.]
y[0] = 2.0     # mutating the array also changes the tensor
print(x, y)
# tensor([2.]) [2.]
```

but I’m unsure what “I have a shared tensor” means in this context.
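If you want to verify the sharing directly rather than by mutation, a minimal sketch (assuming a CPU tensor; a CUDA tensor would first need a `.cpu()` call, which does allocate new memory) could compare the underlying buffer addresses:

```python
import numpy as np
import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = x.numpy()

# np.shares_memory checks whether two arrays overlap in memory
shares = np.shares_memory(y, x.numpy())

# data_ptr() gives the tensor's buffer address; ctypes.data gives the array's
same_addr = x.data_ptr() == y.ctypes.data

print(shares, same_addr)
# True True
```

Both checks confirm that `.numpy()` returns a view of the tensor's storage rather than a copy.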