Does tensor.numpy() allocate new memory?

I have a shared tensor. Does converting it to NumPy allocate new memory, or does the underlying buffer stay the same?

my_tensor = my_tensor.numpy()

The NumPy array will share the same memory as the tensor (note that `Tensor.numpy()` only works on CPU tensors that don't require grad), as seen here:

import torch

x = torch.tensor([1.])
# tensor([1.])

y = x.numpy()
# [1.]

# writing through the NumPy array mutates the tensor as well
y[0] = 2.0

print(x, y)
# tensor([2.]) [2.]

However, I'm unsure what "I have a shared tensor" means in this context.
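If you want to verify the sharing directly rather than by mutation, a quick sketch is to compare the underlying buffer addresses with `np.shares_memory` and `Tensor.data_ptr()` (the CUDA branch is just an illustration of the opposite case, since moving to the CPU allocates a new buffer):

import numpy as np
import torch

x = torch.tensor([1.])
y = x.numpy()

# both views point at the same underlying buffer
print(np.shares_memory(x.numpy(), y))  # True
print(x.data_ptr() == y.ctypes.data)   # True

# a GPU tensor must be copied to the CPU first, which allocates new memory
if torch.cuda.is_available():
    z = x.cuda().cpu().numpy()
    print(np.shares_memory(y, z))      # False

The mutation test in the answer above and this pointer comparison should always agree for CPU tensors.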