nn.Parameter changes input tensor

I want to use PyTorch to optimize a simple loss function. The only parameter is w, which I want to initialize from w0. However, I find that after loss.backward() and optimizer.step(), w0 also changes. How can I make sure the initial tensor w0 is not modified? Does nn.Parameter copy the data, or does it just keep a reference to it?

import torch
import torch.nn as nn

class mymodel(nn.Module):
    def __init__(self, w0):
        super().__init__() 
        self.w = nn.Parameter(w0)
    def forward(self, x):
        diff = self.w * x - 10
        self.loss = torch.mean(diff * diff)
        return self.loss

w0 = torch.randn(2, 3)
model = mymodel(w0)
print(w0)
print(model.w)

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
inputs = torch.randn(2, 3)
loss = model(inputs)
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(w0)
print(model.w)

Results:

tensor([[-0.6700,  1.4510, -0.0059],
        [-0.0139, -1.0681,  0.4844]])
Parameter containing:
tensor([[-0.6700,  1.4510, -0.0059],
        [-0.0139, -1.0681,  0.4844]], requires_grad=True)

tensor([[-0.5700,  1.5510,  0.0941],
        [ 0.0861, -0.9681,  0.3844]])
Parameter containing:
tensor([[-0.5700,  1.5510,  0.0941],
        [ 0.0861, -0.9681,  0.3844]], requires_grad=True)
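
It looks like nn.Parameter wraps the tensor it is given rather than copying it, and optimizer.step() then updates the parameter in place, so the change shows up in w0 as well. A minimal check of the storage sharing (not part of the original notebook, just to illustrate):

import torch
import torch.nn as nn

w0 = torch.randn(2, 3)
w = nn.Parameter(w0)

# nn.Parameter keeps a reference to w0's data; both tensors
# report the same underlying storage.
print(w.data_ptr() == w0.data_ptr())  # True

# An in-place update of the parameter (which is what optimizer.step() does)
# is therefore visible through w0 too.
with torch.no_grad():
    w.add_(1.0)
print(w0)  # already shifted by 1.0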

Passing nn.Parameter(w0.clone().detach()) instead seems to fix the problem, since the parameter then owns its own copy of the data.
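
As a sketch of that fix (the same model and training step as above, only the constructor changed):

import torch
import torch.nn as nn

class mymodel(nn.Module):
    def __init__(self, w0):
        super().__init__()
        # clone() copies the data into fresh storage; detach() makes sure
        # the copy carries no autograd history from w0
        self.w = nn.Parameter(w0.clone().detach())
    def forward(self, x):
        diff = self.w * x - 10
        self.loss = torch.mean(diff * diff)
        return self.loss

w0 = torch.randn(2, 3)
model = mymodel(w0)
before = w0.clone()

optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss = model(torch.randn(2, 3))
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(torch.equal(w0, before))  # True: w0 is untouched
print(model.w)                  # updated independently of w0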

Notebook: nn-parameter-changes-input-tensor.ipynb (GitHub)

Reference: see the warning block for torch.Tensor.new_tensor in the PyTorch 1.8.0 documentation.