Initialising an nn.Module from a tensor while keeping gradient flow

I want to initialize the weights of a module, for example nn.Linear, from a given tensor. Then, when gradients are computed for the module's weights during backward, I want them to flow back to the original tensor automatically. Code example:

import torch
import torch.nn as nn

module = nn.Linear(10, 1)
# default dtype (float32) so it matches module.weight; dtype=float would
# create a float64 tensor and cause a dtype mismatch in the forward pass
W = torch.rand((1, 10), requires_grad=True)
# here something like module.weight = W
out = module(torch.rand(10))
out.backward()
assert W.grad is not None  # fails as written: W never enters the graph
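
A plain module.weight = W raises a TypeError, because nn.Module.__setattr__ only accepts nn.Parameter objects for names registered as parameters, and wrapping it as module.weight = nn.Parameter(W) avoids the error but makes gradients accumulate on the new Parameter leaf instead of on W. The closest thing I have found is a sketch like the following, which deregisters the parameter and relies on forward() reading self.weight as an ordinary attribute; I am not sure this is safe or idiomatic, which is essentially what I am asking:

import torch
import torch.nn as nn

module = nn.Linear(10, 1)
W = torch.rand((1, 10), requires_grad=True)

# Deregister the Parameter, then set the attribute to the plain tensor.
# nn.Linear.forward just reads self.weight, so W enters the graph directly.
del module.weight
module.weight = W

out = module(torch.rand(10))
out.backward()
assert W.grad is not None  # passes: the gradient flows back to W

One caveat I noticed: after this, module.parameters() no longer yields the weight, so an optimizer built from it would miss W. An alternative that I believe exists since PyTorch 2.0 is the stateless API, which substitutes W for the weight for a single call without mutating the module:

from torch.func import functional_call

out = functional_call(module, {"weight": W, "bias": module.bias},
                      (torch.rand(10),))

Is one of these the intended way to do this, or is there a cleaner mechanism?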
