I am using PyTorch 0.4.
import torch

X = torch.randn(100, 3)
Y = torch.randn(100)
w1 = torch.tensor(0.1, requires_grad=True)
w2 = torch.tensor(0.1, requires_grad=True)
w3 = torch.tensor(0.1, requires_grad=True)
# W is created as a leaf tensor, then modified in-place below:
W = torch.tensor([0.1, 0.1, 0.1], requires_grad=True)
W[0] = w1 * w2; W[1] = w2 * w3; W[2] = w3 * w1
# W = torch.cat([w1.view(1), w2.view(1), w3.view(1)])
Yp = torch.sum(X * W, dim=1)
loss = torch.nn.MSELoss()(Yp, Y)
loss.backward()
Running the code, I get:
RuntimeError: leaf variable has been moved into the graph interior
If I instead uncomment the torch.cat line (and drop the in-place assignments into W), it runs fine. Presumably the error arises because W is created as a leaf with requires_grad=True and the in-place assignments would move it into the interior of the graph.
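(Note that the commented torch.cat line builds W = [w1, w2, w3] rather than the products. A minimal sketch of a cat-based variant that builds the same product entries without writing into a leaf:)

W = torch.cat([(w1 * w2).view(1), (w2 * w3).view(1), (w3 * w1).view(1)])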
Use case:
w1, w2, w3, … are many tensors produced as outputs of some modules.
We then want to assemble them into one big tensor, via (1) or (2) below:
(1) use torch.cat;
(2) create a tensor W and assign w1, w2, w3, … into subsections of W. This makes it easier to control where each of them should be placed inside W (see the sketch after this list).
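For completeness, a minimal sketch of approach (2) that avoids the error, assuming the scalar w's from the snippet above: allocate W as a plain tensor without requires_grad=True, so it is not a leaf that autograd must protect, then write into it; the in-place assignments are recorded and gradients still reach w1, w2, w3.

import torch

X = torch.randn(100, 3)
Y = torch.randn(100)
w1 = torch.tensor(0.1, requires_grad=True)
w2 = torch.tensor(0.1, requires_grad=True)
w3 = torch.tensor(0.1, requires_grad=True)

W = torch.zeros(3)   # plain buffer: not a leaf with requires_grad=True
W[0] = w1 * w2       # autograd records each in-place assignment
W[1] = w2 * w3
W[2] = w3 * w1

Yp = torch.sum(X * W, dim=1)
loss = torch.nn.MSELoss()(Yp, Y)
loss.backward()
print(w1.grad, w2.grad, w3.grad)  # gradients flow back to the leaves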