Weights not getting updated when creating a tensor in the middle of the computation

I want W to get updated. As you can see, the “x” variable acts as an index variable that selects the relevant weights from W, which are then combined to produce the predictions. But the way I have done it does not seem to update “W”. Also note that if I set requires_grad=False where it is currently True in the following code, I get a “tensor 0 does not require grad” kind of error. I would be grateful for an explanation of why this does not work rather than just the workarounds.

import torch

x = [torch.tensor([0, 1]), torch.tensor([[2, 3], [0, 1]]), torch.tensor([4, 5])]
W = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
# W[x[0]]
optimizer = torch.optim.Adam([W], lr=0.001)  # , weight_decay=5e-4
y = torch.tensor([0., 1., 1.])
for epoch in range(3):
#     W_rel = [W[i] for i in x]
    # index into W, sum the selected weights, apply sigmoid, collect into a tensor
    pred = torch.tensor([torch.sigmoid(torch.sum(W[i])) for i in x], requires_grad=True)
    loss = torch.nn.BCELoss()(pred, y)
    loss.backward()
    optimizer.step()
    print(W)

You would have to initialize W as an nn.Parameter or set its requires_grad attribute to True when creating it.
Also, don’t wrap the outputs in a new tensor, as this detaches them from the computation graph.
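To see the difference, you can compare the grad_fn of both variants; a minimal sketch with made-up values:

import torch

W = torch.tensor([0.1, 0.2, 0.3], requires_grad=True)

# rebuilding the result via torch.tensor() creates a new leaf tensor with no history
detached = torch.tensor([torch.sum(W[:2])], requires_grad=True)
print(detached.grad_fn)   # None -> backward() cannot reach W from here

# torch.stack() keeps the autograd graph, so the result still points back to W
stacked = torch.stack([torch.sum(W[:2])])
print(stacked.grad_fn)    # <StackBackward0 ...> -> gradients can flow to W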
This code snippet should work:

import torch

x = [torch.tensor([0, 1]), torch.tensor([[2, 3], [0, 1]]), torch.tensor([4, 5])]
W = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6], requires_grad=True)
optimizer = torch.optim.Adam([W], lr=0.001)  # , weight_decay=5e-4
y = torch.tensor([0., 1., 1.])
for epoch in range(3):
    optimizer.zero_grad()  # clear gradients accumulated from the previous step
    # torch.stack keeps the computation graph intact, so gradients flow back to W
    pred = torch.stack([torch.sigmoid(torch.sum(W[i])) for i in x])
    loss = torch.nn.BCELoss()(pred, y)
    loss.backward()
    optimizer.step()
    print(W)
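
Alternatively, as mentioned above, you could register W as an nn.Parameter, which sets requires_grad=True for you. A minimal sketch of the same loop with the same toy data (not the only way to do it):

import torch
import torch.nn as nn

x = [torch.tensor([0, 1]), torch.tensor([[2, 3], [0, 1]]), torch.tensor([4, 5])]
W = nn.Parameter(torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5, 0.6]))  # requires_grad=True by default
optimizer = torch.optim.Adam([W], lr=0.001)
y = torch.tensor([0., 1., 1.])
criterion = torch.nn.BCELoss()

for epoch in range(3):
    optimizer.zero_grad()
    pred = torch.stack([torch.sigmoid(torch.sum(W[i])) for i in x])
    loss = criterion(pred, y)
    loss.backward()
    optimizer.step()
    print(W)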