RuntimeError: element 0 of tensors does not require grad

I got this error "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn" when I run this simple code, and I don't know what the problem is:
```python
import torch
from torch.autograd import Variable


def cost(params, Y):
    res = [myNetwork(params)]
    return square_loss(res, Y)


def square_loss(res, Y):
    loss = 0
    for l, p in zip(res, Y):
        loss += (l - p) ** 2
    loss = loss / len(Y)
    loss = torch.from_numpy(loss)
    return torch.mean(loss)


var1 = Variable(torch.tensor([0.2]), requires_grad=True)
opt = torch.optim.Adam([var1], lr=0.1)

print(cost(var1, Y))
for i in range(100):
    opt.zero_grad()
    loss = cost(var1, Y)
    loss.backward()
    opt.step()
    print("Cost:", loss)
```

You are detaching the loss from the computation graph by creating a new tensor:

```python
loss = torch.from_numpy(loss)
```

It looks like you are passing PyTorch tensors to the function, so I don't think you need this operation.
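
For reference, here is a minimal sketch of `square_loss` without that conversion (assuming `res` and `Y` already contain PyTorch tensors). Keeping everything as tensor operations leaves the result attached to the autograd graph, so `backward()` can compute gradients:

```python
import torch

def square_loss(res, Y):
    # Accumulate the squared error with tensor ops only,
    # so the loss stays attached to the autograd graph.
    loss = 0
    for l, p in zip(res, Y):
        loss += (l - p) ** 2
    loss = loss / len(Y)
    return torch.mean(loss)  # no torch.from_numpy: the loss keeps its grad_fn
```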

PS: You can add code snippets by wrapping them in three backticks ``` :wink:
Also, Variables are deprecated, so you can now directly use tensors.
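For example (a minimal sketch), instead of wrapping the tensor in a `Variable`, you can create it with `requires_grad=True` directly:

```python
var1 = torch.tensor([0.2], requires_grad=True)
```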

OK, I changed the code, but I still get the same error:
```python
import torch


def cost(var1, Y):
    res = myNetwork(var1)
    loss = torch.sqrt(torch.mean((res - Y) ** 2))
    return loss


X = torch.FloatTensor([[0, 0, 0, 0]])
Y = torch.FloatTensor([[0, 1, 1]])

var1 = torch.tensor(3.14159, requires_grad=True)
opt = torch.optim.Adam([var1], lr=0.5)

loss = cost(var1, Y)
print(loss)
opt.zero_grad()
loss.backward(retain_graph=True)
```

Could you post an executable code snippet to reproduce this issue?

PS: you can add code snippets by wrapping them in three backticks ``` :wink:

Actually, I am using a quantum library to implement the network.
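
A quick way to narrow this down (a sketch, assuming `myNetwork` is the quantum-library model and `var1` is the trainable tensor) is to check whether the network's output is a PyTorch tensor that is still attached to the autograd graph. If the library returns a NumPy array or a detached tensor, any loss built from it will have no `grad_fn`, and `backward()` raises exactly this error:

```python
out = myNetwork(var1)     # myNetwork is the quantum-library model (assumed)
print(type(out))          # should be torch.Tensor, not numpy.ndarray
print(out.requires_grad)  # should be True
print(out.grad_fn)        # should not be None if the graph is intact
```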