# Calculating loss with autograd: One of the differentiated Tensors appears to not have been used in the graph

I have a convolutional neural network that predicts 3 quantities: Ux, Uy, and P. They are all 2D arrays of size [100,60], and my batch size is 10.

I want to compute the loss and update the network by comparing the curl of the predicted velocity with the curl of the target velocity. I have a function, discrete_curl, that computes the curl from the predicted components (Ux_pred, Uy_pred). I then want to compute the loss against the ground-truth targets I already have: true_curl = curl(Ux_true, Uy_true).
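For reference, the discrete curl I'm approximating (assuming unit grid spacing, which is what the code below uses) is the central difference

$$(\nabla \times \mathbf{U})_{m,n} \approx \frac{U_y[m+1,n] - U_y[m-1,n]}{2} - \frac{U_x[m,n+1] - U_x[m,n-1]}{2}$$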

To do this, I need to express the curl loss in terms of Ux and Uy, and I have not gotten past this error in PyTorch. This is my code so far:

```
# Curl function, defined separately
def discrete_curl(self, x, y, curl):
    # Central differences over the interior of the [100, 60] grid
    for m in range(1, 99):
        for n in range(1, 59):
            if x[m, n] != 0 and y[m, n] != 0:
                curl[m, n] = ((y[m+1, n] - y[m-1, n]) / (2 * 1)) \
                           - ((x[m, n+1] - x[m, n-1]) / (2 * 1))
    return curl
```

Code:

```
import numpy as np
import torch
from torch.autograd import Variable

inputs = torch.from_numpy(x)
targets = torch.from_numpy(y[:, 0:3, :, :])  # Ux, Uy, P targets

pred = self.forward(inputs)
pred_ux = pred[:, 0, :, :]  # Ux of batch: size [10, 100, 60]
pred_uy = pred[:, 1, :, :]  # Uy of batch: size [10, 100, 60]

predicted_curl = np.zeros((len(pred), 100, 60), dtype=float)
predicted_curl = torch.from_numpy(predicted_curl)
for i in range(len(pred)):
    pred_ux[i] = Variable(pred_ux[i], requires_grad=True)  # Ux_pred
    pred_uy[i] = Variable(pred_uy[i], requires_grad=True)  # Uy_pred

    # Curl from the predicted velocity values, filled in by the
    # curl function above
    predicted_curl[i] = Variable(predicted_curl[i], requires_grad=True)
    predicted_curl[i] = self.discrete_curl(pred_ux[i], pred_uy[i], predicted_curl[i])
```

However, it fails when I try to compute grad_tensor, before I can even compute the loss. It fails with this error:

```
grad_tensor = torch.autograd.grad(outputs=predicted_curl[i], inputs=(pred_ux[i], pred_uy[i]),
                                  grad_outputs=torch.ones_like(predicted_curl[i]), retain_graph=True)
...
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```

How do I get past this?

I'm really stuck on this; any help is appreciated!

Have you tried adding `allow_unused=True` to `torch.autograd.grad`? With that flag enabled, the gradients of the unused variables come back as None. You can then check which variables' gradients are None and either set `requires_grad=False` on those variables, or set every None gradient to 0, since you do not need to learn those quantities.
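Roughly like this (a minimal sketch reusing your variable names; zero-filling at the end is just one way to handle the Nones):

```
grad_tensor = torch.autograd.grad(
    outputs=predicted_curl[i],
    inputs=(pred_ux[i], pred_uy[i]),
    grad_outputs=torch.ones_like(predicted_curl[i]),
    retain_graph=True,
    allow_unused=True,  # unused inputs now yield None instead of raising
)

# Optionally replace None gradients with zeros so the rest of the
# code can treat every input uniformly:
grad_tensor = tuple(
    g if g is not None else torch.zeros_like(inp)
    for g, inp in zip(grad_tensor, (pred_ux[i], pred_uy[i]))
)
```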

I haven't tried that. I'm not sure what it means here, because the variables should all be used.

The error means that not all the tensors with `requires_grad=True` were actually used in the gradient computation. An example of this would be:

```
a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(3, 4, requires_grad=True)
loss = a.sum()  # b never participates in computing loss

# RuntimeError: One of the differentiated Tensors appears to not
# have been used in the graph.
torch.autograd.grad(loss, (a, b))
```

So try setting `allow_unused=True` and it should work.
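With the flag set, the same call goes through and simply returns None for the unused tensor, as a quick continuation of the example shows:

```
grads = torch.autograd.grad(loss, (a, b), allow_unused=True)
print(grads[0])  # all ones: the gradient of a.sum() with respect to a
print(grads[1])  # None, because b was never used to compute loss
```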