Hello all,

I can understand the use of:

loss.backward() with loss = nn.BCELoss()(pred, target): it populates the .grad attribute of every leaf tensor in the graph, so that the subsequent optimizer.step() call performs the descent step.
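For reference, here is a minimal sketch of that scalar case as I understand it (the model, optimizer, and data are placeholders of my own):

```python
import torch
import torch.nn as nn

# hypothetical tiny model and data, just to show the mechanics
model = nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 3)
target = torch.rand(4, 1)

pred = torch.sigmoid(model(x))       # probabilities in (0, 1) for BCELoss
loss = nn.BCELoss()(pred, target)    # loss is a scalar

opt.zero_grad()
loss.backward()   # fills .grad on every leaf parameter of the graph
opt.step()        # descends using those gradients
```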

But I don’t understand the use of tensor.backward() in the following code:

```
def generate_gradients(self, target_class, layer_name):
    # `class` is a reserved word in Python, so the argument is renamed
    activation = self.intermediate_activations[layer_name]
    # the hook stores d(logit)/d(activation) in self.gradients during backward()
    activation.register_hook(self.save_gradient)
    logit = self.output[:, target_class]
    # logit is not a scalar, so backward() needs an explicit gradient argument
    logit.backward(torch.ones_like(logit), retain_graph=True)
    # alternative without a hook:
    # gradients = grad(logit, activation, retain_graph=True)[0]
    # gradients = gradients.cpu().detach().numpy()
    gradients = self.gradients.cpu().detach().numpy()
    return gradients
```
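From what I gather, calling backward() on a non-scalar tensor raises an error unless you pass an explicit gradient argument, and passing torch.ones_like(...) amounts to backpropagating the sum of the elements. A minimal sketch with made-up tensors:

```python
import torch

x = torch.randn(5, requires_grad=True)
y = x * 2  # non-scalar output

# y.backward() alone would raise "grad can be implicitly created only
# for scalar outputs"; ones_like weights every element by 1
y.backward(torch.ones_like(y))
# x.grad is now dy/dx = 2 for every element

# equivalent formulation: reduce to a scalar first
x2 = torch.randn(5, requires_grad=True)
(x2 * 2).sum().backward()
```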

I’m trying to reimplement the TCAV paper (TCAV, i.e. Testing with Concept Activation Vectors) in PyTorch, and the objective is to compute the gradients of a class logit with respect to the activations at a given layer.
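A stripped-down, self-contained sketch of the hook pattern I described, with a hypothetical 3-class net and class index 1 (all names here are mine, not from the paper):

```python
import torch
import torch.nn as nn

saved = {}

def save_gradient(grad):
    # called during backward() with d(logit)/d(activation)
    saved["grad"] = grad

# hypothetical network: 4 features -> 8 hidden -> 3 classes
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(2, 4)

h = net[1](net[0](x))           # intermediate activation, shape (2, 8)
h.register_hook(save_gradient)  # capture its gradient on backward
logit = net[2](h)[:, 1]         # logit of class 1, shape (2,)

logit.backward(torch.ones_like(logit))
print(saved["grad"].shape)      # torch.Size([2, 8]), same shape as h
```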

Thank you very much