If I understand correctly, there is no built-in way to get the gradient for every single sample in a minibatch in PyTorch. But is there a way to compute them more efficiently than running a loop and computing each gradient separately?

```
for im, lab in zip(images, labels):
    optimizer.zero_grad()
    output = model(im)
    loss = criterion(output, lab)
    loss.backward()
    # .detach().clone() copies the tensor data without the autograd graph,
    # which is cheaper than copy.deepcopy
    grad_new.append([p.grad.detach().clone() for p in model.parameters()])
    param_new.append([p.detach().clone() for p in model.parameters()])
```
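For reference, here is a minimal self-contained version of the loop above (the model, criterion, and data shapes are made up purely for illustration):

```python
import torch
import torch.nn as nn

# Toy model and minibatch, just to make the loop runnable
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

images = torch.randn(8, 4)            # minibatch of 8 samples
labels = torch.randint(0, 2, (8,))

per_sample_grads = []
for im, lab in zip(images, labels):
    optimizer.zero_grad()
    output = model(im.unsqueeze(0))               # add a batch dim of 1
    loss = criterion(output, lab.unsqueeze(0))
    loss.backward()
    # one list of gradients (one tensor per parameter) per sample
    per_sample_grads.append(
        [p.grad.detach().clone() for p in model.parameters()]
    )
```

This produces `len(images)` gradient lists, each entry shaped like the corresponding parameter, but it runs one forward/backward pass per sample, which is exactly the inefficiency being asked about.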