I'm just starting out with PyTorch, so please don't be mean to me. x)
I am currently trying to improve my code:
```python
for x_ in range(batch_size):
    for y1 in range(num_classes):
        for y2 in range(num_classes):
            l = [0] * num_classes
            l[y1] += 1
            l[y2] -= 1
            outputs[x_].backward(torch.FloatTensor([l]), retain_graph=True)
            regularizer += torch.norm(batch_x.grad.data[x_], 2)
            batch_x.grad.data.zero_()
```
This code takes the output of my network and, for each row, computes the Jacobian of the network with respect to the input, then the 2-norm of the difference between every pair of Jacobian rows, and it does this for each output of the network (and thus each input).
As you can imagine, this code is not very efficient, so I would like to know if there are any tricks to avoid the three for loops.
Is it possible with PyTorch to perform the same operation on all rows of a matrix at once?
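For context, here is one way I imagine it could be vectorized (a sketch only, assuming the network maps each sample independently; the toy `net`, `batch_x`, and the sizes are made up for illustration): get the full Jacobian in one call with `torch.autograd.functional.jacobian`, then form all pairwise row differences by broadcasting instead of looping over `y1` and `y2`.

```python
import torch

# Hypothetical small network and data, just to illustrate the shapes.
batch_size, in_dim, num_classes = 4, 3, 5
net = torch.nn.Linear(in_dim, num_classes)
batch_x = torch.randn(batch_size, in_dim)

# Jacobian of the batched output w.r.t. the batched input in one call:
# shape (batch, classes, batch, in_dim).
J = torch.autograd.functional.jacobian(net, batch_x)

# Each sample only depends on its own input row, so keep the diagonal
# over the two batch dimensions -> (batch, classes, in_dim).
idx = torch.arange(batch_size)
J = J[idx, :, idx, :]

# All pairwise differences J[:, y1, :] - J[:, y2, :] via broadcasting,
# shape (batch, classes, classes, in_dim), then sum the 2-norms.
diffs = J.unsqueeze(2) - J.unsqueeze(1)
regularizer = diffs.norm(p=2, dim=-1).sum()
```

This replaces all three loops with tensor operations; I am not sure it is the idiomatic way, but it computes the same quantity as the loop version.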
Sorry if I’m not clear enough.