Calculated values go to INF for an unknown reason


I want to implement an invertible neural net whose loss function involves the network's output and the Jacobian determinant of the output with respect to the input. The original version of the invertible neural net uses analytical expressions for the Jacobian determinant; I now want to compute this determinant with torch's autograd instead, to make the approach more general.

I implemented two versions of the code. The version with the analytically calculated Jacobian works well, but the autograd version somehow fails (it runs, but some of the values are INF). I tried to debug it by comparing the Jacobian determinants, and the weird thing is: 1. initially, the batched Jacobian determinants agree; 2. after training for a few iterations, some of the batched Jacobian determinants still agree with the analytical values while others go to INF, which confuses me.

Some results look like the following:

1. autodiff:
tensor([[inf],
        [inf],
        [inf],
        [inf],
        [inf],
        [inf]],

2. analytical:
tensor([134.4002,  77.4460, 111.4590, 135.3076, 104.6687,  73.4892,  81.2950,
         99.3901,  73.0275,  97.8689], device='cuda:0', grad_fn=)

Does anyone know a possible reason? If the code is needed, I can also show it. Thank you!
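To make the setup concrete, here is a minimal sketch of the autograd-based computation I'm describing. `net` here is just a stand-in MLP, not my actual invertible network, and I'm using `torch.func.jacrev` + `vmap` to get per-sample Jacobians:

```python
# Toy sketch of the autograd Jacobian-determinant computation.
# NOTE: `net` is a placeholder MLP, not my real invertible model.
import torch
from torch.func import jacrev, vmap

torch.manual_seed(0)
d = 3
net = torch.nn.Sequential(
    torch.nn.Linear(d, d),
    torch.nn.Tanh(),
    torch.nn.Linear(d, d),
)

x = torch.randn(8, d)  # a batch of inputs

# Per-sample Jacobian of the output w.r.t. the input: shape (8, d, d)
jac = vmap(jacrev(lambda xi: net(xi)))(x)

# What I compute now; torch.det can overflow to inf for large entries.
det = torch.det(jac)

# Stable alternative: slogdet returns log|det| directly and stays
# finite even when det itself would overflow.
sign, logabsdet = torch.linalg.slogdet(jac)
```

Incidentally, since the loss only needs log|det J|, I wonder whether computing `torch.det` first and taking the log afterwards is what overflows, and whether going through `torch.linalg.slogdet` would avoid the INF entirely.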