Generated Hessian is not symmetric


I’m using the following code to generate the Hessian of my loss function with respect to the variable x. The problem is that the generated Hessian is not symmetric, which it must be. Could anybody tell me whether there might be some numerical instability in torch.autograd?
P.S.: my model is an official PyTorch ResNet model.

        out = model(x)
        loss = -criterion_nr(out.squeeze(), torch.sigmoid(out.squeeze()))
        loss = torch.mean(loss, dim=[1, 2])
        first_drv = torch.zeros(batch_size, x_dim).to(device)
        hessian = torch.zeros(batch_size, x_dim, x_dim).to(device)
        for n in range(batch_size):
            # first derivative of the n-th sample's loss w.r.t. x
            first_drv[n] = torch.autograd.grad(loss[n], x,
                                               create_graph=True, retain_graph=True)[0][n]
            for i in range(x_dim):
                # i-th row of the n-th Hessian: gradient of the i-th
                # first-derivative component w.r.t. x
                hessian[n][i] = torch.autograd.grad(first_drv[n][i], x,
                                                    create_graph=True, retain_graph=True)[0][n]

        a = torch.tensor([[ 857.2029,  196.4826],
                          [ 196.4857, 1563.9629]])

196.4826 and 196.4857 differ in the last digit, and that throws off the rest of the computation.

Hello Nima!

Your two numbers that are supposed to be equal differ in the
sixth significant digit. This is consistent with the roughly
seven significant digits of single-precision (32-bit) floating-point
numbers: round-off error at that level accumulates when a
multi-step computation is done.
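To illustrate the scale of this effect, here is a small sketch (not from the original post) comparing the same reduction in single and double precision; the float32 result typically agrees with the float64 "ground truth" only to about seven significant digits:

```python
import torch

torch.manual_seed(0)

# The same sum computed in float32 vs float64: round-off in single
# precision shows up at roughly the 7th significant digit.
a64 = torch.rand(100_000, dtype=torch.float64)
a32 = a64.float()

s64 = a64.sum().item()  # double-precision reference
s32 = a32.sum().item()  # single-precision result

rel_err = abs(s32 - s64) / abs(s64)
print(f"float32 relative error: {rel_err:.2e}")
```

A multi-step computation like a double backward through a ResNet performs many such reductions, so discrepancies of this size in the off-diagonal Hessian entries are expected.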

If you need your Hessian matrix to be exactly symmetric (e.g.,
to satisfy the pre-conditions of some subsequent processing),
you can forcibly symmetrize it (replace each pair of matching
off-diagonal elements with their average). If you need (for some
reason) your unsymmetrized Hessian to be symmetric to better
than six decimal digits, you should perform your calculations
in double precision (dtype = torch.float64).
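As a concrete sketch of the symmetrization suggestion (using the numbers from your post; `hessian` here stands in for your batched Hessian tensor), averaging each matrix with its transpose replaces every pair of matching off-diagonal elements with their average and yields an exactly symmetric result:

```python
import torch

# Hypothetical (batch, x_dim, x_dim) Hessian with slightly
# asymmetric off-diagonal entries, as in the original post.
hessian = torch.tensor([[[857.2029,  196.4826],
                         [196.4857, 1563.9629]]])

# Symmetrize: average each matrix with its transpose.
hessian_sym = 0.5 * (hessian + hessian.transpose(-1, -2))

# Floating-point addition is commutative, so this is exactly symmetric.
print(torch.equal(hessian_sym, hessian_sym.transpose(-1, -2)))  # True
```

For the double-precision route, it is enough to cast the inputs and model with `.double()` (i.e., `torch.float64`) before building the graph.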

Good luck.

K. Frank
