I use two ways to calculate the determinant of a matrix: (1) the product of its eigenvalues, and (2) torch.det().
Both ways have the same problem: the determinant of x is not equal to the reciprocal of the determinant of x_inv.
Your matrix is ill-conditioned, and is therefore amplifying the
round-off error in your single-precision (32-bit) calculations.
If such ill-conditioned matrices are part of your actual use case,
you will need to pay the price of performing double-precision arithmetic.
A torch.tensor defaults to float32, which has about 7 decimal
digits of precision. However, the two rows of your matrix (when
understood as vectors) are almost exactly anti-parallel, so your
matrix is nearly degenerate. (The angle between these two vectors
is 179.9999839 degrees!)
Another way of seeing this is to calculate the condition number
of your matrix. (This is the ratio of the matrix's largest to
smallest singular values; for a symmetric matrix such as yours,
the absolute values of its eigenvalues.) Your condition number
is about 1.5 x 10^7.
Very roughly speaking, the condition number tells you how much
your round-off error will be amplified when doing things like
inverting a matrix or solving a system of linear equations.
Your single-precision round-off error is about 10^-7, so – very
roughly – it gets blown up to order 1.
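You can check the condition number directly in pytorch. Here is a minimal sketch, using the matrix values from the script later in this answer (substitute your own tensor):

```python
import torch

# the matrix from the script below, in double precision
mmd = torch.tensor ([[757.7089, -196.4800], [-196.4800, 50.9489]], dtype = torch.float64)
cond = torch.linalg.cond (mmd)   # ratio of largest to smallest singular value
print (cond)                     # on the order of 1.5e7
```

A condition number this large tells you up front that float32, with its roughly 1e-7 round-off, will not be adequate.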
If you use 64-bit double-precision numbers you will have about
16 decimal digits of precision. Even amplifying this round-off
error by your condition number of 10^7, you will still have
enough precision to get satisfactory results.
Redo your tests with a tensor of dtype = torch.float64,
and see if that works for your use case.
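As a quick sanity check of the det (x) vs. 1 / det (x_inv) comparison in both precisions, here is a sketch using the same matrix (again, substitute your own tensor):

```python
import torch

x = torch.tensor ([[757.7089, -196.4800], [-196.4800, 50.9489]])   # defaults to float32
for dtype in (torch.float32, torch.float64):
    xd = x.to (dtype)
    det = torch.det (xd)                      # determinant in this precision
    det_inv = torch.det (torch.inverse (xd))  # determinant of the inverse
    print (dtype, det.item(), (1.0 / det_inv).item())
```

In float32 the two printed values disagree noticeably, while in float64 they agree to many digits.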
This is illustrated by the following script (which uses numpy):
import numpy as np

mmd = np.array ([[757.7089, -196.4800], [-196.4800, 50.9489]])   # your matrix
print (mmd.dtype)                             # 64-bit double precision
print (np.linalg.cond (mmd))                  # matrix is ill-conditioned
msin = np.linalg.det (mmd)                    # det is proportional to sin of the angle between the rows
mcos = np.dot (mmd[0], mmd[1])                # row dot product is proportional to cos of that angle
print (np.degrees (np.arctan2 (msin, mcos))) # angle between rows of matrix
mmdi = np.linalg.inv (mmd)
print (np.matmul (mmd, mmdi))                 # inverse is quite accurate
print (1.0 / np.linalg.det (mmd))
print (np.linalg.det (mmdi))                  # determinants match well
mms = np.float32 (mmd)                        # convert to single precision
print (mms.dtype)                             # 32-bit single precision
mmsi = np.linalg.inv (mms)
print (np.matmul (mms, mmsi))                 # in single precision the inverse is quite inaccurate
print (1.0 / np.linalg.det (mms))
print (np.linalg.det (mmsi))                  # calculated determinants differ significantly