# Numerical problem with the determinant of a matrix and the determinant of its inverse

Hi. I’m trying to calculate the determinant of the following matrix and compare it with the determinant of its inverse:

```
x = torch.tensor([[ 757.7089, -196.4800],
                  [-196.4800,   50.9489]])
```

```
x_inv = torch.inverse(x)
> tensor([[ 1092.4257,  4212.8447],
          [ 4212.8447, 16246.4883]])
```

I use two ways to calculate the determinant of both matrices: (1) the product of the eigenvalues and (2) `torch.det()`.
Both ways have the same problem: the determinant of `x` is not equal to the reciprocal of the determinant of `x_inv`.

```
print(torch.det(x))
> tensor(0.0466)
```

```
print(torch.det(x_inv))
> tensor(19.6592)
```

And

```
1 / 0.0466 == 19.6592
> False
```

which is wrong.

I even try to use

```
# note: torch.symeig is deprecated; newer PyTorch uses torch.linalg.eigvalsh
eig_val_x, _ = torch.symeig(x)
dete_x = torch.prod(eig_val_x)

eig_val_x_inv, _ = torch.symeig(x_inv)
dete_x_inv = torch.prod(eig_val_x_inv)
```

but the same problem still exists, and the two ways of calculating the determinant even give different values. Can someone help me with this issue?

Hi Nima!

Your matrix is ill-conditioned, and is therefore amplifying the
round-off error in your single-precision (32-bit) calculations.

If such ill-conditioned matrices are part of your actual use case,
you will need to pay the price of performing double-precision
calculations.

A `torch.tensor` defaults to `float32`, which has about 7 decimal
digits of precision. However, the two rows of your matrix (when
understood as vectors) are almost exactly anti-parallel, so your
matrix is nearly degenerate. (The angle between these two vectors
is 179.9999839 degrees!)
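Here's a quick sketch of that check (the variable names are mine): since the determinant is proportional to the sine of the angle between the rows, and the dot product of the rows is proportional to its cosine, `atan2` recovers the angle. I do it in double precision so the tiny deviation from 180 degrees isn't lost:

```python
import torch

x = torch.tensor([[757.7089, -196.4800],
                  [-196.4800,   50.9489]], dtype=torch.float64)

# det(x) = |r0| |r1| sin(theta),  dot(r0, r1) = |r0| |r1| cos(theta)
sin_part = torch.det(x)
cos_part = torch.dot(x[0], x[1])
angle = torch.rad2deg(torch.atan2(sin_part, cos_part))
print(angle)   # just under 180 degrees -- the rows are nearly anti-parallel
```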

Another way of seeing this is to calculate the condition number
of your matrix. (For a symmetric positive-definite matrix such as
yours, this is the ratio between the largest and smallest
eigenvalues.) Your condition number is about 1.5 x 10^7.
Very roughly speaking, the condition number tells you how much
your round-off error will be amplified when doing things like
inverting a matrix or solving a system of linear equations.
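For example (a minimal sketch): `torch.linalg.cond` computes the 2-norm condition number directly, and for a symmetric positive-definite matrix the eigenvalue ratio gives the same number:

```python
import torch

x = torch.tensor([[757.7089, -196.4800],
                  [-196.4800,   50.9489]], dtype=torch.float64)

cond = torch.linalg.cond(x)      # ratio of largest to smallest singular value
print(cond)                      # roughly 1.5e7

# equivalently, for this symmetric positive-definite matrix:
eigs = torch.linalg.eigvalsh(x)  # eigenvalues in ascending order
print(eigs.max() / eigs.min())
```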

Roughly speaking, your single-precision round-off error of about
10^-7, amplified by a condition number of about 10^7, gets blown
up to order 1.

If you use 64-bit double-precision numbers you will have about
16 decimal digits of precision. Even amplifying this round-off
error by your condition number of 10^7, you will still have
enough precision to get satisfactory results.
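To put numbers on this (a rough back-of-the-envelope sketch): multiplying each precision's machine epsilon by a condition number of about 10^7 shows why `float32` fails here while `float64` survives:

```python
import numpy as np

cond = 1.5e7   # approximate condition number of the matrix

# machine epsilon is the relative round-off error of each float type
err32 = np.finfo(np.float32).eps * cond   # order 1 -- hopeless
err64 = np.finfo(np.float64).eps * cond   # around 3e-9 -- still plenty of precision

print(err32, err64)
```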

Redo your tests with a `tensor` of `dtype = torch.float64`,
and see if that works for your use case.
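A minimal sketch of that redo (variable names are mine): in double precision the product of the two determinants comes out very close to 1, as it should, since det(x) * det(x^-1) = 1 exactly:

```python
import torch

x = torch.tensor([[757.7089, -196.4800],
                  [-196.4800,   50.9489]], dtype=torch.float64)

x_inv = torch.inverse(x)
det_x = torch.det(x)
det_x_inv = torch.det(x_inv)

print(det_x * det_x_inv)   # very close to 1.0 in float64
```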

This is illustrated by the following script (which uses numpy):

```
import numpy as np

mmd = np.array([[757.7089, -196.4800], [-196.4800, 50.9489]])   # your matrix
mmd.dtype                 # 64-bit double precision
np.linalg.cond(mmd)       # matrix is ill-conditioned

msin = np.linalg.det(mmd)        # det is proportional to sin (theta_vecs)
mcos = np.dot(mmd[0], mmd[1])    # dot of the two rows is proportional to cos (theta_vecs)
np.degrees(np.arctan2(msin, mcos))   # angle between rows of matrix

mmdi = np.linalg.inv(mmd)
mmdi
np.matmul(mmd, mmdi)      # inverse is quite accurate

np.linalg.det(mmd)
1.0 / np.linalg.det(mmd)
np.linalg.det(mmdi)       # determinants match well

mms = np.float32(mmd)     # convert to single precision
mms.dtype                 # 32-bit single precision

mmsi = np.linalg.inv(mms)
mmsi
np.matmul(mms, mmsi)      # in single precision the inverse is quite inaccurate

np.linalg.det(mms)
1.0 / np.linalg.det(mms)
np.linalg.det(mmsi)       # calculated determinants differ significantly
```

Good luck.

K. Frank


Thanks. With `float64` they now agree much more closely.