Can I use PyTorch or NumPy (or anything else — an off-the-shelf method, script, or library) to do the following?
Given d-1 random d-dimensional vectors (specifically a random (d-1) x d matrix), get a d-dimensional vector (specifically a (d,)-shaped tensor) that is perpendicular to every one of those vectors.
Something like this:
vectors = torch.randn(d-1,d)
perpendicular_vector = torch.get_perpendicular(vectors) # (d,)
assert vectors.matmul(perpendicular_vector).sum().item() == 0
Not sure if this functionality already exists, but you could try writing a custom function for it.
Let’s say the given matrix is called A. Figure out the eigenvalues of A, and if the smallest one (ordered by absolute value) is 0, then the corresponding (right) eigenvector is your desired vector. If not, you can do something like the Gram-Schmidt process: start from some random vector and iteratively subtract its component along each eigenvector. That should give you a vector that is perpendicular to all rows of A.
Hope this helps.
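For what it’s worth, the iterative “keep subtracting components” idea only works directly if the directions you project onto are orthonormal (subtracting projections onto non-orthogonal rows one at a time leaves residuals). Here is a rough sketch of that approach, where the orthonormalization step (done via QR) and the function name are my own additions, not something from an existing library:

```python
import torch

def perpendicular_by_projection(vectors: torch.Tensor) -> torch.Tensor:
    """Sketch: start from a random vector and subtract its component
    along each row direction of `vectors`.

    The rows are first orthonormalized with a QR decomposition;
    projecting onto non-orthogonal rows one by one would not remove
    all components.
    """
    d = vectors.shape[1]
    # Orthonormal basis for the row space of `vectors`.
    # q has shape (d, d-1) with orthonormal columns.
    q, _ = torch.linalg.qr(vectors.T)
    v = torch.randn(d)
    # Subtract the projection of v onto each basis vector.
    for i in range(q.shape[1]):
        u = q[:, i]
        v = v - (v @ u) * u
    return v / v.norm()

torch.manual_seed(0)
A = torch.randn(4, 5)
p = perpendicular_by_projection(A)
print((A @ p).abs().max())  # should be ~0 up to float error
```

This fails only in the measure-zero case where the random starting vector happens to lie entirely in the row space of A.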
Hi Arna (and Richard)!
Note that A is not square in this case.
Probably the most numerically satisfactory way to do this is to
compute the singular-value decomposition of A with torch.linalg.svd():
>>> import torch
>>> _ = torch.manual_seed (2021)
>>> vectors = torch.randn (4, 5)
>>> vectors
tensor([[-0.1452,  0.9747,  0.6023,  1.5775,  0.7978],
        [-0.4101, -1.2622, -0.3932,  0.4675, -0.2879],
        [-0.0742,  0.1680,  1.6226,  0.7453, -2.9535],
        [-1.1143, -0.8463, -0.0393, -1.9088, -0.3105]])
>>> perpendicular_vector = torch.linalg.svd (vectors).Vh[-1]
>>> perpendicular_vector
tensor([ 0.4183, -0.4666,  0.7019, -0.1041,  0.3222])
>>> vectors @ perpendicular_vector
tensor([-2.9802e-08, -8.9407e-08, -2.3842e-07, -8.9407e-08])
Note that if A (vectors) is degenerate (that is, some linear
combination of its rows is zero), then perpendicular_vector
will no longer be (in essence) unique. In such a case A will have
a multi-dimensional null space (sometimes called its kernel), and
any vector lying in A’s null space will be (by definition) perpendicular
to A’s rows.
You are absolutely right @KFrank ! Thanks for pointing out that singular values, and not eigenvalues, are what’s needed here.
A small suggestion: it’s probably worth clarifying in the code that you used the last right singular vector — indexing it explicitly as
torch.linalg.svd(vectors).Vh[-1] adds a little more insight into the whole solution.
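Pulling the thread together, a helper with the interface the original question wished for might look like the sketch below. The name get_perpendicular comes from the question’s pseudocode (there is no such built-in in torch), and the check uses a tolerance rather than the exact `== 0` from the question, since the products are only zero up to floating-point error:

```python
import torch

def get_perpendicular(vectors: torch.Tensor) -> torch.Tensor:
    """Given a (d-1, d) matrix, return a (d,) unit vector perpendicular
    to every row, via the last right singular vector."""
    return torch.linalg.svd(vectors).Vh[-1]

torch.manual_seed(0)
d = 5
vectors = torch.randn(d - 1, d)
perpendicular_vector = get_perpendicular(vectors)  # shape (d,)
# Exact equality with 0 won't hold in floating point, so check with a tolerance.
assert torch.allclose(vectors @ perpendicular_vector,
                      torch.zeros(d - 1), atol=1e-5)
```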