# Nullspace of a tensor

Hi

I’m looking for a method to compute the nullspace of a tensor that supports gradients (i.e., works with `.backward()`).

For example, PyTorch has the `torch.symeig` method to compute eigenvalues and eigenvectors, and I can backprop through it. However, I can’t find anything similar for the nullspace.

Any suggestions would be appreciated!

Hi,

I don’t think we have such a thing.
Are you looking for the vectors that form a basis of the nullspace?

Yes.

I already wrote my own function in a (hopefully) differentiable way.
Here it is:

```python
def my_nullspace(At, rcond=None):
    ut, st, vht = torch.Tensor.svd(At, some=False, compute_uv=True)
    vht = vht.T
    Mt, Nt = ut.shape[0], vht.shape[1]
    if rcond is None:
        rcond = torch.finfo(st.dtype).eps * max(Mt, Nt)
    tolt = torch.max(st) * rcond
    numt = torch.sum(st > tolt, dtype=int)
    nullspace = vht[numt:, :].T.cpu().conj()  # ===> problem here
    # nullspace.backward(torch.ones_like(nullspace), retain_graph=True)
    return nullspace
```

Everything works fine even when I call `.backward()` from `nullspace` (in the case `nullspace = vht.T.cpu().conj()`). However, when I take just a part of the tensor `vht` (i.e., `vht[numt:,:]`), the gradients become zero and I don’t know why. Should I call it like `vht[numt:,:].clone()`, or is there something wrong?

Hi,

Here is my test script, it seems to work fine no?

```python
import torch

a = torch.rand(10, 2)

A = torch.mm(a, a.t())  # rank 2 input to have a nullspace of size 8
A.requires_grad_()

def my_nullspace(At, rcond=None):
    ut, st, vht = torch.Tensor.svd(At, some=False, compute_uv=True)
    vht = vht.T
    Mt, Nt = ut.shape[0], vht.shape[1]
    if rcond is None:
        rcond = torch.finfo(st.dtype).eps * max(Mt, Nt)
    tolt = torch.max(st) * rcond
    numt = torch.sum(st > tolt, dtype=int)
    nullspace = vht[numt:, :].T.cpu().conj()
    return nullspace

out = my_nullspace(A)

print("out size, should be 10x8: ", out.size())

out.sum().backward()
```

It works fine, thank you.

Just realised that the problem wasn’t PyTorch-related.
I had a saved matrix variable with the same name in the Python environment which was 6×6 (and full rank), so its nullspace was always zero.
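To illustrate that point: a full-rank matrix has only the trivial nullspace, so an SVD-based nullspace function returns an empty basis for it. Below is a quick check; the `nullspace` helper is my own restatement of the function from this thread, sketched with `torch.linalg.svd` instead of the deprecated `torch.svd`:

```python
import torch

torch.manual_seed(0)

def nullspace(A, rcond=None):
    # torch.linalg.svd returns U, S, Vh directly (Vh is V transposed),
    # unlike the deprecated torch.svd used elsewhere in this thread.
    U, S, Vh = torch.linalg.svd(A)
    M, N = U.shape[0], Vh.shape[1]
    if rcond is None:
        rcond = torch.finfo(S.dtype).eps * max(M, N)
    num = int((S > S.max() * rcond).sum())  # numerical rank
    return Vh[num:, :].T.conj()

# A full-rank 6x6 matrix has only the trivial nullspace: empty basis.
print(nullspace(torch.eye(6)).shape)   # torch.Size([6, 0])

# A rank-2 10x10 matrix has an 8-dimensional nullspace.
a = torch.rand(10, 2)
print(nullspace(a @ a.T).shape)        # torch.Size([10, 8])
```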


Hi @albanD and @Abdelpakey,

I am relatively new to PyTorch and deep learning in general, and was wondering what these 2 pieces of code from @albanD are doing?

```python
A.requires_grad_()   # what is this doing?
out = my_nullspace(A)
```
```python
out.sum().backward()   # what is this doing?
```

Would really appreciate if you could help. Many thanks in advance!

It tells PyTorch that the `A` tensor can be part of the optimisation process. In other words, it makes `A` differentiable, so gradients are computed for it and it can be updated by gradient descent.
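As a minimal sketch of what `requires_grad_()` changes (the tensor name and the toy computation here are just for illustration):

```python
import torch

A = torch.rand(3, 3)
print(A.requires_grad)  # False: autograd does not track operations on A

A.requires_grad_()      # in-place switch: autograd now records ops on A
print(A.requires_grad)  # True

out = (A * 2).sum()
out.backward()          # fills A.grad with d(out)/dA
print(A.grad)           # a 3x3 tensor full of 2s
```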

It is used as a loss function. In general, to backprop from the loss to the network weights, your loss (whatever it is) should be differentiable and a single SCALAR score, e.g. `loss = value`, where `value` is just a scalar tensor. Here, `out.sum()` reduces the output tensor to one scalar so that `.backward()` can be called on it.
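The scalar-reduction point can be sketched like this (variable names are illustrative only):

```python
import torch

x = torch.rand(4, requires_grad=True)
out = x * 3       # non-scalar output: out.backward() alone would raise an error

loss = out.sum()  # reduce to a single scalar "score"
loss.backward()   # backprop from that scalar through the graph
print(x.grad)     # d(loss)/dx = 3 for every element
```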

Hope this helps.


Many thanks @Abdelpakey! Your explanation for these 2 lines of code makes sense.

However, I am confused by the function `my_nullspace()`: it just calculates the null space of a matrix, so why would calculating the null space involve things like making the `A` matrix differentiable and backpropagating a loss?

Many thanks again @Abdelpakey.