I’m looking to back-propagate gradients through a singular value decomposition for regularisation purposes.
I know that I could write my own custom Function that operates on a Variable: in the forward pass it takes the .data tensor, applies torch.svd to it, wraps the singular values in a Variable and returns them; in the backward pass it applies the appropriate Jacobian matrix to the incoming gradients.
However, I was wondering whether there is a more elegant (and potentially faster) solution, where I could resolve the "Type Variable doesn't implement stateless method svd" error directly, call LAPACK myself, etc.
If someone could guide me through the appropriate steps and the source files I need to look at, I'd be very grateful. I suppose the same steps would apply to other linear algebra operations that currently have no associated backward method.
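For concreteness, the custom-Function approach described above can be sketched like this. This uses the modern torch.autograd.Function API (staticmethods with a ctx) rather than the Variable-era interface from the original question, and it returns only the singular values; the backward formula dL/dA = U diag(dL/ds) Vᴴ assumes the singular values are distinct:

```python
import torch

class SingularValues(torch.autograd.Function):
    """Returns the singular values s of A, where A = U diag(s) V^H.

    Backward uses the fact that, for distinct singular values,
    ds_i/dA = u_i v_i^H, hence dL/dA = U diag(dL/ds) V^H.
    """

    @staticmethod
    def forward(ctx, A):
        # Reduced SVD: U is (m, k), s is (k,), Vh is (k, n) with k = min(m, n)
        U, s, Vh = torch.linalg.svd(A, full_matrices=False)
        ctx.save_for_backward(U, Vh)
        return s

    @staticmethod
    def backward(ctx, grad_s):
        U, Vh = ctx.saved_tensors
        # Apply the Jacobian: dL/dA = U diag(grad_s) Vh
        return U @ torch.diag(grad_s) @ Vh
```

Usage is `s = SingularValues.apply(A)`; the result can be checked against finite differences with `torch.autograd.gradcheck` on a double-precision input.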
Hi! Thanks for your reply. I know I can write my own custom function. However, I'm looking for a solution that directly overrides the existing implementation of torch.svd, mainly for educational purposes, and because I'm wondering whether it might be quicker if I could call LAPACK directly, for example.
Yes, I did not change any of the backend C files. I just used PyTorch's custom Function interface to define a new function and implemented backward myself in Python. It worked okay, I think. I can probably dig it out for you later today. Which linear algebra operation do you need? SVD?
Does Simon's comment help you, or do you still need assistance? I wasn't aware that this functionality had already been added, but it's good to know. Thanks, Simon!
Daniel, the derivative of svd was implemented after we released 0.3.
You can wait for the next release, or install from source using instructions at: https://github.com/pytorch/pytorch#from-source
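For reference, once the SVD backward is available (i.e. in builds newer than 0.3), gradients flow through the built-in call with no custom Function at all. A minimal sketch of the singular-value regulariser from the original question, written against the current torch.linalg API (in the 0.4-era API this was torch.svd; torch.linalg.svdvals is the modern call for singular values only):

```python
import torch

# With a built-in SVD backward, a nuclear-norm regulariser is one line:
A = torch.randn(4, 4, dtype=torch.double, requires_grad=True)

nuclear_norm = torch.linalg.svdvals(A).sum()  # sum of singular values
nuclear_norm.backward()                       # gradients flow through the SVD

# A.grad now holds d(nuclear_norm)/dA, which equals U @ Vh for A = U S Vh
```

Note that the SVD gradient is only well-defined when the singular values are non-degenerate, which holds almost surely for random matrices.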