The ^{n}A matrices are constructed from several component ^{n}A_{i} matrices, each multiplied by an element of a vector v: ^{1}A = ^{1}A_{a} * v_{a} + ^{1}A_{b} * v_{b} + ^{1}A_{c} * v_{c} + … The ^{n}A_{i} matrices are large and sparse, and there are many of them, so I would like to store them as sparse tensors. Are there any plans to implement a sparse linear solver (direct, since I guess iterative would be impossible) that is amenable to autograd? In the end I need autograd to compute the Jacobian dx/dv.
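For reference, the dense version of this setup can be sketched as follows. The names (`A_comps`, `solve_for_x`) and the small sizes are illustrative, not from any PyTorch API; this just shows how the Jacobian dx/dv falls out of autograd once A(v) = Σ_i A_i v_i is built from dense components:

```python
import torch

torch.manual_seed(0)
n, k = 4, 3

# Dense stand-ins for the ^{1}A_i component matrices and the right-hand side b
A_comps = [torch.randn(n, n) for _ in range(k)]
b = torch.randn(n)
v = torch.randn(k, requires_grad=True)

def solve_for_x(v):
    # Assemble A(v) = sum_i A_i * v_i, then solve A x = b
    A = sum(Ai * vi for Ai, vi in zip(A_comps, v))
    return torch.linalg.solve(A, b)

# Jacobian dx/dv, shape (n, k), computed by autograd
J = torch.autograd.functional.jacobian(solve_for_x, v)
print(J.shape)
```

With sparse ^{1}A_i this assembly would be cheap, but the solve itself is the missing differentiable piece.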

For now I have it working with dense systems, but I was wondering whether there will be a version of lu_solve that works with backpropagation, since I would like to store the factorization of A for reuse later on. Discussed in issue: https://github.com/pytorch/pytorch/issues/22620
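For the dense case, a sketch of what reusing a stored factorization could look like, assuming a recent PyTorch where torch.linalg.lu_factor / torch.linalg.lu_solve are available and support autograd (the toy system A(v) here is illustrative):

```python
import torch

torch.manual_seed(0)
n = 4
v = torch.randn(n, requires_grad=True)
A = torch.diag(v) + 2.0 * torch.eye(n)     # toy parameter-dependent system

LU, pivots = torch.linalg.lu_factor(A)     # factor once, keep for later solves

b1 = torch.randn(n, 1)
b2 = torch.randn(n, 1)
x1 = torch.linalg.lu_solve(LU, pivots, b1) # reuse the factorization
x2 = torch.linalg.lu_solve(LU, pivots, b2)

x1.sum().backward()                        # gradients flow back to v
print(v.grad)
```

The open question in the linked issue is exactly this backward pass (the gradient with respect to the LU data), which is what would need to be derived for a sparse direct solver as well.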

As mentioned in the issue, we would be happy to add this.
Do you know what the gradient of the output wrt LU_data would be? Or do you have a reference where we can find that formula?

Oh, sorry, I misinterpreted: I thought you were already working on it. Unfortunately, I do not know of such a publication.

On another note: are there any plans to add a torch.sparse x torch.sparse matrix multiplication method? It would not have to work with autograd, but when constructing my system this would save tons of memory.

Unfortunately, no one is working on this at the moment. The GitHub issue is there so that if someone wants to contribute it, we will be happy to help and accept a PR.

For sparse x sparse mm, I think it is on the roadmap for sparse. There is an issue discussing this as well that will have more information.

Thanks a bunch! For now I am working around it by doing all sparse x sparse multiplications as sparse x dense and then converting the result back to sparse. All of the resulting tensors are static, so they do not need to be autogradable.
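That workaround can be sketched like this (the small COO tensors are made up for illustration): one operand is densified, multiplied with torch.sparse.mm, and the dense result is converted back to sparse. The peak memory cost is the dense intermediate, not a dense x dense product.

```python
import torch

# Two toy sparse COO matrices standing in for the ^{n}A_i components
i1 = torch.tensor([[0, 1, 2], [0, 1, 2]])
S1 = torch.sparse_coo_tensor(i1, torch.tensor([1.0, 2.0, 3.0]), (3, 3))

i2 = torch.tensor([[0, 2], [1, 0]])
S2 = torch.sparse_coo_tensor(i2, torch.tensor([4.0, 5.0]), (3, 3))

# sparse x dense is supported; sparsify the dense result afterwards
out = torch.sparse.mm(S1, S2.to_dense()).to_sparse()
print(out)
```

Since these tensors are static, doing this once at construction time and caching the sparse results avoids paying the dense cost repeatedly.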