Status of Sparse Matrices, operations, autograd, etc.

Hi there,

What is the current status of sparse matrices in PyTorch? Which operations are currently supported? Can we create "sparse" versions of modules like torch.nn.Linear, for example?
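For the torch.nn.Linear question: you can hold a COO tensor in an nn.Parameter and multiply it with torch.sparse.mm, which supports autograd for both operands. A minimal sketch of what I mean (the class name, density argument, and initialization scheme below are my own invention, not a PyTorch API):

```python
import torch
from torch import nn

class SparseLinear(nn.Module):
    """Sketch of a Linear-like layer whose weight is a sparse COO tensor."""
    def __init__(self, in_features, out_features, density=0.1):
        super().__init__()
        nnz = max(1, int(in_features * out_features * density))
        # Random nonzero coordinates (rows index outputs, cols index inputs).
        idx = torch.stack([torch.randint(0, out_features, (nnz,)),
                           torch.randint(0, in_features, (nnz,))])
        vals = torch.randn(nnz) * 0.01
        weight = torch.sparse_coo_tensor(idx, vals, (out_features, in_features))
        self.weight = nn.Parameter(weight.coalesce())
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # torch.sparse.mm(sparse, dense): (out, in) @ (in, batch) -> (out, batch)
        return torch.sparse.mm(self.weight, x.t()).t() + self.bias

layer = SparseLinear(8, 4)
out = layer(torch.randn(5, 8))
print(out.shape)  # torch.Size([5, 4])
```

Note that optimizer support for sparse *parameters* (as opposed to dense parameters with sparse gradients) is limited, so I'd verify the update step works for your optimizer of choice before relying on this.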

Is autograd currently working for an expression like sparse_nn_parameter.matmul(dense_variable) + dense_nn_parameter, producing a dense variable as the result?
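For what it's worth, torch.sparse.mm supports backward for both the sparse and the dense operand (the gradient of the sparse operand comes back as a sparse tensor), so an expression of this shape does work. A quick sketch, reusing the names from the question as illustrative placeholders:

```python
import torch

# Illustrative stand-ins for the names in the question.
idx = torch.tensor([[0, 1], [1, 2]])            # nonzero coordinates
val = torch.tensor([2.0, 3.0])
sparse_nn_parameter = torch.sparse_coo_tensor(idx, val, (2, 3),
                                              requires_grad=True)
dense_variable = torch.randn(3, 4, requires_grad=True)
dense_nn_parameter = torch.randn(2, 4, requires_grad=True)

dense_nn_variable = (torch.sparse.mm(sparse_nn_parameter, dense_variable)
                     + dense_nn_parameter)
dense_nn_variable.sum().backward()

print(dense_variable.grad.shape)           # torch.Size([3, 4])
print(sparse_nn_parameter.grad.is_sparse)  # True
```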

Is GPU memory usage for sparse tensors actually lower than for their dense counterparts? My empirical tests suggest otherwise.
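That observation matches the storage arithmetic for the COO layout: each nonzero costs its value plus one int64 coordinate per dimension, so for a float32 matrix the break-even density is roughly 4 / (4 + 2×8) = 20%, and above that, sparse genuinely uses more memory than dense. A back-of-the-envelope comparison (sizes chosen arbitrarily):

```python
import torch

n, density = 1000, 0.3                      # arbitrary example sizes
dense = torch.randn(n, n)
dense[torch.rand(n, n) > density] = 0.0     # keep ~30% of entries nonzero
sparse = dense.to_sparse().coalesce()       # COO layout

dense_bytes = dense.numel() * dense.element_size()
# COO stores a (ndim, nnz) int64 index tensor plus an (nnz,) value tensor.
sparse_bytes = (sparse.indices().numel() * sparse.indices().element_size()
                + sparse.values().numel() * sparse.values().element_size())

print(dense_bytes, sparse_bytes)            # at 30% density, sparse is larger
```

On GPU the same arithmetic applies, and sparse kernels are often less efficient than the dense cuBLAS ones, so speed can also be worse unless the matrix is very sparse.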

Thanks.

Bump because a reply would be greatly appreciated.