Sparse Linear layer

Hi all, I would appreciate an example of how to create a sparse Linear layer, i.e. one similar to a fully connected layer but with some links absent. It seems that `torch.sparse` should be used, but I do not quite understand how to achieve that. I start from a dense tensor (an image in my case); the next (hidden) layer should be a dense image of a smaller size, and so on, following the autoencoder layout. The only difference from the conventional pattern is that not every pair of entries in successive layers is connected (for performance reasons).
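One common approach (a minimal sketch, not the only way) is to keep a dense `nn.Linear` and multiply its weight by a fixed binary connectivity mask in the forward pass. Note that this still stores and computes the full dense weight, so it enforces the connectivity pattern but saves no memory or compute; the name `MaskedLinear` below is my own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer with a fixed binary connectivity mask (1 = link present)."""
    def __init__(self, in_features, out_features, mask):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # mask has shape (out_features, in_features); a buffer, so it is not trained
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        # zeroing the weight also zeroes its gradient, so absent links stay absent
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

# e.g. a 784 -> 64 layer where each unit sees roughly 10% of the inputs
mask = torch.rand(64, 784) < 0.1
layer = MaskedLinear(784, 64, mask)
out = layer(torch.randn(32, 784))  # shape (32, 64)
```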


Any news on this? I am also looking for sparse linear layers.

Here you go, a full implementation of a sparse layer on GitHub: https://github.com/numenta/htmpapers/tree/master/arxiv/how_can_we_be_so_dense

@Aamir_Mirza thanks for sharing. Does this implementation save GPU memory compared to dense layers, or does it just enforce sparsity of the weights for convergence reasons?
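For actual memory savings you would need to store the weights sparsely, e.g. as a COO tensor, and multiply with `torch.sparse.mm`. Here is a minimal sketch of that idea; `SparseLinear` and its `connections` argument are my own names, and this only pays off when the weight matrix is very sparse (a dense masked layer, as sketched above, is usually faster at moderate sparsity).

```python
import torch
import torch.nn as nn

class SparseLinear(nn.Module):
    """Linear layer whose weight is stored as a sparse COO tensor."""
    def __init__(self, in_features, out_features, connections):
        # connections: LongTensor of shape (2, nnz) listing (out_idx, in_idx) pairs
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.register_buffer("indices", connections)
        self.values = nn.Parameter(torch.randn(connections.size(1)) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # rebuild the sparse weight each step so autograd tracks self.values
        weight = torch.sparse_coo_tensor(
            self.indices, self.values,
            (self.out_features, self.in_features)
        )
        # torch.sparse.mm expects the sparse operand first: (out, in) @ (in, batch)
        return torch.sparse.mm(weight, x.t()).t() + self.bias

# e.g. 1000 random connections in a 784 -> 64 layer
conn = torch.stack([torch.randint(0, 64, (1000,)), torch.randint(0, 784, (1000,))])
layer = SparseLinear(784, 64, conn)
out = layer(torch.randn(32, 784))  # shape (32, 64)
```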

Is there any update on this? Are there any new libraries that implement sparse layers?

https://github.com/huggingface/pytorch_block_sparse is the latest I am aware of, but I don't know about its performance.
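From that repo's README, usage is roughly as follows (the exact signature is an assumption on my part and may have changed, so check the README; it also needs a CUDA GPU, since the speedups come from custom block-sparse CUDA kernels):

```python
import torch
from pytorch_block_sparse import BlockSparseLinear

# drop-in replacement for nn.Linear; density=0.1 keeps ~10% of the weight blocks
fc = BlockSparseLinear(1024, 256, density=0.1).cuda()
out = fc(torch.randn(8, 1024).cuda())  # shape (8, 256)
```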