Hi all,
I’d like to share SparseLab, a library for training neural networks with dynamic sparse topologies where the weights are actually stored sparsely (not as a mask over dense tensors). It integrates with PyTorch through a torch.autograd.Function and a custom nn.Module, so it participates in the standard ecosystem: .parameters(), .state_dict(), torch.optim, and DDP compatibility (untested at scale in v0.1).
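To give a flavor of what "participates in the standard ecosystem" means, here is a minimal sketch. The import path and the nn.Linear-style (in_features, out_features) constructor are assumptions on my part (the post describes SparseLinear as drop-in); check the repo for the actual API:

```python
import torch
import torch.nn as nn
from sparselab import SparseLinear  # import path is an assumption; see the repo

# Drop-in use inside an ordinary model, assuming an nn.Linear-style signature
model = nn.Sequential(
    SparseLinear(784, 256),
    nn.ReLU(),
    SparseLinear(256, 10),
)

# Because SparseLinear is a regular nn.Module, the standard machinery applies:
# .parameters() feeds any torch.optim optimizer, and backward() runs through
# the custom autograd.Function under the hood.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# .state_dict() works as usual for checkpointing
torch.save(model.state_dict(), "sparse_model.pt")
```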
Install: pip install sparselab
Repo: https://github.com/DarshanFofadiya/sparselab (actually-sparse dynamic training for PyTorch; CPU-native, Apple Silicon first; pluggable routers, drop-in SparseLinear)
License: MIT. The suite has 372 tests, including torch.autograd.gradcheck on the custom Function.
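For anyone unfamiliar with that last point, this is what gradcheck-style verification of a custom Function looks like in general. The Function below is a toy stand-in, not SparseLab's actual kernel; only the verification pattern carries over:

```python
import torch

class ScaledSquare(torch.autograd.Function):
    """Toy Function for illustration only; not SparseLab's actual op."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.save_for_backward(x)
        ctx.scale = scale
        return scale * x * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx (scale * x^2) = 2 * scale * x; scale is a plain float, no grad
        return grad_out * 2.0 * ctx.scale * x, None

# gradcheck compares the analytical backward against finite differences;
# double-precision inputs are required for the default tolerances to hold.
x = torch.randn(8, dtype=torch.double, requires_grad=True)
assert torch.autograd.gradcheck(lambda t: ScaledSquare.apply(t, 3.0), (x,))
print("gradcheck passed")
```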