Sparse Matrix Forcing 0 Weights Crashing

Setup:
I have a matrix of weights, many of which are initially 0, and I do not want these zero weights to ever change.

As I understand it, if I use these weights as follows:

output = torch.sum(torch.addmm(biases, weights, inputs))  # biases + weights @ inputs
output.backward()  # populates weights.grad at every entry
optim.step()  # the update can move the zero entries off zero

then the weights that were initially 0 can become non-zero.
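For example, here is a minimal reproduction (the shapes and values are toy assumptions, not from my real model):

import torch

weights = torch.tensor([[0.0, 1.0],
                        [2.0, 0.0]], requires_grad=True)
inputs = torch.ones(2, 2)
biases = torch.zeros(2, 2)
optim = torch.optim.SGD([weights], lr=0.1)

output = torch.sum(torch.addmm(biases, weights, inputs))
output.backward()
optim.step()

print(weights.grad)  # non-zero at every entry, including where weights was 0
print(weights)       # the formerly zero entries are now -0.2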

However, I do not want this to happen. Can I represent the weights as a sparse matrix and force it to ignore gradients for the zero-valued elements, so that they never become non-zero?

Additionally, this error:

RuntimeError: set_indices_and_values_unsafe is not allowed on a Tensor created from .data or .detach().
If your intent is to change the metadata of a Tensor (such as sizes / strides / storage / storage_offset)
without autograd tracking the change, remove the .data / .detach() call and wrap the change in a `with torch.no_grad():` block.

comes up whenever I try to update the weights in my sparse matrix for the second time. Does this mean we’re not allowed to backprop through and update sparse matrices?

I’m not sure the sparse approach can work here, but you could instead keep the weights dense and use register_hook on the weight tensor to zero out the gradients at the positions that must stay zero.
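A minimal sketch of that approach (the toy tensors and the SGD settings are my assumptions, not from the original post):

import torch

weights = torch.tensor([[0.0, 1.0],
                        [2.0, 0.0]], requires_grad=True)

# Capture the sparsity pattern once, before any optimizer step,
# while the frozen entries are still exactly zero.
mask = (weights != 0).float()

# The hook fires on every backward pass and zeroes the gradient
# entries that correspond to the frozen zero weights.
weights.register_hook(lambda grad: grad * mask)

inputs = torch.ones(2, 2)
biases = torch.zeros(2, 2)
optim = torch.optim.SGD([weights], lr=0.1)

output = torch.sum(torch.addmm(biases, weights, inputs))
output.backward()
optim.step()

print(weights)  # the masked entries are still exactly zero

The one thing to get right is building the mask before training starts: once an entry has drifted off zero, masking its gradient will only freeze it at its current value, not reset it to zero.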