Counting the number of shared active sites in sparse tensors

I’m trying to create a custom loss that penalizes a sparse network (SparseConvNet) for incorrectly activating inactive sites in its output. The only way that has occurred to me is to count the number of active sites shared between the network’s output and the target output, then compute a loss that increases as this count decreases. Is there a way to make this differentiable without expanding the sparse output and sparse target to their dense representations and then doing something like loss=torch.norm(output-target, p=1)?
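
In code, the non-differentiable version of what I have in mind is roughly this (with made-up coordinates, just to illustrate):

import torch

# Non-differentiable sketch of the idea, with made-up coordinates:
# count the active sites that appear in both the prediction and the target,
# and make the loss grow as that overlap shrinks.
pred_coords = torch.tensor([[0, 0, 0], [1, 2, 3], [4, 4, 4]])    # predicted active sites
target_coords = torch.tensor([[0, 0, 0], [4, 4, 4], [7, 7, 7]])  # target active sites

pred_set = {tuple(c.tolist()) for c in pred_coords}
target_set = {tuple(c.tolist()) for c in target_coords}
shared = len(pred_set & target_set)      # here: 2 shared sites
loss = float(len(target_set) - shared)   # grows as the overlap shrinks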

Hi,

Doesn’t doing loss=torch.norm(output-target, p=1) with output and target being the sparse Tensors do what you want?
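
Something like this minimal sketch (with made-up indices and values) should give you the L1 distance without ever densifying:

import torch

# Two small sparse COO tensors over the same 3D grid (made-up data)
a = torch.sparse_coo_tensor(torch.tensor([[0, 1], [0, 1], [0, 1]]),
                            torch.tensor([1.0, 2.0]), (4, 4, 4))
b = torch.sparse_coo_tensor(torch.tensor([[0, 2], [0, 2], [0, 2]]),
                            torch.tensor([1.0, 3.0]), (4, 4, 4))

# L1 norm of the (still sparse) difference
loss = torch.norm(a - b, p=1)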

Yep. I hadn’t tried using the built-in torch sparse tensors, and that seems to give me what I want. Do you know where in the source code this logic is implemented for sparse tensors? I’d like to take a look.

Actually, I’m encountering some issues now with backprop. After implementing the loss like this,

import torch

class SparseActivityLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, activations, heatmap, densities):
        # Coordinates and feature values of the active sites in the network output
        indices, values = activations.get_spatial_locations(), activations.features
        # Keep the three spatial coordinates (drop the batch column)
        indices = indices[:, :3]
        # Rebuild the prediction and the target as torch sparse tensors on the same
        # 128^3 grid with one feature channel; heatmap holds the target coordinates
        # and densities the target values
        activations = torch.sparse.FloatTensor(indices.t(), values, torch.Size([128, 128, 128, 1]))
        heatmap = torch.sparse.FloatTensor(heatmap.t(), densities, torch.Size([128, 128, 128, 1]))

        return torch.norm(activations - heatmap, p=1)

I encounter the following error:

Epoch 0
Traceback (most recent call last):
  File "pbox.py", line 82, in <module>
    loss.backward()
  File "/home/jack/miniconda3/envs/thesis/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jack/miniconda3/envs/thesis/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Could not run 'aten::sign.out' with arguments from the 'SparseCPU' backend. 'aten::sign.out' is only available for these backends: [CPU, CUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

CPU: registered at /pytorch/build/aten/src/ATen/CPUType.cpp:2127 [kernel]
CUDA: registered at /pytorch/build/aten/src/ATen/CUDAType.cpp:2983 [kernel]
BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: fallthrough registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
AutogradOther: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradCPU: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradCUDA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradXLA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse1: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse2: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
AutogradPrivateUse3: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:8078 [autograd kernel]
Tracer: registered at /pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:9654 [kernel]
Autocast: fallthrough registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [backend fallback]
Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:511 [backend fallback]
VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]

I’m guessing this has something to do with converting from SparseConvNetTensors to torch.sparse.FloatTensors, but I really have no idea what I’m doing.

Given the function involved, it is most likely that the backward of abs (used inside norm(p=1)) is not implemented for sparse Tensors…

You can find the implementations for sparse Tensors, like for any other backend, in the aten/src/ATen/native folder of the PyTorch source.

Do you know of a way around this?

I can’t think of a way to compute an L1 norm without using abs…
I’m afraid there aren’t that many functions implemented for sparse Tensors yet…
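
If densifying only inside the loss is acceptable, one thing you could try (an untested sketch; I am not sure every intermediate backward is implemented for sparse Tensors) is to call .to_dense() on the difference, so the abs/sign part of the backward runs on the dense backend:

# Untested sketch: the subtraction stays sparse, but the p=1 norm (whose
# backward needs sign/abs) is taken on a dense tensor instead.
diff = activations - heatmap
loss = torch.norm(diff.to_dense(), p=1)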

Alright, I’ll go bother people in the repository. Thanks!