Spiking Activation

I’m trying to implement spiking activation on a CNN. However, it results in dimension errors. I’ve tried snntorch, pytorch_spiking, etc.

I’m unable to implement SNN activation.

Could you please point me to any resource for SNN activation that works with a PyTorch CNN?

I’m the maintainer of GitHub - norse/norse: Deep learning with spiking neural networks (SNNs) in PyTorch.
We have examples of CNN integration directly in the README :slight_smile:

@Jegp, I have also tried Norse, but I’m still getting dimension errors.

I passed a 3D input for text, shaped [batch_size, sequence_length, embedding_dim]. With this input I got dimension errors, so I converted the input to [batch_size, hidden_dim]. I observed that the batch size changes after applying the spiking activation.
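
For reference, a minimal sketch of how a stateful spiking cell such as Norse’s LIFCell is usually applied to sequence data: the sequence dimension is treated as time and the cell is called once per step, so the batch dimension is never changed. The layer sizes below are made up for illustration and are not from the thread.

import torch, torch.nn as nn
from norse.torch import LIFCell

batch_size, sequence_length, embedding_dim, hidden_dim = 8, 16, 32, 64  # hypothetical sizes

linear = nn.Linear(embedding_dim, hidden_dim)
cell = LIFCell()  # shape-agnostic spiking activation

x = torch.randn(batch_size, sequence_length, embedding_dim)
state, outputs = None, []
for t in range(sequence_length):             # step over the sequence (time) dimension
    z, state = cell(linear(x[:, t]), state)  # z has shape [batch_size, hidden_dim]
    outputs.append(z)
spikes = torch.stack(outputs, dim=1)         # [batch_size, sequence_length, hidden_dim]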

Did you try the example in the README? The dimensionality should not change after an activation layer. Sounds like something else could be happening.

@Jegp, it’s working fine now. I’m getting 3 different outputs from LIFCell(). What are those outputs related to?

Happy to hear :slight_smile:

The outputs of the LIFCell are shaped exactly like its inputs. So the LIF* modules are simply stateful activation functions.
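
If it helps, a quick sketch of what a single LIFCell call returns (my own example, not copied from the docs): a spike tensor with the same shape as the input, plus a state object that carries the neuron variables (membrane voltage etc.), which you pass back in on the next call.

import torch
from norse.torch import LIFCell

cell = LIFCell()
x = torch.randn(8, 50, 4, 4)   # any shape works; the dynamics are applied elementwise

z, state = cell(x)             # first call: the state is initialised internally
print(z.shape)                 # torch.Size([8, 50, 4, 4]) -- same shape as the input
z2, state = cell(x, state)     # subsequent calls: pass the previous state back in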

I observed that all the spikes are zeros and are not having any impact on the model.

This is for the example code mentioned in the Norse documentation/GitHub:

import torch, torch.nn as nn
from norse.torch import LICell           # Leaky integrator
from norse.torch import LIFCell          # Leaky integrate-and-fire
from norse.torch import SequentialState  # Stateful sequential layers

model = SequentialState(
    nn.Conv2d(1, 20, 5, 1),  # Convolve from 1 → 20 channels
    LIFCell(),               # Spiking activation layer
    nn.MaxPool2d(2, 2),
    nn.Conv2d(20, 50, 5, 1), # Convolve from 20 → 50 channels
    LIFCell(),
    nn.MaxPool2d(2, 2),
    nn.Flatten(),            # Flatten to 800 units
    nn.Linear(800, 10),
    LICell(),                # Non-spiking integrator layer
)

data = torch.randn(8, 1, 28, 28) # 8 batches, 1 channel, 28x28 pixels
output, state = model(data)

print(output)

tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], grad_fn=<...>)

Even this results in an output of all zeros.


The problem with spiking neural networks is that if they don’t spike, the subsequent layer only receives zeros, which means your output will be zero.

What happens if you, for instance, set your data to
data = torch.randn(8, 1, 28, 28) + 100?
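
For anyone following along, a quick way to try this suggestion, reusing the model from the README example above (my own sketch; it assumes SequentialState also accepts a state argument and returns it, as in output, state = model(data)):

import torch

data = torch.randn(8, 1, 28, 28) + 100   # strongly positive input, so neurons should spike

# Single forward pass with the boosted input
output, state = model(data)
print(output.abs().sum())                 # should now be non-zero

# Alternatively, present the same image for several time steps and accumulate the output,
# giving the membrane potentials time to reach threshold
T, state, total = 32, None, 0
for _ in range(T):
    out, state = model(data, state)       # assumption: the state can be threaded through like this
    total = total + out
print(total)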