How to trace just neuron activations

Hello, I am wondering if there is already a written method to extract whether a neuron is activated or not.
Something like this:

if neuron_value >= 0:
    neuron_value = 1
else:
    neuron_value = 0

but applied to all neurons in the network. The final output of one layer (linear or convolutional) would then be, for example,

[0 1 0 1 1 1 1 1 1]

or

[[0 1 0 0 1 0 ], [0 1 0 1 0 0], [0 1 0 1 0 0], [0 1 1 1 0 0]]
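For a single layer's output tensor, I imagine the thresholding could be done in one vectorized step, something like this (just a sketch with made-up example values, assuming PyTorch tensors):

import torch

out = torch.tensor([-0.3, 0.7, -1.2, 0.1, 0.5, 0.9, 0.2, 0.4, 0.8])
binary = (out >= 0).int()   # 1 where the value is >= 0 (neuron "on"), else 0
print(binary)               # tensor([0, 1, 0, 1, 1, 1, 1, 1, 1], dtype=torch.int32)

The same comparison would work for convolutional outputs of any shape, since it is applied elementwise.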

I now know how to get the activation values (from this post: How can l load my best model as a feature extractor/evaluator? - #55 by dugr), but now I just want to track whether each neuron is on or off in the network.
It can also be calculated manually (with an if/else in every layer), but for convolutional and dense networks that requires a lot of programming and careful handling of tensor shapes and dimensions (especially in convolutional layers), so I was wondering if there is some other solution that calculates it for all layers at once for a specific input.
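To make clearer what I am hoping for, here is a rough sketch of what I have in mind (the helper name collect_binary_activations and the toy model are just placeholders I made up, not working code from my project): register a forward hook on every Linear/Conv2d module and store the thresholded output, so a single forward pass collects the on/off pattern of every layer.

import torch
import torch.nn as nn

def collect_binary_activations(model, x):
    """Run one forward pass and return {layer_name: 0/1 tensor} for every
    Linear/Conv2d layer, where 1 means the output value was >= 0."""
    activations = {}
    hooks = []

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = (output.detach() >= 0).int()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(x)

    for h in hooks:
        h.remove()
    return activations

# Example with a throwaway model (shapes chosen arbitrarily)
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 26 * 26, 9),
)
binary = collect_binary_activations(model, torch.randn(1, 1, 28, 28))
for name, b in binary.items():
    print(name, b.shape)

Is something like this the right approach, or is there an existing utility that already does it?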

Thank you.