Finding activations per layer

I am looking for a way to count the activated neurons per layer in a given neural network model. Take a VGG-style CNN: if a compression algorithm is applied, such as "Deep Compression" (Han et al.), "Learning both Weights and Connections for Efficient Neural Networks" (Han et al.), or "The Lottery Ticket Hypothesis" (Frankle et al.), connections are pruned in each trainable layer. I want to find the activations per layer after this pruning has happened.
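For context, here is the kind of thing I mean, as a minimal sketch in PyTorch (the small `nn.Sequential` model is just a hypothetical stand-in for a pruned VGG-style network): forward hooks registered on each ReLU count how many of its outputs are nonzero for a batch.

```python
import torch
import torch.nn as nn

# Hypothetical VGG-style stand-in; a real pruned model would go here.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)

activation_counts = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Record how many activations are nonzero after this ReLU.
        activation_counts[name] = int((output != 0).sum().item())
    return hook

handles = []
for name, module in model.named_modules():
    if isinstance(module, nn.ReLU):
        handles.append(module.register_forward_hook(make_hook(name)))

with torch.no_grad():
    model(torch.randn(1, 3, 32, 32))

for name, count in activation_counts.items():
    print(name, count)

# Remove hooks so later forward passes are unaffected.
for h in handles:
    h.remove()
```

Is a hook-based approach like this the right direction, or is there a more standard way to do it after pruning?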

Help?