Activation of every neuron at test time

Can someone please suggest an easy way to get the activation matrix of every layer at test time?

I am not sure it is the best option, but I would write a custom method on my module class that returns the output of each layer. So let's say you have 3 layers:

import torch
import torch.nn as nn

class custom_module(nn.Module):
    def __init__(self):
        super().__init__()
        # example layer sizes, replace with your own
        self.layer1 = nn.Linear(10, 20)
        self.layer2 = nn.Linear(20, 20)
        self.layer3 = nn.Linear(20, 5)

    def forward(self, x):
        return self.layer3(self.layer2(self.layer1(x)))

    def forward_test(self, x):
        # store each layer's activation under its name
        res = {}
        x = self.layer1(x)
        res["layer_1"] = x
        x = self.layer2(x)
        res["layer_2"] = x
        x = self.layer3(x)
        res["layer_3"] = x
        return res

And at test time you instantiate the module, switch it to eval mode, and call:

model = custom_module()
model.eval()
for batch in dataloader:
    with torch.no_grad():
        out = model.forward_test(batch)
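An alternative that avoids duplicating the forward logic is PyTorch's forward-hook mechanism (`register_forward_hook`), which records each submodule's output on any existing model. A minimal sketch; the toy `nn.Sequential` model and its layer sizes here are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Toy model standing in for your own network.
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 5),
)

activations = {}

def save_activation(name):
    # Hooks receive (module, inputs, output); we keep the output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach one hook per submodule, keyed by its name.
for name, module in model.named_modules():
    if name:  # skip the root module itself
        module.register_forward_hook(save_activation(name))

with torch.no_grad():
    model(torch.randn(4, 10))

# activations now holds one tensor per submodule: "0", "1", "2"
```

This way the normal `forward` stays untouched, and the same hooks work on models you did not write yourself.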