Visualise CNNs: how to `unbatchnormalise`

I’m in the process of reproducing the seminal paper Visualizing and Understanding Convolutional Networks by Zeiler and Fergus (a bit old, 2013), mostly to visualise my own networks and improve their training.

Now, the paper basically requires this:

  1. A max unpool layer (approximated in torch by `MaxUnpool2d`, which is only a partial inverse since the non-maximal values are lost)
  2. then a ReLU
  3. then a transposed convolution, which is also available in torch as `ConvTranspose2d` (see the sketch just after this list)
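
For concreteness, here is a minimal sketch of that stack in PyTorch; the layer sizes are made up, so adapt them to your own network:

    import torch
    import torch.nn as nn

    # The forward pool must be created with return_indices=True so the
    # switch locations can be reused by the unpool.
    pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
    unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)
    relu = nn.ReLU()
    uconv = nn.ConvTranspose2d(in_channels=64, out_channels=3, kernel_size=3, padding=1)

    x = torch.randn(1, 64, 32, 32)
    pooled, indices = pool(x)    # keep the switches from the forward pass
    y = unpool(pooled, indices)  # non-maximal positions come back as zeros
    y = relu(y)
    y = uconv(y)                 # (1, 3, 32, 32) reconstruction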

The code currently looks like:

    def deconv1(self, x: torch.Tensor) -> torch.Tensor:
        x = self.maxupool(x)
        x = self.relu(x)
        x = self.batch1(x)  # <----------- probably incorrect
        x = self.uconv1(x)
        # maybe scale it up here?
        return x

My only issue is that there isn’t an obvious layer to undo the batch normalisation; reapplying `self.batch1` as above just normalises the activations again rather than inverting the forward transform.

Do you have any idea whether there is a built-in way to perform such an operation?
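
In case it helps frame the question, here is a rough sketch of the manual inversion I could fall back on, assuming the layer is a `BatchNorm2d` in eval mode (so it applies its running statistics) with the default affine parameters, and that no `weight` entry is zero:

    import torch

    def unbatchnorm(y: torch.Tensor, bn: torch.nn.BatchNorm2d) -> torch.Tensor:
        # Invert y = (x - mean) / sqrt(var + eps) * weight + bias,
        # using the running statistics the layer applies in eval mode.
        mean = bn.running_mean.view(1, -1, 1, 1)
        var = bn.running_var.view(1, -1, 1, 1)
        weight = bn.weight.view(1, -1, 1, 1)  # gamma
        bias = bn.bias.view(1, -1, 1, 1)      # beta
        return (y - bias) / weight * torch.sqrt(var + bn.eps) + mean

This would replace the `self.batch1(x)` call in `deconv1`, but it feels hand-rolled, hence the question.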

One way, though I am somewhat wary of it, would be to keep the batch norm as it is currently applied in the code and then scale the images back up manually by multiplying by 255, with something like this:
    x = x.mul(255.0).clamp(0.0, 255.0)
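
Alternatively, a per-image min-max rescale would sidestep inverting the normalisation exactly; a small sketch of that idea (the helper name is just for illustration):

    def to_image(x: torch.Tensor) -> torch.Tensor:
        # Rescale a single reconstruction to [0, 255] by min-max,
        # rather than undoing the normalisation analytically.
        x = x - x.min()
        x = x / (x.max() + 1e-8)  # guard against an all-zero map
        return x.mul(255.0).clamp(0.0, 255.0)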