How can I normalize activations across batches?

My task requires a batch size of 1, so I can’t normalize activations within the batch. Is there a way to normalize the activations across all of the batches in an epoch instead? I expect training would improve considerably if the activations were normalized, but I haven’t found a way to do that.
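
For context, here is a minimal sketch of the kind of training loop I mean (the model, layer sizes, and random data are just placeholders, not my actual task): with batch_size=1, each BatchNorm layer only ever sees one sample per step, so the “batch” statistics it normalizes with come from that single example.

```python
import torch
import torch.nn as nn

# Placeholder model: with batch_size=1, BatchNorm2d computes its
# per-step statistics from a single sample only.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),   # statistics come from one sample per step
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 1),
)

# Placeholder random data standing in for my real dataset.
data = torch.utils.data.TensorDataset(
    torch.randn(64, 3, 32, 32), torch.randn(64, 1)
)
loader = torch.utils.data.DataLoader(data, batch_size=1)  # task constraint

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```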

Thank you