Issue with multiple Batch Normalization layers in a U-Net

So I have a U-Net architecture with 4 encoder blocks and 4 decoder blocks, plus max pooling and a bottleneck, all implemented in a single class. I wanted to apply Batch Normalization after each convolutional layer within every block, but once I do that, the validation loss no longer decays along with the training loss and stays relatively flat. However, when I apply a single BN to just one of the blocks, training runs with no issues. I'm not sure what causes this problem or how it can be fixed.
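
To make the setup concrete, here is a simplified sketch of what I mean by "BN after each convolutional layer" (my actual code keeps everything in one class; the `DoubleConv` name and channel arguments here are just for illustration):

```python
import torch.nn as nn

class DoubleConv(nn.Module):
    """One U-Net block: two 3x3 convolutions, each followed by BatchNorm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            # bias=False because the following BatchNorm has its own learned shift
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```

Each of the 4 encoder blocks, the bottleneck, and the 4 decoder blocks follows this conv-BN-ReLU pattern; in the single-BN variant that trains fine, only one block contains the `BatchNorm2d` layers.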