Group Norm vs Batch Norm

Hello everyone, I am currently working on a project where I replaced batch normalization with group norm so that I can train with a batch size of 1.
However, the model now seems to fail on one specific sample during training, which did not happen with batch norm.
For example, my validation IoU goes 0.9, 0.91, and then suddenly drops to 0.07, and the model does not seem to improve on this sample during training. The model did not fail like this when training with batch norm.
I know there could be many reasons, but I suspect it is due to replacing batch norm with group norm, or possibly due to using a batch size of 1. Is there a difference in group norm that could have caused this problem? And is there a possible solution?
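For reference, this is a minimal sketch of the kind of replacement I mean (the conv block and the group count of 8 are just placeholders, not my actual architecture):

```python
import torch
import torch.nn as nn

# Original block using batch norm (normalizes over the batch dimension,
# so it struggles with a batch size of 1):
bn_block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

# Same block with group norm, which normalizes over channel groups
# per sample and therefore works with a batch size of 1:
gn_block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=64),  # 8 groups of 8 channels each
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 3, 32, 32)  # batch size 1
print(gn_block(x).shape)
```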

Thank you!

The issue might arise from the changed norm layers, but I haven’t seen a similar issue before.
You could try to isolate the problematic sample and check the outputs as well as the internal statistics of the norm layers to see whether these layers are responsible for the drop in accuracy.
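Something like this could be used to record the output statistics of each GroupNorm layer for the failing sample via forward hooks. It's a minimal sketch: `model` and `problem_sample` below are just stand-ins for your trained model and the isolated sample that triggers the IoU drop.

```python
import torch
import torch.nn as nn

# Stand-in model and input: replace these with your own model and the
# isolated sample that causes the IoU drop.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=8, num_channels=64),
    nn.ReLU(inplace=True),
)
problem_sample = torch.randn(1, 3, 256, 256)

activation_stats = {}

def make_hook(name):
    def hook(module, inp, out):
        # Record simple statistics of this norm layer's output
        activation_stats[name] = {
            "mean": out.mean().item(),
            "std": out.std().item(),
            "min": out.min().item(),
            "max": out.max().item(),
        }
    return hook

# Attach a forward hook to every GroupNorm layer
handles = [
    module.register_forward_hook(make_hook(name))
    for name, module in model.named_modules()
    if isinstance(module, nn.GroupNorm)
]

model.eval()
with torch.no_grad():
    _ = model(problem_sample)

for name, stats in activation_stats.items():
    print(name, stats)

# Remove the hooks afterwards
for h in handles:
    h.remove()
```

If the recorded means or stds for the failing sample look very different from those of samples the model handles well, that would point to the norm layers (or the sample itself) as the cause.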