PyTorch suddenly slows down at a batch

I am training my network. It works well and fast when I use only part of the train and test datasets. However, if I use the full dataset, it starts at the normal fast speed but suddenly slows down at a certain batch in epoch 1…

Is there some resource that fills up at that point? I also notice that at the beginning of epoch 2 it speeds up again, then slows down at some point later…

You could profile your code to narrow down where the bottleneck appears. E.g., maybe your system is overheating and is thus reducing its clock speeds.
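As a starting point, a minimal sketch for finding exactly which batch gets slow is to time each iteration, calling `torch.cuda.synchronize()` before reading the clock so the measurement isn't hidden by CUDA's asynchronous execution. The model, optimizer, and loader below are toy stand-ins for whatever you are actually training:

```python
import time

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins; replace with the real model and DataLoader being debugged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
loader = DataLoader(
    TensorDataset(torch.randn(10_000, 128), torch.randint(0, 10, (10_000,))),
    batch_size=64,
)

for i, (inputs, targets) in enumerate(loader):
    t0 = time.perf_counter()
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # flush queued GPU work so the timing is honest
    print(f"batch {i}: {time.perf_counter() - t0:.4f}s")
```

If the slow iterations line up with data loading (e.g. the first batches of each epoch are fast while later ones stall), the `DataLoader` is a likely culprit. For a finer breakdown of CPU vs. GPU time you could wrap the loop in `torch.profiler.profile`, and `nvidia-smi` will show whether the GPU clocks are being throttled for thermal reasons.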