Training slowing down with time

I’m training a custom CNN architecture on a change detection dataset (http://www.changedetection.net/). There are 35,000 training images in total. One epoch took 7 hours in Lasagne but 15+ hours in PyTorch. To load the data, I use the DataLoader class from torch.utils.data. The code runs on a Tesla K20c GPU with 5 GB of memory; a batch size of 5 occupies 4.5 GB, and num_workers = 4.
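For context, here is a minimal sketch of the kind of setup I’m describing; the dataset class, directory layout, and image size below are placeholders, not my actual code:

```python
from pathlib import Path
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

DATA_ROOT = Path("/data/changedetection")  # placeholder path

class ChangeDetectionDataset(Dataset):
    """Hypothetical stand-in for the custom dataset class."""
    def __init__(self, root):
        self.image_paths = sorted((root / "images").glob("*.png"))
        self.label_paths = sorted((root / "labels").glob("*.png"))
        # Fixed size keeps the default collate function happy; the size is a guess.
        self.transform = transforms.Compose([
            transforms.Resize((240, 320)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Every __getitem__ call hits the disk, so I/O speed matters here.
        image = self.transform(Image.open(self.image_paths[idx]).convert("RGB"))
        label = self.transform(Image.open(self.label_paths[idx]).convert("L"))
        return image, label

# Settings from above: batch size 5, four worker processes.
loader = DataLoader(ChangeDetectionDataset(DATA_ROOT), batch_size=5,
                    num_workers=4, shuffle=True)
```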

Are you loading the data in sequential order in Lasagne? If shuffle=True is set, PyTorch’s DataLoader reads the Dataset in a randomly shuffled order, and the resulting random access pattern can slow down disk reads if you are using an HDD (instead of an SSD).
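One way to check whether this is the bottleneck is to time a pure data-loading pass with and without shuffling (shuffle is a standard DataLoader argument). A rough sketch, assuming dataset is the same Dataset object you train on:

```python
import time
from torch.utils.data import DataLoader

def time_one_pass(dataset, shuffle):
    loader = DataLoader(dataset, batch_size=5, num_workers=4, shuffle=shuffle)
    start = time.time()
    for _ in loader:  # iterate without any training, to isolate the I/O cost
        pass
    return time.time() - start

print("sequential:", time_one_pass(dataset, shuffle=False))
print("shuffled:  ", time_one_pass(dataset, shuffle=True))
```

If the shuffled pass is much slower, the disk (not the model) is the limiting factor.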

The data is mounted on the current machine but physically resides on a different one, so it is read over a network mount.
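If the slow reads come from the network mount, one workaround I could try is to stage the data on a local disk once before training. A sketch, with placeholder paths:

```python
import shutil
from pathlib import Path

remote_root = Path("/mnt/remote/changedetection")  # placeholder: network-mounted copy
local_root = Path("/tmp/changedetection")          # placeholder: fast local disk

# Copy once; later epochs then read from local storage instead of the network.
if not local_root.exists():
    shutil.copytree(remote_root, local_root)
```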

Hi, I ran into the same problem. Have you solved it?