I know that increasing `num_workers` will decrease the total time taken to read batches, but I found that the actual training step (`scores = Model(batch)`) takes longer and longer as training continues inside the loop `for i, batch in enumerate(train_data_loader):` (it starts small but keeps increasing).
On the other hand, if `num_workers` equals 0, the training time stays constant throughout the loop.
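To make the symptom concrete, here is a minimal sketch of how the per-iteration time can be split into "waiting for the next batch" versus "forward pass". Everything here is a stand-in: a plain generator replaces the real `DataLoader`, and `sum` plays the role of `Model`; the names `run_epoch`, `load_times`, and `compute_times` are made up for illustration.

```python
import time

def run_epoch(batches, model):
    """Time data loading and compute separately for each iteration."""
    load_times, compute_times = [], []
    it = iter(batches)
    while True:
        t0 = time.perf_counter()
        try:
            batch = next(it)       # time spent waiting for the next batch
        except StopIteration:
            break
        t1 = time.perf_counter()
        scores = model(batch)      # stand-in for the forward pass
        t2 = time.perf_counter()
        load_times.append(t1 - t0)
        compute_times.append(t2 - t1)
    return load_times, compute_times

# Dummy inputs: a generator of 10 "batches", sum() as the "model".
batches = ([i] * 4 for i in range(10))
load_t, comp_t = run_epoch(batches, sum)
print(len(load_t), len(comp_t))  # 10 10
```

Comparing how `compute_times` evolves over the epoch with `num_workers=0` versus `num_workers>0` would show whether the slowdown really sits in the model step or in batch delivery.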
Is that normal, or is it happening only to me?