Multiprocessing using torch.multiprocessing

I’ve found a workaround for this problem: force the loader to sleep for 2 seconds after it finishes loading everything. This gives the trainer time to finish training before the loader returns and shuts everything down. I can’t explain why the behavior was this way, though.
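In case it helps, here is a minimal sketch of what that workaround looks like. The loader/trainer split, the queue-based hand-off, and the sentinel value are assumptions about the original setup (the original code isn’t shown); only the `time.sleep(2)` at the end of the loader is the actual workaround described above.

```python
import time
import torch.multiprocessing as mp


def loader(queue, num_batches):
    # Assumed loader: pushes (placeholder) batches onto a shared queue.
    for i in range(num_batches):
        queue.put(i)
    queue.put(None)  # sentinel so the trainer knows loading is done
    # Workaround: keep the loader process alive a little longer so it
    # doesn't tear things down before the trainer finishes consuming.
    time.sleep(2)


def trainer(queue):
    # Assumed trainer: drains the queue until it sees the sentinel.
    while True:
        batch = queue.get()
        if batch is None:
            break
        # ... training step on `batch` would go here ...


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    q = mp.Queue()
    p_loader = mp.Process(target=loader, args=(q, 100))
    p_trainer = mp.Process(target=trainer, args=(q,))
    p_loader.start()
    p_trainer.start()
    p_loader.join()
    p_trainer.join()
```

Note that the 2-second value is arbitrary; a cleaner fix would be to have the loader wait on an explicit signal (e.g. an `mp.Event`) from the trainer instead of a fixed sleep.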