Hi, I have a question about using a DataLoader. I need to load a large file (about 15 GB) into memory with `torch.load` in my data loader's `initialize()` function, and the DataLoader then fetches items from it. My question is: when I train on multiple GPUs with `torch.distributed`, will every process load an independent copy of this file into its own memory, or will all processes share one copy? (i.e., is the total memory usage 15 GB * num_gpus, or only 15 GB?) If it is 15 GB * num_gpus, can someone give me some suggestions on how to handle this situation? Thank you very much!
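For context, here is a minimal sketch of the setup I mean (names like `BigFileDataset` are hypothetical, and `pickle` stands in for `torch.load` just so the snippet is self-contained; the real file is a ~15 GB tensor file):

```python
import os
import pickle
import tempfile

class BigFileDataset:
    """Hypothetical sketch of the dataset described above:
    the entire file is read into memory once, in the constructor."""

    def __init__(self, path):
        with open(path, "rb") as f:
            # pickle stands in for torch.load(path) here so the sketch
            # runs without PyTorch; the real file would be ~15 GB.
            self.data = pickle.load(f)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # The DataLoader calls this to fetch individual items.
        return self.data[idx]

# Tiny stand-in file for demonstration.
path = os.path.join(tempfile.mkdtemp(), "big.pkl")
with open(path, "wb") as f:
    pickle.dump([10, 20, 30], f)

ds = BigFileDataset(path)
print(len(ds), ds[1])  # 3 20
```

Under `torch.distributed`, each GPU's process would construct its own `BigFileDataset` like this, which is exactly why I'm asking whether the in-memory data ends up duplicated per process.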