I’m dealing with CIFAR10 and I use torchvision.datasets to create it. I need the GPU to accelerate the computation, but I can’t find a way to move the whole dataset onto the GPU at once. My model needs mini-batches, and transferring each batch separately is really time-consuming.
You can write a dataset class where, in the __init__ function, you read the entire dataset, apply all the transformations you need, and convert it to tensor format. Then send that tensor to the GPU (assuming there is enough memory). In the __getitem__ function you simply use the index to retrieve elements of the tensor, which is already on the GPU.
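A minimal sketch of this idea (the class name is illustrative, and random tensors with CIFAR10-like shapes stand in for the actual download so the snippet is self-contained; swap in `torchvision.datasets.CIFAR10` as noted in the comments):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class GPUTensorDataset(Dataset):
    """Holds the entire dataset as tensors on a chosen device.

    In __init__ you would normally load the real data, e.g.:
        ds = torchvision.datasets.CIFAR10(root, train=True, download=True)
        data = torch.tensor(ds.data).permute(0, 3, 1, 2).float().div_(255)
        targets = torch.tensor(ds.targets)
    Random tensors of the same shape are used here so the sketch runs anywhere.
    """
    def __init__(self, device="cuda" if torch.cuda.is_available() else "cpu"):
        # Pretend-CIFAR10: 100 RGB images of 32x32 plus integer labels,
        # created directly on the target device.
        self.data = torch.rand(100, 3, 32, 32, device=device)
        self.targets = torch.randint(0, 10, (100,), device=device)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Indexing a tensor that already lives on the GPU:
        # no host-to-device copy happens per batch.
        return self.data[idx], self.targets[idx]

dataset = GPUTensorDataset()
# num_workers must stay 0: CUDA tensors can't be shared across worker processes.
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=0)
images, labels = next(iter(loader))
print(images.shape, images.device)
```

One caveat worth knowing: with the data already on the GPU, keep `num_workers=0` in the DataLoader, since worker subprocesses can’t hand CUDA tensors back to the main process.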
That’s wonderful. I’m going to try it, and I have a feeling it will work :) Thank you!
Sure, let me know if it doesn’t help.