How can I feed big data into PyTorch when CPU memory is limited?

My training dataset is 20 GB, saved in a .mat file. Before I train the model, I need to convert the arrays to tensors, and this raises an out-of-memory error because there isn't enough RAM. Can PyTorch consume the arrays directly? I think the tensors take up too much memory. How can I convert this big dataset to tensors and train the model successfully?

I don’t think PyTorch has a default implementation for this. How about splitting the .mat file into several parts and loading them one at a time?
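As an alternative to splitting the file: if the .mat file was saved with MATLAB's `-v7.3` flag, it is an HDF5 file underneath, so you can read individual samples lazily with `h5py` inside a custom `Dataset` instead of loading the whole array into RAM. Only one batch is ever converted to tensors at a time. A minimal sketch, assuming the file stores the inputs under key `"X"` and labels under key `"y"` (the path and keys below are placeholders, not from your file):

```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class LazyMatDataset(Dataset):
    """Reads samples on demand from a v7.3 (HDF5-based) .mat file,
    so only the current batch is held in memory."""

    def __init__(self, path, x_key="X", y_key="y"):
        self.path = path
        self.x_key = x_key
        self.y_key = y_key
        # Open once just to read the dataset length, then close;
        # each DataLoader worker reopens its own handle lazily.
        with h5py.File(path, "r") as f:
            self.length = f[x_key].shape[0]
        self._file = None

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        if self._file is None:  # one file handle per worker process
            self._file = h5py.File(self.path, "r")
        # h5py only reads the indexed slice from disk here.
        x = torch.from_numpy(np.asarray(self._file[self.x_key][idx], dtype=np.float32))
        y = torch.from_numpy(np.asarray(self._file[self.y_key][idx], dtype=np.float32))
        return x, y

# Usage: batches are built from lazily read samples.
# loader = DataLoader(LazyMatDataset("train.mat"), batch_size=64, num_workers=2)
```

If the file was saved in an older .mat format (pre-7.3), `h5py` won't open it, and splitting it into several smaller files as suggested above (or re-saving it with `-v7.3`) is the way to go.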