CUDA out of memory in transfer learning with VGG16

Hello, I tried to run the code from this website about transfer learning using VGG16:
https://www.kaggle.com/carloalbertobarbano/vgg16-transfer-learning-pytorch
However, I get a CUDA out of memory error during the training process.
I already tried to mitigate this error by reducing batch_size and num_workers in torch.utils.data.DataLoader to small values, but it still runs out of memory.
Does anyone have a suggestion on how to choose a proper batch_size and num_workers?
Thank you very much.

My GPU is only a GTX 1050 with 2 GB of VRAM.
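For reference, this is roughly what I changed (the dataset and transform names below are placeholders, not the exact ones from the notebook):

```python
import torch
from torchvision import datasets, transforms

# Placeholder transforms/dataset; the actual setup comes from the Kaggle notebook
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_dataset = datasets.ImageFolder("data/train", transform=transform)

# Reduced batch_size and num_workers, but training still runs out of GPU memory
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=4,    # tried small values like 4 and 2
    shuffle=True,
    num_workers=0,   # tried 0 as well
)
```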

VGG16 is a very memory-heavy network. Try a lighter architecture instead (e.g. resnet18).
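A minimal sketch of what that swap could look like, assuming a similar transfer-learning setup to the notebook (the class count and optimizer settings are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a pretrained resnet18 and freeze its feature extractor
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for your classes
num_classes = 2  # placeholder; set to your dataset's class count
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = model.to(device)

# Only the new layer's parameters are trained
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

resnet18's activations are much smaller than VGG16's, and its classifier head is a single linear layer rather than VGG's huge fully connected layers, so it fits far more comfortably in 2 GB of VRAM.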