But I encounter the following error:
```
RuntimeError: CUDA out of memory. Tried to allocate 286.00 MiB (GPU 0; 4.00 GiB total capacity; 1.39 GiB already allocated; 227.40 MiB free; 1.97 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
I am not sure if I am using the GPU optimally. I have also attached a screenshot of the Task Manager. Please let me know if I am doing something wrong.
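For what it's worth, this is my understanding of the suggestion in the error message (the 128 MiB value is just a guess on my part; I'm not sure it is the right approach):

```python
import os

# The allocator reads this variable lazily, so set it before the first
# CUDA allocation (safest: before importing torch at all).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Release cached blocks held by PyTorch's caching allocator.
torch.cuda.empty_cache()
```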
Thanks for the reply. I had a follow-up query: is there any way to verify whether I am utilising the GPU capabilities of my system correctly? I am not able to understand the significance of the shared GPU memory here.
I'm not familiar enough with Windows, so I don't know what each metric shown by the Task Manager means.
The Cuda view in the Task Manager should show the compute utilization, i.e. the GPU utilization while PyTorch uses the device for computations.
Alternatively, use nvidia-smi, which would show the same.
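If you want to check the memory usage programmatically as well, something like this should work (a quick sketch assuming a single GPU at index 0):

```python
import torch

device = torch.device("cuda:0")

# Memory occupied by live tensors vs. memory cached by the allocator.
allocated = torch.cuda.memory_allocated(device) / 1024**2
reserved = torch.cuda.memory_reserved(device) / 1024**2
total = torch.cuda.get_device_properties(device).total_memory / 1024**2

print(f"allocated: {allocated:.1f} MiB")
print(f"reserved:  {reserved:.1f} MiB")
print(f"total:     {total:.1f} MiB")
```

Note that nvidia-smi reports the reserved (cached) memory as used, since PyTorch holds on to freed memory via its caching allocator instead of returning it to the driver.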