RuntimeError: CUDA error: out of memory

Hey.

I started working on a new virtual machine at my workplace and did all the necessary installations, but currently I am unable to create a simple CUDA tensor. I don't know which requirement I am missing.

I am using Windows Server 2016 and just installed the latest official PyTorch.

Attached is a screenshot; any guidance is highly appreciated.



Thanks a lot.

Could you check the memory usage on the server via nvidia-smi and make sure you have enough device memory left?
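
If it helps, here is a minimal sketch (assuming PyTorch imports fine on that machine) to check from inside Python how much memory this process has already taken on the device:

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda')
    # Memory this process has actually allocated for tensors (in MB)
    allocated = torch.cuda.memory_allocated(device) / 1024**2
    # Memory reserved by PyTorch's caching allocator (in MB)
    reserved = torch.cuda.memory_reserved(device) / 1024**2
    print(f"allocated: {allocated:.1f} MB, reserved: {reserved:.1f} MB")
else:
    print("CUDA is not available")
```

Note that nvidia-smi shows the memory used by all processes on the GPU, while the calls above only report what the current process allocated through PyTorch.
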
If you are using multiple GPUs, you can select a specific device via
x = torch.randn(1, device='cuda:id'), where you would have to replace id with the device ID.
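
For example (a small sketch; the device index 1 is just a placeholder, check torch.cuda.device_count() for what's actually available on your machine):

```python
import torch

print(torch.cuda.device_count())      # number of visible GPUs
print(torch.cuda.get_device_name(0))  # name of the first GPU

# Create the tensor on a specific GPU, e.g. the second one (index 1),
# assuming that index exists on your system
x = torch.randn(1, device='cuda:1')
print(x.device)
```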