Using GPU but it still uses the CPU

I tried to use the GPU in PyTorch after I formatted my computer (it worked before the format).
Then I tried this code:


and the output is:


When I try to do this:

import torch

x = torch.rand(250, 250)
x = x.cuda()
while True:
    y = x * x


tensor([[0.7734, 0.6112, 0.9969,  ..., 0.3762, 0.1328, 0.0282],
        [0.7798, 0.9056, 0.4914,  ..., 0.0698, 0.9631, 0.7149],
        [0.1020, 0.7871, 0.2783,  ..., 0.6885, 0.8713, 0.1423],
        ...,
        [0.4408, 0.7382, 0.0786,  ..., 0.5183, 0.6337, 0.5479],
        [0.2822, 0.8666, 0.2791,  ..., 0.6299, 0.9231, 0.6004],
        [0.7444, 0.7096, 0.2371,  ..., 0.3882, 0.2971, 0.2674]], device='cuda:0')

You can see my tensor is on the CUDA device, but when I check my GPU usage, nothing shows up.
It uses the CPU instead.

Can you guys tell me why?

ps. I have already installed my VGA driver and CUDA 9.2
ps2. I use Jupyter Notebook (Anaconda) on Windows
ps3. It worked before I formatted my computer

The workload might be a bit too small to see some utilization in nvidia-smi.
Change the size to torch.randn(2500, 2500) and you should see some GPU utilization.
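A minimal sketch of such a heavier workload, assuming a CUDA-enabled PyTorch build (it falls back to the CPU otherwise). Note that CUDA kernels launch asynchronously, so `torch.cuda.synchronize()` is needed before timing or inspecting results:

```python
import torch

# A larger, longer-running workload so utilization actually shows up
# in nvidia-smi / Task Manager. Falls back to CPU if CUDA is missing.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2500, 2500, device=device)

iters = 500 if device.type == "cuda" else 5  # keep the CPU fallback quick
for _ in range(iters):
    y = x @ x  # a matmul keeps the GPU busier than an elementwise multiply

if device.type == "cuda":
    torch.cuda.synchronize()  # wait for all queued kernels to finish
print(y.shape, y.device)
```

While this loop runs, `nvidia-smi` (or the "Cuda" graph in Task Manager) should show sustained utilization on the selected GPU.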

I'm on Windows, so I use Task Manager to check it. And when I try a big tensor, I get a "cuda memory error".

Then it is way too big and you are overflowing the GPU memory. Try something in between.

Correct me if I’m wrong, but 2500*2500 float32 values should take (2500*2500*4)/1024**2 = 23.84MB?
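That back-of-the-envelope calculation can be checked in plain Python:

```python
# Memory footprint of a 2500x2500 float32 tensor:
# each float32 is 4 bytes; dividing by 1024**2 converts bytes to MB.
n = 2500
total_bytes = n * n * 4
total_mb = total_bytes / 1024**2
print(f"{total_mb:.2f} MB")  # -> 23.84 MB
```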
@Amornpat_Champa Is your GPU somehow already filled?
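One way to check whether the GPU is already filled, sketched with PyTorch's own memory counters (guarded so it also runs on a CPU-only machine): `memory_allocated()` counts bytes held by live tensors, while `memory_reserved()` counts bytes kept by PyTorch's caching allocator on the current device.

```python
import torch

# Check how much GPU memory PyTorch is currently holding.
if torch.cuda.is_available():
    allocated_mb = torch.cuda.memory_allocated() / 1024**2
    reserved_mb = torch.cuda.memory_reserved() / 1024**2
    print(f"allocated: {allocated_mb:.2f} MB, reserved: {reserved_mb:.2f} MB")
else:
    print("CUDA not available; nothing allocated on a GPU")
```

Note that other processes' allocations won't appear here; `nvidia-smi` shows total memory usage per GPU across all processes.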

Yep, you are right. It's around 24 MB.

You're right, but it's still not working.

Do you get the error after pushing the tensor to the GPU?
Could you post the complete error message?

Is it possible that the GPU is not freeing memory fast enough when processing the while 1 loop?

I didn't get any error message.

I don't think so, because I ran this code in another notebook and it works.

@adrianjav @ptrblck thank you both for your help.

How did you notice the “cuda memory error” without an error message?