Uploading functions and classes to the GPU in PyTorch

Hi to all,
Is it possible to upload an arbitrary function or class from an external module to the GPU in PyTorch?
I tried to do it with
I_Val = pf.linear_index(m, n).to(dev)
but it fails with:
AttributeError: 'list' object has no attribute 'to'
Please help!

No, you cannot move arbitrary objects to the GPU, as the underlying library or framework needs to support GPUs. Your current code snippet fails because you are calling .to() on a plain Python list, which has no notion of devices.
Instead, create a PyTorch tensor and move this object to the device.
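A minimal sketch of that fix, assuming `dev` is the target device and using a stand-in nested list in place of the one your function builds:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A plain Python list has no .to() method; convert it to a tensor first.
vals = [[1, 1, 1, 1, 1], [2, 1, 2, 0, 1]]
I_Val = torch.as_tensor(vals).to(dev)  # now lives on `dev`
```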

Hi,
def linear_index(m, n):
    Mu = []
    for i in range(m):
        for j in range(m):
            for k in range(m):
                for l in range(m):
                    for p in range(n):
                        x = m*m*m*n*i + m*m*n*j + m*n*k + n*l + p
                        Mu.append(x)
    I_Val_1 = []
    for i in Mu:
        a = i // (m*m*m*n)
        b = (i % (m*m*m*n)) // (m*m*n)
        c = ((i % (m*m*m*n)) % (m*m*n)) // (m*n)
        d = (((i % (m*m*m*n)) % (m*m*n)) % (m*n)) // n
        e = (((i % (m*m*m*n)) % (m*m*n)) % (m*n)) % n
        I_Val_1.append([a, b, c, d, e])
    return torch.as_tensor(I_Val_1)
This is the function I am talking about. In short, it builds nested lists like [[1,1,1,1,1],[2,1,2,0,1],…] and returns them as a tensor. If I call .to(dev) on the result, will it be on the GPU?

Your code is quite unreadable as it's not properly formatted, but I assume you want to call .to(device) on the returned tensor. If so, then yes, the result will be moved to the GPU.
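As a side note: assuming the goal of those nested loops is simply to enumerate every (a, b, c, d, e) index of an (m, m, m, m, n) grid, `torch.cartesian_prod` produces the same tensor without the manual encode/decode arithmetic. A sketch:

```python
import torch

def linear_index(m, n):
    # Enumerate all (a, b, c, d, e) indices of an (m, m, m, m, n) grid.
    # The last range varies fastest, matching the original loop order.
    ranges = [torch.arange(m)] * 4 + [torch.arange(n)]
    return torch.cartesian_prod(*ranges)  # shape: (m**4 * n, 5)
```

You can then move the result with `linear_index(m, n).to(dev)` as before.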

Hi,
I have a question: after my code finishes running on the GPU, the memory allocated on the GPU still remains. I want to release this memory. I tried del on the tensors, the model, etc., but it is still there.
I used nvtop to see the GPU usage and stats; after the script finishes it still shows 894MiB of GPU memory in use. How can I release it completely?
gauge_neural.py is the name of my Python script.

PID      USER     DEV  TYPE     GPU  GPU MEM     CPU  HOST MEM  Command
1265975  vchahar  0    Compute  0%   894MiB 8%   0%   2429MiB   python3 gauge_neural.py

The ~900MB could be used by the CUDA context and will be freed once you close your application.