Assigning torch.Tensor(x).cpu() to CUDA at runtime

I have a C++ application with a main thread and worker threads that call Python modules for inference.

From the main thread I can execute the lines below:

img = torch.Tensor(x).cpu().unsqueeze(0)  # x is the input buffer passed in from C++
img = img.cuda()                          # move the tensor to the GPU

The tensor switches to a CUDA tensor as expected.

But the same img.cuda() call crashes when made from a worker thread.
My understanding is that a worker thread doesn't have access to allocate CUDA memory, so it can't move a tensor from CPU to CUDA.
Is there any way to switch from cpu() to cuda() in a worker thread?

Hi,

All threads should have the same access to CUDA, so that should not be an issue.
What is the exact error message you're seeing?
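
As a sanity check, here is a minimal pure-Python sketch (run in a plain Python process, using a placeholder zeros tensor in place of your x) that moves a tensor to CUDA from both the main thread and a worker thread. Both calls should succeed:

import threading
import torch

def worker():
    # Same pattern as in the question, with a placeholder input instead of x.
    img = torch.zeros(3, 224, 224).cpu().unsqueeze(0)
    img = img.cuda()
    print("worker thread:", img.device)  # expected: cuda:0

t = threading.Thread(target=worker)
t.start()
t.join()

# The same call from the main thread.
img = torch.zeros(3, 224, 224).cpu().unsqueeze(0).cuda()
print("main thread:", img.device)

If this sketch works on your machine, the crash is probably specific to how the worker thread calls into Python from your C++ application rather than to CUDA access itself.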