Can a thread be assigned to each GPU?

In (Lua)Torch, we can assign a thread to each GPU like below:

net = nn.DataParallelTable(1, true, true)
   :add(net, {1, 2})
   :threads(function()
      require 'cudnn'
      cudnn.benchmark = true
      cudnn.fastest = true
   end)

Is it possible to do this in PyTorch?
If this is not supported now, do you have any plans to support it?

Also, when I used DataParallel, it seemed to me that PyTorch used only one thread.
The first GPU consumed almost all of its memory, while the others consumed only half of theirs.
Is this normal?

Unlike LuaTorch, we don't dispatch DataParallel via Python threads in PyTorch.

If this is not supported now, do you have any plans to support it?

No; we plan to (and already do) dispatch multi-GPU work differently, at high performance, without the user needing to do anything.
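For reference, here is a minimal sketch of the equivalent setup with nn.DataParallel in PyTorch (the model and layer sizes are arbitrary placeholders; device_ids assumes two visible GPUs, 0-indexed rather than the 1-indexed {1, 2} in the Lua snippet). No threading setup is needed on the user's side:

import torch
import torch.nn as nn

# Any model; the layers here are just placeholders.
net = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Wrap in DataParallel and move to GPU; device_ids picks the GPUs to use.
net = nn.DataParallel(net, device_ids=[0, 1]).cuda()

# Counterpart of cudnn.benchmark = true in LuaTorch: enable the cuDNN autotuner.
torch.backends.cudnn.benchmark = True

x = torch.randn(64, 128).cuda()   # the batch is split across the GPUs
y = net(x)                        # replicate -> scatter -> parallel apply -> gather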

Also, when I used DataParallel, it seemed to me that PyTorch used only one thread.

This is irrelevant to the user; we are working on improving the internals and multi-GPU performance.

The first GPU consumed almost all of its memory, while the others consumed only half of theirs.
Is this normal?

Depending on how many parameters your model has, this is possible: the default GPU holds the original copy of the parameters, and it is also where inputs are scattered from and outputs are gathered to, so it carries extra memory overhead compared to the other devices.
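If you want to see how the memory is actually distributed, a quick sketch (assuming a PyTorch version that provides torch.cuda.memory_allocated; exact numbers depend on the caching allocator state):

import torch

# After a forward/backward pass, print how much tensor memory is
# currently allocated on each visible GPU.
for i in range(torch.cuda.device_count()):
    mb = torch.cuda.memory_allocated(i) / 1024 ** 2
    print(f"GPU {i}: {mb:.1f} MiB allocated")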

Thanks for your reply.