Newbie question: how does PyTorch manage parallel computing on a single GPU?

Hello to everybody,
I'm new to PyTorch and I have a question about moving computations onto the GPU.
Suppose we have two different operations that involve different tensors, for example something like this:

z1 = x * y # first operation
z2 = x + y # second operation

where x and y are two tensors on the same device.
My question is: will the GPU calculate z1 and z2 in parallel?
If not, how can I force the GPU to perform these two operations at the same time?

Thanks in advance!

No, it doesn't do both computations at the same time: operations submitted to the same CUDA stream (here, the default stream) execute sequentially on the GPU. There is no high-level mechanism exposed to run them in parallel. (There is an advanced mechanism using CUDA streams that allows this in PyTorch, but it is too error-prone for most users.)
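For completeness, here is a minimal sketch of the CUDA-streams mechanism mentioned above, using the question's x, y, z1, z2. It assumes a CUDA device is available and falls back to plain sequential CPU execution otherwise. Note that even with two streams, overlap is not guaranteed: each kernel here may already saturate the GPU, in which case the scheduler runs them back to back anyway.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)

if device == "cuda":
    # Each stream is an independent queue of GPU work; kernels on
    # different streams *may* overlap if resources allow.
    s1 = torch.cuda.Stream()
    s2 = torch.cuda.Stream()
    with torch.cuda.stream(s1):
        z1 = x * y  # first operation, queued on stream s1
    with torch.cuda.stream(s2):
        z2 = x + y  # second operation, queued on stream s2
    # Wait for both streams before using the results — omitting
    # synchronization is exactly the kind of error-prone part
    # mentioned above.
    torch.cuda.synchronize()
else:
    z1 = x * y
    z2 = x + y
```

In practice this is rarely worth it: for most workloads a single large kernel uses the GPU fully, and the streams machinery just adds subtle race-condition risks.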