PyTorch parallel calculations

Two questions:

  1. Does PyTorch perform any internal calculations in parallel, for example in a Linear layer or in broadcasting?
  2. I have code like this:
x = [x1, x2, x3]
res = Variable(torch.zeros(3))
for i in range(3):
    res[i] = myfunc(x[i])
return res

But in my opinion it looks a little ugly, and I would like some parallel calculation (x1, x2, x3 are independent variables). How can I make it better?

Could you share some information about myfunc?
Depending on its operation, maybe one could apply some SIMD-style operations.
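For example, if myfunc is purely elementwise, the three inputs can be stacked and processed in one batched call. A sketch with a hypothetical stand-in for myfunc:

```python
import torch

def myfunc(xi):
    # hypothetical stand-in: any purely elementwise computation
    return xi.exp() + 1.0

x1, x2, x3 = torch.randn(5), torch.randn(5), torch.randn(5)

# one batched call instead of three sequential ones
batched = torch.stack([x1, x2, x3])   # shape [3, 5]
res = myfunc(batched)                 # still shape [3, 5]
```

The batched call lets the backend apply the elementwise kernels over all three inputs at once.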

Something like this:

prior = Linear(...)
dec = Linear(...)
def myfunc(xi):
    z_prior = prior(h)
    z = xi * z_prior[:, compressed_size:].exp() + z_prior[:, :compressed_size]
    return dec(torch.cat([h, z], dim=1))

xi is one of x1, x2, x3. For the moment they can be replaced by Variable(torch.randn(…)), so this function has no parameters.

Ok, thanks for the code. Could you give me the shapes of the layers and Variables?

I tried to guess the shapes, but couldn’t figure it out.

Sorry, yesterday I made several mistakes. Here is a fixed prototype:

import torch
import torch.nn as nn
from torch.autograd import Variable

hidden_size = 4
compressed_size = 3
input_size = 5
batch_size = 1

prior = nn.Linear(hidden_size, compressed_size * 2)
dec = nn.Linear(hidden_size + compressed_size, input_size)
state = Variable(torch.ones(batch_size, hidden_size))
result = Variable(torch.ones(3, input_size))

def myfunc():
    z_prior = prior(state)
    z = Variable(torch.randn(batch_size, compressed_size)) * z_prior[:, compressed_size:].exp() + z_prior[:, :compressed_size]
    return dec(torch.cat([state, z], dim=1))

for i in range(3):
    result[i] = myfunc()

The first part of the z calculation seems to be xi.
Wouldn’t it work if you just initialized state with Variable(torch.ones(batch_size*3, hidden_size)) and used z = Variable(torch.randn(batch_size*3, compressed_size)) * ...?

In this way a simple call to myfunc would result in a Tensor with dims [3, 5].

No, state is calculated elsewhere and is used in other places too.
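Even if state arrives from elsewhere with shape [1, hidden_size], it can still be viewed as a batch of three identical rows just for this call, without copying it or changing where it is computed. A sketch using expand (which returns a view):

```python
import torch
import torch.nn as nn

hidden_size, compressed_size, input_size = 4, 3, 5

prior = nn.Linear(hidden_size, compressed_size * 2)
dec = nn.Linear(hidden_size + compressed_size, input_size)

state = torch.ones(1, hidden_size)  # computed elsewhere, batch_size = 1

# view the single state as three identical rows (no data copied)
s = state.expand(3, hidden_size)
z_prior = prior(s)
z = torch.randn(3, compressed_size) * z_prior[:, compressed_size:].exp() \
    + z_prior[:, :compressed_size]
result = dec(torch.cat([s, z], dim=1))  # one call, shape [3, 5]
```

The original one-row state is untouched, so the rest of the code that reads it elsewhere is unaffected.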