Does TorchScript automatically optimize models to execute in parallel?

Suppose I have a model like this

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()  # required before registering submodules
        self.part1 = nn.Linear(10, 10)
        self.part2 = nn.Linear(10, 10)

    def forward(self, x):
        x1 = self.part1(x)
        x2 = self.part2(x)
        return x1 + x2

In eager mode, I'm sure the two branches are not executed in parallel, but when I torch.jit.script the model, does TorchScript automatically turn them into async calls?
If so, does the optimization depend on whether the device is CPU or CUDA?

No, the ops are executed serially; TorchScript does not automatically add inter-op parallelism. There may still be intra-op parallelism within each individual op.
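
If you want the two branches to run concurrently, you can request inter-op parallelism explicitly with torch.jit.fork / torch.jit.wait, which schedule the forked call on the inter-op thread pool once the module is scripted. A minimal sketch (ParallelModel and its layers are just illustrative, not from the original post):

import torch
import torch.nn as nn

class ParallelModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(10, 10)
        self.part2 = nn.Linear(10, 10)

    def forward(self, x):
        # Schedule part1 on the inter-op thread pool; under
        # torch.jit.script this returns a Future immediately.
        fut = torch.jit.fork(self.part1, x)
        x2 = self.part2(x)        # runs on the calling thread meanwhile
        x1 = torch.jit.wait(fut)  # block until the forked branch finishes
        return x1 + x2

model = torch.jit.script(ParallelModel())
out = model(torch.randn(2, 10))

Note that fork is only guaranteed to execute asynchronously under TorchScript; in eager mode it runs synchronously. The size of the inter-op thread pool can be set with torch.set_num_interop_threads, while intra-op parallelism (e.g. inside a single matmul on CPU) is controlled by torch.set_num_threads. On CUDA, kernel launches are already asynchronous with respect to the host.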
