Can I run two subnetworks of one model in parallel?

I have a network with four subnetworks: A_model, B_model, C_model, D_model

During the forward pass:

  out_A = A_model(input)
  out_B = B_model(out_A)
  out_C = C_model(out_A)
  out_D = D_model(out_B, out_C)

B_model and C_model both take out_A as input, but they are independent of each other.

As I understand it, PyTorch executes these lines serially, right? If B_model and C_model could run in parallel (e.g., via multi-threading), would that save a lot of time?


Very, very, very interested in this! Anyone?
I’m hoping there is a way to parallelize this on a single GPU (which should generalize to multi-GPU and multi-node setups), but I lack the details to judge whether that’s possible.

Briefly, you need either multiple torch.cuda.stream contexts, or torch.jit.fork (in JIT-compiled code), which also splits the work across the CPU threads that enqueue the CUDA operations. Unfortunately, the speedup on a single GPU may be limited if GPU utilization in the affected code is already high, or if the code fragments are small. Rough sketches of both approaches are below.
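
Here is a minimal sketch of the stream-based variant. The nn.Linear/nn.Bilinear submodules and their sizes are placeholders I made up for illustration; substitute your real models. The wait_stream calls encode the dependencies: B and C wait for out_A, and D waits for both branches.

  import torch
  import torch.nn as nn

  device = torch.device("cuda")

  # Placeholder submodules, sized arbitrarily for the example.
  A_model = nn.Linear(64, 64).to(device)
  B_model = nn.Linear(64, 64).to(device)
  C_model = nn.Linear(64, 64).to(device)
  D_model = nn.Bilinear(64, 64, 64).to(device)

  stream_B = torch.cuda.Stream()
  stream_C = torch.cuda.Stream()

  x = torch.randn(32, 64, device=device)
  out_A = A_model(x)  # enqueued on the default stream

  # Neither side stream may start before out_A is ready.
  stream_B.wait_stream(torch.cuda.current_stream())
  stream_C.wait_stream(torch.cuda.current_stream())

  with torch.cuda.stream(stream_B):
      out_A.record_stream(stream_B)  # keep out_A's memory valid on this stream
      out_B = B_model(out_A)
  with torch.cuda.stream(stream_C):
      out_A.record_stream(stream_C)
      out_C = C_model(out_A)

  # The default stream waits for both branches before D_model consumes them.
  torch.cuda.current_stream().wait_stream(stream_B)
  torch.cuda.current_stream().wait_stream(stream_C)

  out_D = D_model(out_B, out_C)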
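
And a sketch of the jit.fork variant, again with placeholder submodules. fork only runs asynchronously inside TorchScript, which is why the module is wrapped with torch.jit.script.

  import torch
  import torch.nn as nn

  class Net(nn.Module):
      def __init__(self):
          super().__init__()
          # Placeholder submodules, sized arbitrarily for the example.
          self.A_model = nn.Linear(64, 64)
          self.B_model = nn.Linear(64, 64)
          self.C_model = nn.Linear(64, 64)
          self.D_model = nn.Bilinear(64, 64, 64)

      def forward(self, x):
          out_A = self.A_model(x)
          # Launch B_model on a separate interop thread; C_model runs
          # on the current thread in the meantime.
          fut_B = torch.jit.fork(self.B_model, out_A)
          out_C = self.C_model(out_A)
          out_B = torch.jit.wait(fut_B)
          return self.D_model(out_B, out_C)

  net = torch.jit.script(Net())
  out_D = net(torch.randn(32, 64))

Note that the degree of CPU-side parallelism here is bounded by the interop thread pool (see torch.set_num_interop_threads).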