Parallelizing Asynchronous For-Loop in Model Forward

Hello, how can I make sure that the branches in the following forward function are processed in parallel?

Note: for the purposes of this question, you can treat each Network() nn.Module branch of the main network simply as an nn.Linear().
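To make the snippet below self-contained, here is a minimal placeholder for Network(); the single linear layer and its dimensions are just stand-ins, not the actual architecture from the repo:

import torch.nn as nn

class Network(nn.Module):
    def __init__(self, dim=64):
        super(Network, self).__init__()
        # placeholder branch: a single linear layer (dimensions are arbitrary)
        self.layer = nn.Linear(dim, dim)

    def forward(self, x):
        return self.layer(x)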

The result of each branch does not depend on the others, so the branches should be able to run in parallel.

import torch.nn as nn

class BigNetwork(nn.Module):
    def __init__(self, k = 16):
        super(BigNetwork, self).__init__()

        # build k independent branches; each one is wrapped in nn.DataParallel
        # individually and moved to the GPU (see the note below)
        self.networks = []
        for i in range(k):
            self.networks.append(nn.DataParallel(Network()).cuda())

    def forward(self, x):
        # run every branch on the same input and collect the outputs
        return [net(x) for net in self.networks]

The full code is available here: https://github.com/kayuksel/pytorch-ars/blob/master/ars_dataparallel.py

Note: nn.DataParallel was also unable to handle the branches that I created with a for-loop, so I had to wrap each Network() branch initialized in BigNetwork() with nn.DataParallel() individually as well.
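For reference, this is the kind of approach I am wondering about: launching each branch on its own CUDA stream so that their kernels can overlap on the GPU. It is only a sketch under simplifying assumptions (single GPU, no nn.DataParallel, and Network() is the placeholder above), and I am not sure whether it actually gives concurrent execution in practice:

import torch
import torch.nn as nn

class BigNetwork(nn.Module):
    def __init__(self, k = 16):
        super(BigNetwork, self).__init__()
        # register the branches so their parameters are tracked
        self.networks = nn.ModuleList([Network() for _ in range(k)])
        # one CUDA stream per branch so their kernels can potentially overlap
        self.streams = [torch.cuda.Stream() for _ in range(k)]

    def forward(self, x):
        outputs = [None] * len(self.networks)
        # make sure x is ready before the side streams start reading it
        torch.cuda.synchronize()
        for i, (net, stream) in enumerate(zip(self.networks, self.streams)):
            with torch.cuda.stream(stream):
                outputs[i] = net(x)
        # wait for all branches to finish before returning the results
        torch.cuda.synchronize()
        return outputs

(Here the model and the input would both have to live on the GPU, e.g. model = BigNetwork().cuda() and x = torch.randn(32, 64).cuda().) Would something like this actually overlap the branch computations, or do they still get serialized on the device?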
