Is there a way to feed an input to layers in parallel in PyTorch?

I’m implementing ELMo with PyTorch. I want to apply ELMo’s character CNNs, which have different filter sizes, to word matrices, and I’m looking for an efficient way to do it. Here’s my code:

# Variant 1: fill a preallocated tensor slice by slice
batch_size = word.size(0)
y = torch.zeros(batch_size, self.kernel_dim, device=word.device)

cnt = 0
for kernel in self.kernels:
    temp = kernel(word)                 # (batch, n_filters, out_len)
    pooled = torch.max(temp, dim=2)[0]  # max pooling over time
    y[:, cnt:cnt + pooled.size(1)] = pooled
    cnt += pooled.size(1)

# Variant 2: collect the pooled outputs, then torch.cat once
y = []
for kernel in self.kernels:
    temp = kernel(word)
    y.append(torch.max(temp, dim=2)[0])  # max pooling over time
y = torch.cat(y, dim=1)                  # cat once, outside the loop
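
To compare the speed of the two variants, I put together a small timing sketch with torch.utils.benchmark. The shapes and filter configuration below are just illustrative placeholders (ELMo-style widths), not my real model:

# Timing sketch: compare the two variants (illustrative shapes/filters only)
import torch
import torch.nn as nn
import torch.utils.benchmark as benchmark

batch_size, char_dim, seq_len = 32, 16, 50        # made-up sizes
filter_widths = [1, 2, 3, 4, 5, 6, 7]             # illustrative, ELMo-style
filter_counts = [32, 32, 64, 128, 256, 512, 1024]

kernels = nn.ModuleList(
    nn.Conv1d(char_dim, n, w) for w, n in zip(filter_widths, filter_counts)
)
kernel_dim = sum(filter_counts)
word = torch.randn(batch_size, char_dim, seq_len)

def fill_variant():
    y = torch.zeros(batch_size, kernel_dim)
    cnt = 0
    for kernel in kernels:
        pooled = torch.max(kernel(word), dim=2)[0]
        y[:, cnt:cnt + pooled.size(1)] = pooled
        cnt += pooled.size(1)
    return y

def cat_variant():
    return torch.cat([torch.max(k(word), dim=2)[0] for k in kernels], dim=1)

for fn in (fill_variant, cat_variant):
    t = benchmark.Timer(stmt="fn()", globals={"fn": fn})
    print(fn.__name__, t.timeit(100))

My expectation is that the convolutions dominate the runtime and the cat-vs-fill difference is minor, but I haven’t verified this.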

I have two questions. First, is there a way to feed the input to all of these layers in parallel? That would avoid the Python for loop and should make my code more efficient. Second, which one is faster: using torch.cat or filling a preallocated tensor?
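For the first question, one idea I came across is torch.jit.fork / torch.jit.wait, which launch each branch as an asynchronous task. As far as I understand, the tasks only actually run in parallel when the module is compiled with TorchScript (torch.jit.script); in plain eager mode, fork falls back to sequential execution. A minimal sketch of what the forward pass could look like:

# Sketch: launch each kernel as an asynchronous task
def run_kernel(kernel, word):
    return torch.max(kernel(word), dim=2)[0]  # conv + max pooling over time

futures = [torch.jit.fork(run_kernel, kernel, word) for kernel in self.kernels]
y = torch.cat([torch.jit.wait(f) for f in futures], dim=1)

Is this the right approach, or is there a better way?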
