Hi all!
I have a quick “validation” question. Given n parallel model heads, which do not share parameters and should be trained on different data, is it ok to append the outputs to a list and then call torch.cat on the list?
Here are a few lines of code to better illustrate the problem:
import torch
import torch.nn as nn

class DenseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Three independent heads, one per input channel; no shared parameters.
        self.dense_list = nn.ModuleList([nn.Linear(16, 16) for _ in range(3)])

    def forward(self, x):
        tcn_out = []
        for i, dense_i in enumerate(self.dense_list):
            # Head i only sees slice i of the input.
            o = dense_i(x[:, i, :])
            tcn_out.append(o)
        # Concatenate the per-head outputs along the feature dimension.
        return torch.cat(tcn_out, dim=1)

inp = torch.rand((32, 3, 16))
dense_net = DenseNet()
out = dense_net(inp)
print(out.shape)  # torch.Size([32, 48])
My concern is possible confusion during backpropagation due to mixing tensors and a Python list before the torch.cat call.
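For what it’s worth, here is a minimal sanity check I came up with, using a placeholder loss (just the sum of the output) to see whether gradients reach every head:

# Placeholder loss: check that every head receives a gradient after backward().
dense_net = DenseNet()
out = dense_net(torch.rand((32, 3, 16)))
out.sum().backward()
for i, head in enumerate(dense_net.dense_list):
    print(i, head.weight.grad is not None)  # prints True for each head

Is this kind of check enough, or can the intermediate list still cause problems?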
Thanks for your time!