Dear Community,
I was wondering whether one can for-loop over some encoder network (e.g. a ResNet-18 without the last fully connected layer) and afterwards concatenate the embeddings. A set of fully connected layers at the end then classifies each batch entity. Here is some example code.
# x.shape = (bs, i, c, h, w) where "i" is a stack of images for each batch entity
embs = []
for i in range(x.shape[1]):
    # encode each image in the stack with the shared encoder
    embs.append(self.encoder(x[:, i]))
# concatenate the embeddings along the feature dimension
x = torch.cat(embs, dim=1)
x = self.dense_layer1(x)
x = self.dense_layer2(x)
x = self.classifier(x)  # last dense layer
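For context, here is a fuller self-contained sketch of the module I have in mind. The torchvision ResNet-18 backbone, the ReLU nonlinearities, the layer sizes, and the number of images per entity are just placeholder assumptions, not fixed choices:

import torch
import torch.nn as nn
from torchvision import models

class StackClassifier(nn.Module):
    def __init__(self, num_images=4, num_classes=10):
        super().__init__()
        # ResNet-18 backbone with the last fully connected layer removed
        resnet = models.resnet18(weights=None)
        emb_dim = resnet.fc.in_features  # 512 for ResNet-18
        resnet.fc = nn.Identity()
        self.encoder = resnet
        # dense head on top of the concatenated embeddings
        self.dense_layer1 = nn.Linear(num_images * emb_dim, 256)
        self.dense_layer2 = nn.Linear(256, 128)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        # x.shape = (bs, i, c, h, w)
        embs = [self.encoder(x[:, i]) for i in range(x.shape[1])]
        x = torch.cat(embs, dim=1)
        x = torch.relu(self.dense_layer1(x))
        x = torch.relu(self.dense_layer2(x))
        return self.classifier(x)

# e.g. 2 entities, 4 images each:
# model = StackClassifier()
# out = model(torch.randn(2, 4, 3, 224, 224))  # -> shape (2, 10)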
Can autograd backpropagate the loss through this, or does anyone see any mistakes?
I would be thankful for any kind of feedback.
Best,
Paul