Thank you sir for your valuable comment.
I have tried to apply the encoder without the final forward layer (src = self.fc_o(src)), and I got the same error. The issue here is how to concatenate the layers' outputs dynamically. The updated code is below:
outputs = []
for layer in self.layers:
    src = layer(src, src_mask)     # each layer output: [batch size, src len, hid dim]
    outputs.append(src)
x = torch.cat(outputs, dim=1)      # concatenate the per-layer outputs along dim 1
x = self.fc_o(x)
Moreover, the objective of this type of combination is to apply routing by agreement using a capsule network, following this paper. The first step is to aggregate the encoder layers' outputs dynamically and then feed them to the CapsNet.
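For context, what I have in mind is roughly the sketch below. It only shows the aggregation step; the layer list and the CapsNet interface (a module that I assume accepts a tensor of shape [batch, n layers, src len, hid dim]) are placeholders on my side, not code from the paper. Here I stack the per-layer outputs instead of concatenating them, so the layer axis can serve as the capsule dimension:

import torch
import torch.nn as nn

class EncoderWithCapsAggregation(nn.Module):
    # Hypothetical wrapper: `layers` and `caps_net` are placeholder modules.
    def __init__(self, layers, caps_net):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.caps_net = caps_net  # assumed to take [batch, n_layers, src_len, hid_dim]

    def forward(self, src, src_mask):
        outputs = []
        for layer in self.layers:
            src = layer(src, src_mask)        # [batch, src_len, hid_dim]
            outputs.append(src)
        # Stack (rather than concatenate) so each encoder layer becomes one entry
        # along a separate "layer" axis that the capsule routing can operate over.
        stacked = torch.stack(outputs, dim=1)  # [batch, n_layers, src_len, hid_dim]
        return self.caps_net(stacked)          # routing by agreement inside caps_net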
Could you help with this case?