Using results of torch.concat in Forward Pass

Hey all,
I am working on a model in which a layer takes two inputs.
The first input is part of the input data, and the second input is embeddings (of other input data).

I’ve tried to combine the two using torch.concat(), which returns a tensor.
That tensor, however, cannot be used downstream without modifications;
I receive errors like:

TypeError: cannot assign 'torch.FloatTensor' as child module 'l1' (torch.nn.Module or None expected)
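A minimal sketch that reproduces the error (the layer name `l1` and the sizes here are just placeholders):

```python
import torch
import torch.nn as nn

# Once 'l1' is registered as a child module in __init__, nn.Module's
# __setattr__ refuses to overwrite it with a plain tensor.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(3, 3)

m = M()
msg = ""
try:
    m.l1 = torch.randn(2, 3)  # tensor assigned where a module is expected
except TypeError as e:
    msg = str(e)
print(msg)
```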

I’ve tried wrapping the tensor in nn.Parameter(), which leads to the same error message.

The next option would be to create a custom Module just to wrap the tensor returned by torch.concat() in an nn.Module, but there is probably a better way.

How would you solve this issue?

I don’t fully understand the use case, as a tensor cannot be used as or transformed into an nn.Module.
Could you describe the use case a bit more and how the tensor should be used?

I’m trying to use the result of torch.concat() further in my forward pass, which looks like this:

  def forward(self, A, B):
      self.flat_embeddings = self.flatten(self.embeddings(A))
      self.combined = torch.concat((self.flat_embeddings, B), 1)
      self.l1 = self.relu(self.combined)  # <-- error here

So should I use something different from torch.concat(), or is feeding a single layer with two inputs not possible in PyTorch?

self.combined is a single tensor which is passed to an nn.ReLU and should work.
However, I guess you are trying to assign the output to self.l1, which seems to be an nn.Module, and that is wrong.
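To illustrate: store intermediate results in local variables and pass them through the modules, rather than overwriting the modules themselves. This is a hypothetical sketch of the model described above; the layer names (embeddings, flatten, l1, relu) come from the snippet, but the sizes are assumptions:

```python
import torch
import torch.nn as nn

class TwoInputModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embeddings = nn.Embedding(num_embeddings=10, embedding_dim=4)
        self.flatten = nn.Flatten()
        # input size: flattened embeddings (3 indices * dim 4) + 5 features of B
        self.l1 = nn.Linear(4 * 3 + 5, 8)
        self.relu = nn.ReLU()

    def forward(self, A, B):
        flat_embeddings = self.flatten(self.embeddings(A))
        # torch.cat returns a plain tensor; keep it in a local variable
        combined = torch.cat((flat_embeddings, B), dim=1)
        # feed the tensor through the modules instead of assigning to them
        return self.relu(self.l1(combined))

model = TwoInputModel()
A = torch.randint(0, 10, (2, 3))  # batch of 2, 3 embedding indices each
B = torch.randn(2, 5)             # batch of 2, 5 extra features each
out = model(A, B)
print(out.shape)  # torch.Size([2, 8])
```

This way the two inputs are combined into one tensor before the linear layer, and no module attribute is ever reassigned.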

Thanks, I overlooked that one should pass the result tensor (res) of the layers above to the next layer via self.layer(res).