Could you post more details, such as the stack trace from when the error occurs?
I would guess it's a TypeError caused by missing positional arguments.
The nn.Sequential container expects a single tensor as its input and returns a single tensor as its output activation.
You could write a custom nn.Module for multiple inputs or check e.g. this topic for more information and potential workarounds.
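A minimal sketch of such a custom nn.Module (the module name, layer sizes, and architecture are made up for illustration): it accepts two inputs in its forward method, which nn.Sequential cannot do directly.

```python
import torch
import torch.nn as nn

class TwoInputModel(nn.Module):
    """Hypothetical model taking two input tensors."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(6, 8)
        self.out = nn.Linear(16, 2)

    def forward(self, x1, x2):
        # process each input separately, then combine the features
        h = torch.cat([self.fc1(x1), self.fc2(x2)], dim=1)
        return self.out(h)

model = TwoInputModel()
y = model(torch.randn(3, 4), torch.randn(3, 6))
print(y.shape)  # torch.Size([3, 2])
```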
Yes, this should be possible as long as you use custom layers that unpack the input in their forward method. Most of the PyTorch layers defined in the nn namespace would fail, since nn.Sequential passes a single argument to each module.
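To illustrate the idea, here is a small sketch (layer names and shapes are made up): each custom layer receives the tuple as a single argument, unpacks it inside forward, and returns a new tuple for the next layer, so the whole chain still works inside nn.Sequential.

```python
import torch
import torch.nn as nn

class TupleLinear(nn.Module):
    """Hypothetical layer that passes a pair of tensors through nn.Sequential."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.lin_a = nn.Linear(in_features, out_features)
        self.lin_b = nn.Linear(in_features, out_features)

    def forward(self, inputs):
        # unpack the tuple inside forward ...
        a, b = inputs
        # ... and repack it for the next module in the container
        return self.lin_a(a), self.lin_b(b)

model = nn.Sequential(
    TupleLinear(4, 8),
    TupleLinear(8, 2),
)

# the tuple is passed as a single argument through the container
a, b = model((torch.randn(3, 4), torch.randn(3, 4)))
print(a.shape, b.shape)  # torch.Size([3, 2]) torch.Size([3, 2])
```

A built-in layer such as nn.Linear placed in this container would raise an error, since it cannot handle a tuple input.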