I have a model where each input from a list of inputs gets processed by its own Embedding and the results are concatenated. The underlying Embedding objects don’t have to be consistent with each other in any way.
I subclass my model from Transformer and store the Embeddings in a plain Python list. But this list does not show up in transformer.named_parameters(), its contents don't get moved to CUDA when I transfer the transformer object to CUDA, and in general the module machinery ignores it. Is there an alternative way to store these Embedding objects so that Transformer treats them correctly?
If I only had a couple of them I could make each one a separate member of my subclass, but in my case I need to store them in something like a list.
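Here is a minimal sketch of the setup I mean (class name, vocabulary sizes, and hyperparameters are illustrative), showing that the embeddings stored in a plain list are invisible to `named_parameters()`:

```python
import torch.nn as nn

class MyTransformer(nn.Transformer):
    def __init__(self, vocab_sizes, dim):
        super().__init__(d_model=dim, nhead=2,
                         num_encoder_layers=1, num_decoder_layers=1)
        # Plain Python list: nn.Module does not register these, so they
        # are missing from named_parameters(), .to()/.cuda(), state_dict().
        self.embeddings = [nn.Embedding(v, dim) for v in vocab_sizes]

model = MyTransformer([10, 20, 30], dim=8)
names = [n for n, _ in model.named_parameters()]
print(any("embeddings" in n for n in names))  # False
```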