Multi-GPU model sharding

I'm implementing a seq2seq model with PyTorch 0.4.1. For the encoder, I want to keep all of its parameters and tensors on cuda:0, and for the decoder on cuda:1. Can I do that with encoder.to('cuda:0') and decoder.to('cuda:1')? I found that the input tensors of the encoder are not on GPU 0. Is there a command to move all parameters and tensors of the encoder onto GPU 0?

The calls you've posted should already move the encoder to GPU 0 and the decoder to GPU 1.
You would just have to make sure the input tensors are on the right device as well.
Have a look at this small example for model sharding.
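For reference, here is a minimal sketch of that pattern, with made-up Encoder/Decoder modules standing in for the real ones: each part is moved to its own device with .to(), and the input and the encoder's output are explicitly moved to the device of the module that consumes them.

```python
import torch
import torch.nn as nn

# Hypothetical toy modules standing in for the real encoder/decoder.
class Encoder(nn.Module):
    def __init__(self, input_size=10, hidden_size=20):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)

    def forward(self, x):
        _, hidden = self.rnn(x)
        return hidden

class Decoder(nn.Module):
    def __init__(self, hidden_size=20, output_size=10):
        super().__init__()
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, hidden):
        return self.fc(hidden)

# Place each sub-module on its own GPU.
encoder = Encoder().to('cuda:0')
decoder = Decoder().to('cuda:1')

# The input must live on the same device as the encoder ...
x = torch.randn(8, 5, 10, device='cuda:0')
hidden = encoder(x)

# ... and the encoder output must be moved to the decoder's device.
out = decoder(hidden.to('cuda:1'))
print(out.device)  # cuda:1
```

Calling encoder.to('cuda:0') moves all registered parameters and buffers of the module; only tensors you create yourself (inputs, targets, intermediate results passed between devices) need an explicit .to() call.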

Yeah, I will take a look at it. Thanks a lot!