How to train identical torch.nn.Modules with different parameters in parallel on the GPU

Say I have two instances obj1 and obj2 of a PyTorch nn.Module like this one:

import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init


class Embedding(nn.Module):

    def __init__(self, emb_dim, dropout=0):
        super().__init__()
        self.lin1 = nn.Linear(1, emb_dim)
        init.xavier_uniform_(self.lin1.weight, gain=np.sqrt(2.0))

    def forward(self, values):
        return F.relu(self.lin1(values))

And I have two tensors of data, t1 and t2, and I would like to run:

obj1(t1)
obj2(t2)

in parallel on the GPU. Intuitively this should be possible, since I am executing the same operations, just on different data. How can I assign these two operations to different GPUs so that they execute concurrently?
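To make the question concrete, here is a minimal sketch of what I have in mind: each module and its input tensor are moved to a separate device (assuming two GPUs are visible as cuda:0 and cuda:1; the CPU fallback is only there so the snippet runs anywhere). My understanding is that CUDA kernel launches are asynchronous, so the two forward passes could overlap once they live on different devices, but I am not sure this is the right approach:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Embedding(nn.Module):
    def __init__(self, emb_dim, dropout=0):
        super().__init__()
        self.lin1 = nn.Linear(1, emb_dim)
        nn.init.xavier_uniform_(self.lin1.weight)

    def forward(self, values):
        return F.relu(self.lin1(values))


# Pick two devices; fall back to CPU when fewer than two GPUs are available.
if torch.cuda.device_count() >= 2:
    dev1, dev2 = torch.device("cuda:0"), torch.device("cuda:1")
else:
    dev1 = dev2 = torch.device("cpu")

obj1 = Embedding(emb_dim=4).to(dev1)
obj2 = Embedding(emb_dim=4).to(dev2)

t1 = torch.randn(8, 1, device=dev1)
t2 = torch.randn(8, 1, device=dev2)

# CUDA kernel launches are asynchronous from the host's point of view,
# so on two separate GPUs these calls should be able to overlap
# without explicit threading.
out1 = obj1(t1)
out2 = obj2(t2)
```

Is this enough to get concurrent execution, or do I need something like CUDA streams or multiprocessing on top of it?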