Say I have two instances, obj1 and obj2, of a PyTorch nn.Module like this one:
    class Emb(nn.Module):
        def __init__(self, emb_dim, dropout=0):
            super().__init__()
            self.lin = nn.Linear(1, emb_dim)
        def forward(self, values):
            return self.lin(values)
And I have two tensors of data, t1 and t2, and I would like to run obj1(t1) and obj2(t2)
in parallel on the GPU. Intuitively this should be possible, since I am executing the same operations, just on different data. How can I assign these two operations to different GPUs so that they execute concurrently?
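To make the setup concrete, here is a minimal sketch of what I mean. The class name `Emb` and the tensor shapes are just placeholders; the device selection falls back to CPU when fewer than two GPUs are available. Since CUDA kernel launches are asynchronous, I would expect the two forward passes to overlap once the modules live on separate devices:

```python
import torch
import torch.nn as nn

class Emb(nn.Module):
    def __init__(self, emb_dim, dropout=0):
        super().__init__()
        self.lin = nn.Linear(1, emb_dim)
        self.drop = nn.Dropout(dropout)

    def forward(self, values):
        return self.drop(self.lin(values))

# Pick a separate device for each instance; fall back to CPU if
# fewer than two GPUs are present (placeholder device indices).
dev1 = torch.device("cuda:0" if torch.cuda.device_count() >= 1 else "cpu")
dev2 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

obj1 = Emb(emb_dim=8).to(dev1)
obj2 = Emb(emb_dim=8).to(dev2)

# Each tensor must live on the same device as its module.
t1 = torch.randn(4, 1, device=dev1)
t2 = torch.randn(4, 1, device=dev2)

# On CUDA these launches are enqueued asynchronously, so the second
# call does not wait for the first to finish on a different device.
out1 = obj1(t1)
out2 = obj2(t2)
```

Is moving each module and tensor to its own device like this enough, or do I need explicit streams or multiprocessing to get true concurrency?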