How to index with torch.cuda.LongTensor

Assume proposals is a torch.cuda.FloatTensor of size (some_big_num, 4). Then the lines

scores, idx = torch.topk(scores, pre_nms_topN, 0, sorted=True)
proposals = proposals.index_select(0, idx)

raise the error “index_select received an invalid combination of arguments - got (int, !torch.cuda.LongTensor!), but expected (int dim, torch.LongTensor index)”. I can work around it with

scores, idx = torch.topk(scores, pre_nms_topN, 0, sorted=True)
proposals = proposals.index_select(0, idx.cpu())

But this incurs a data transfer between the GPU and the CPU. Is there a more elegant solution? This issue comes up often when implementing layers that are expected to run on the GPU.
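One hint in the message itself: the expected signature lists torch.LongTensor (a CPU type), which suggests the call was dispatched to the CPU implementation, i.e. proposals itself may not actually be on the GPU. A quick sanity check (a sketch, using the variable names from the snippet above):

# Sketch: print the device of each operand before the failing call to see
# which one is still on the CPU.
print(proposals.is_cuda)   # expected: True (torch.cuda.FloatTensor)
print(idx.is_cuda)         # expected: True (torch.cuda.LongTensor)

# If proposals turns out to be a CPU tensor, moving it to the GPU once
# avoids the per-call idx.cpu() round-trip:
proposals = proposals.cuda()
proposals = proposals.index_select(0, idx)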


I think your proposals Tensor in this case must be on the CPU.

>>> x = torch.cuda.FloatTensor(10)
>>> x

 0.4566
 0.3533
 0.4751
 0.8944
 0.6360
 0.7817
 0.9507
 0.5861
 0.4613
 0.4808
[torch.cuda.FloatTensor of size 10 (GPU 0)]

>>> scores, idx = torch.topk(x, 2, 0, sorted=True)
>>> scores

 0.9507
 0.8944
[torch.cuda.FloatTensor of size 2 (GPU 0)]

>>> idx

 6
 3
[torch.cuda.LongTensor of size 2 (GPU 0)]

>>> proposals = torch.cuda.FloatTensor(10)
>>> proposals.index_select(0, idx)

 0.3681
 0.9374
[torch.cuda.FloatTensor of size 2 (GPU 0)]
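
Applied back to the snippet in the question (a sketch, assuming scores and proposals are both CUDA tensors as stated), the same calls work end-to-end on the GPU without any .cpu() round-trip:

# Sketch: when both operands live on the GPU, topk returns a
# torch.cuda.LongTensor index that index_select accepts directly.
scores, idx = torch.topk(scores, pre_nms_topN, 0, sorted=True)
proposals = proposals.index_select(0, idx)   # no idx.cpu() needed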

@killeent I don’t understand. Doesn’t your example show that it works fine if both the tensor and the indices are on the GPU? This is certainly how it currently works for me (but maybe this was introduced in PyTorch 0.2).


I also get a similar error, though I checked and all my data seems to be on the GPU. The message I get is:

*** TypeError: torch.index_select received an invalid combination of arguments - got (torch.cuda.FloatTensor, int, torch.cuda.FloatTensor), but expected (torch.cuda.FloatTensor source, int dim, torch.cuda.LongTensor index)
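
Note the types in that message: the index passed to index_select is a torch.cuda.FloatTensor, while the function expects a torch.cuda.LongTensor. A minimal sketch of the usual fix (with hypothetical names, idx for the index tensor and data for the tensor being indexed) is to cast the index to long rather than moving anything off the GPU:

# Sketch with hypothetical names: idx is the float-typed index tensor and
# data is the tensor being indexed. .long() keeps idx on the GPU and only
# changes its type to torch.cuda.LongTensor.
idx = idx.long()
result = data.index_select(0, idx)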

my question is at:

I’ve got the same problem as OP.
