Can I use a dictionary in an extension of torch.autograd.Function when using the GPU?

I have the following code in an extension of torch.autograd.Function:

self.avarwic_infor = {
    c: {
        'ind_c': ind_range[y == c],
        'mask_c': y == c,
        'prob_c': torch.FloatTensor([torch.sum(y == c)]) / torch.FloatTensor([self.N])
    } for c in torch.range(0, self.C - 1).type('torch.ByteTensor')
}

where y is a LongTensor, and self.N and self.C are Python longs.

It works fine on the CPU, but I get the following error when the GPU is used:

TypeError: indexing a tensor with an object of type ByteTensor. The only supported types are integers, slices, numpy scalars and torch.ByteTensor.

Is this error because I am using a dictionary and the GPU simultaneously, or did I do something else wrong?

Thank you very much!


You need to use torch.cuda.ByteTensor for CUDA tensor indexing.
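
For illustration, here is a minimal sketch of that fix (the tensor names and values are made up, and it assumes the pre-0.4 API used in this thread, where boolean masks are ByteTensors):

import torch

# Keep the mask on the same device as the tensor it indexes.
y = torch.LongTensor([0, 1, 1, 2]).cuda()             # labels on the GPU
ind_range = torch.arange(0, y.size(0)).long().cuda()  # indices on the GPU

c = 1
mask_c = y == c            # comparing a CUDA tensor yields a torch.cuda.ByteTensor
ind_c = ind_range[mask_c]  # works: mask and indexed tensor live on the same device

# Indexing a CUDA tensor with a CPU torch.ByteTensor (or the reverse)
# raises the TypeError above; if the mask ends up on the wrong device,
# move it first, e.g. mask_c = mask_c.cuda().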


I really appreciate your answer, but the next problem is how I can tell whether the function is running on the GPU or the CPU. Is there a flag that indicates it? I tried to pass a bool argument from outside of the function, but another error is raised:

RuntimeError: expected a Variable argument, but got bool

Is it because torch.autograd.Function accepts only Variable arguments? If so, how can I determine whether the function is running on the GPU or not?

Thank you!

You can check variable.data.is_cuda.
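
A minimal sketch of both checks, assuming the old-style Function API used in this thread (MyFunc is a hypothetical example, not from the original post): inside forward() the inputs arrive as raw tensors, so tensor.is_cuda works directly, while outside the Function you check the wrapped tensor via variable.data.is_cuda.

import torch
from torch.autograd import Variable, Function

class MyFunc(Function):
    def forward(self, input):
        # Old-style Function: forward() receives unwrapped tensors,
        # so the device can be read directly off the input.
        if input.is_cuda:
            ones = torch.cuda.FloatTensor(input.size()).fill_(1)
        else:
            ones = torch.FloatTensor(input.size()).fill_(1)
        return input + ones

    def backward(self, grad_output):
        # d(input + ones)/d(input) = 1, so pass the gradient through.
        return grad_output

v = Variable(torch.randn(3))
print(v.data.is_cuda)   # outside the Function: unwrap the Variable first
out = MyFunc()(v)       # old-style invocation: instantiate, then call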

Many thanks!! It works now!!