Cannot assign variable to GPU

Hi, I have some problems with my code.

First, I set the device variable as below.
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
Then, I used it like below.

batch_iterator = torchtext.data.BucketIterator(
dataset=data, batch_size=self.batch_size,
sort=False, sort_within_batch=True,
sort_key=lambda x: len(x.src),
device=device, repeat=False)

The rest of the code is as follows:

batch_generator = batch_iterator.__iter__()
for batch in batch_generator:
input_variables, input_lengths = batch.src
target_variables = batch.tgt

The problem is that when I print input_variables, one of the tensors is on the CPU:
… bunch of right results …
tensor([[ 8, 3, 569, 466, 1533, 11, 481, 13]], device='cuda:0') <class 'torch.Tensor'>
tensor([[ 8, 3, 5, 49, 4, 661]], device='cuda:0') <class 'torch.Tensor'>
tensor([[ 320, 1751, 68, 2, 7, 4, 42, 12]], device='cuda:0') <class 'torch.Tensor'>
tensor([[ 35, 2, 7, 3, 2428, 2384, 2045]], device='cuda:0') <class 'torch.Tensor'>
tensor([[912]]) <class ‘torch.Tensor’>
(End of the output. I set the batch size to one just to make checking easy.)

I think that last line is why Python raised "Expected object of type torch.cuda.LongTensor but found type torch.LongTensor".

How can I fix the problem?
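One way to narrow this down is to check each batch's device inside the loop and move any stragglers explicitly. This is only a sketch with plain tensors standing in for the iterator's batches (the fake `batches` list and the loop are my own, not from the original code):

```python
import torch

# Pick the target device, as in the original snippet.
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')

# Hypothetical stand-ins for the iterator's batches; here one tensor
# is deliberately left on the CPU to mimic the symptom above.
batches = [torch.tensor([[8, 3, 5, 49, 4, 661]], device=device),
           torch.tensor([[912]])]  # left on the CPU

for i, t in enumerate(batches):
    if t.device != device:
        # Move stragglers to the target device defensively.
        batches[i] = t.to(device)

# Every batch now lives on the target device.
assert all(t.device.type == device.type for t in batches)
```

This masks the symptom rather than curing it, but printing which index needed moving tells you where the CPU tensor is coming from.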


Oh, I'm sorry.
I found out what the problem is.
The problem is the validation dataset.

I assigned cuda to the iterator for the training set, but not for the validation set.
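In other words, the device has to be threaded through for every split, not just the training one. A minimal sketch of the idea in plain PyTorch (`make_batches` is a hypothetical stand-in for the BucketIterator construction above, and the tiny datasets are made up):

```python
import torch

device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')

def make_batches(dataset, device):
    # Hypothetical stand-in for building an iterator: the key point is
    # that `device` is passed in for every dataset split.
    return [t.to(device) for t in dataset]

train_data = [torch.tensor([[8, 3, 5]]), torch.tensor([[912]])]
valid_data = [torch.tensor([[35, 2, 7]])]

# Pass the device for BOTH splits, not just the training set.
train_batches = make_batches(train_data, device)
valid_batches = make_batches(valid_data, device)

assert all(t.device.type == device.type for t in train_batches + valid_batches)
```

With the real iterators, that just means giving `device=device` to both the training and validation constructors.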