Expected object of backend CPU but got backend CUDA for argument

I used .to(device) to move both the model and the data to the GPU, but the following error occurs.

My source: https://github.com/yudonggeun/text2speech

Traceback (most recent call last):
  File "/home/modeep/Documents/GitHub/Text2Speech/train.py", line 493, in <module>
    main()
  File "/home/modeep/Documents/GitHub/Text2Speech/train.py", line 490, in main
    train_init(config.model_dir, config, multi_speaker)
  File "/home/modeep/Documents/GitHub/Text2Speech/train.py", line 285, in train_init
    config=config, multi_speaker=multi_speaker)
  File "/home/modeep/Documents/GitHub/Text2Speech/train.py", line 368, in train
    y_pred = model(multi_speaker, inputs, sorted_lengths, loss_coeff, mel_targets, linear_targets, stop_token_target, speaker_id)
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/modeep/Documents/GitHub/Text2Speech/tacotron/tacotron.py", line 39, in forward
    char_embedded_inputs = F.embedding(inputs, self.char_embed_table)
  File "/home/modeep/anaconda3/envs/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1506, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'

I’m not sure this is the cause, but you also want to call .to(device) on your criterion!

I want to use multiple GPUs.

So how do I change the .to(device) call?

The way you did it is fine. Just add the .to(device) on your criterion; I don’t think you need to wrap the criterion in DataParallel.
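To illustrate the suggestion above, here is a minimal, self-contained sketch of moving the model, the input tensors, and the criterion all to the same device (the model and tensor names here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)           # model parameters on the device
criterion = nn.CrossEntropyLoss().to(device)  # criterion too, as suggested above

inputs = torch.randn(4, 10).to(device)        # every input tensor as well
targets = torch.randint(0, 2, (4,)).to(device)

loss = criterion(model(inputs), targets)
```

The key point is that every tensor that participates in the forward pass has to end up on the same device; a single tensor left on the CPU is enough to trigger a backend-mismatch error like the one in the traceback.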

I deleted that block and ran it again, but the same error occurs.

The code I deleted:

    if torch.cuda.device_count() > 1:
        print("Use", torch.cuda.device_count(), "GPUs")
        model = nn.DataParallel(model)
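This is only a guess from the traceback, but the error points at `F.embedding(inputs, self.char_embed_table)`: if `char_embed_table` is stored as a plain tensor attribute rather than an `nn.Parameter` (or a registered buffer), then `model.to(device)` and DataParallel will not move it, so the embedding weight stays on the CPU while the index tensor is on CUDA. A minimal sketch of the difference (the `Embed` class and the `plain_table` name are hypothetical):

```python
import torch
import torch.nn as nn

class Embed(nn.Module):
    def __init__(self):
        super().__init__()
        # A plain tensor attribute is NOT tracked by the module,
        # so model.to(device) leaves it where it was created.
        self.plain_table = torch.randn(5, 8)
        # A registered nn.Parameter IS tracked, and moves with the module.
        self.char_embed_table = nn.Parameter(torch.randn(5, 8))

model = Embed()
if torch.cuda.is_available():
    model = model.to("cuda")
    # model.plain_table.device is still cpu -> backend mismatch in forward
    # model.char_embed_table.device is cuda:0 -> works as expected
```

If the embedding table in tacotron.py is created this way, registering it as an `nn.Parameter` (or switching to `nn.Embedding`) should let both `.to(device)` and DataParallel move it correctly.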