Error while running the pre-trained WaveGlow model on a specific GPU device

Hello,

I am trying to execute the pre-trained WaveGlow example given here: https://pytorch.org/hub/nvidia_deeplearningexamples_waveglow/

The example works fine if executed as provided. However, it fails if I change the device from “cuda” to “cuda:1”, with the following error:

File "waveglow_example.py", line 76, in inference
    _, mel, _, _ = self.tacotron2_model.infer(data)
File "/home/ubuntu/.cache/torch/hub/nvidia_DeepLearningExamples_torchhub/PyTorch/SpeechSynthesis/Tacotron2/tacotron2/model.py", line 651, in infer
    encoder_outputs)
 File "/home/ubuntu/.cache/torch/hub/nvidia_DeepLearningExamples_torchhub/PyTorch/SpeechSynthesis/Tacotron2/tacotron2/model.py", line 546, in inference
    not_finished = not_finished*dec
RuntimeError: expected device cuda:0 but got device cuda:1

Sample code:

import torch
import numpy as np
from scipy.io.wavfile import write

# load the pre-trained WaveGlow vocoder and move it to the second GPU
waveglow = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_waveglow')
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to('cuda:1')
waveglow.eval()

# load the pre-trained Tacotron2 model on the same GPU
tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
tacotron2 = tacotron2.to('cuda:1')
tacotron2.eval()

text = "hello world, I missed you"

sequence = np.array(tacotron2.text_to_sequence(text, ['english_cleaners']))[None, :]
sequence = torch.from_numpy(sequence).to(device='cuda:1', dtype=torch.int64)

# run the models
_, mel, _, _ = tacotron2.infer(sequence)
audio = waveglow.infer(mel)
audio_numpy = audio[0].data.cpu().numpy()
rate = 22050  # sampling rate of the pre-trained Tacotron2/WaveGlow models (22.05 kHz)

write("audio.wav", rate, audio_numpy)

Is there something I am missing here?

It looks like the model's inference code creates some tensors on the default CUDA device (cuda:0) internally, so moving the modules to cuda:1 produces this device mismatch. Could you please create an issue in the NVIDIA/DeepLearningExamples repository?
As a workaround, you could select GPU 1 by making it the only device visible to the process, i.e. run the script via CUDA_VISIBLE_DEVICES="1" python script.py, and leave the .cuda() and .to('cuda') calls as they are in the original example.
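
A minimal sketch of that workaround (the script name is just a placeholder): with CUDA_VISIBLE_DEVICES="1", the process sees a single GPU that is remapped to cuda:0, so the default "cuda" device ends up on the physical GPU 1.

# launch with only physical GPU 1 visible:
#   CUDA_VISIBLE_DEVICES="1" python script.py

import torch
import numpy as np
from scipy.io.wavfile import write

device = torch.device('cuda')  # resolves to the only visible GPU, i.e. physical GPU 1

waveglow = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_waveglow')
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to(device)
waveglow.eval()

tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
tacotron2 = tacotron2.to(device)
tacotron2.eval()

sequence = np.array(tacotron2.text_to_sequence("hello world, I missed you", ['english_cleaners']))[None, :]
sequence = torch.from_numpy(sequence).to(device=device, dtype=torch.int64)

_, mel, _, _ = tacotron2.infer(sequence)
audio = waveglow.infer(mel)
write("audio.wav", 22050, audio[0].data.cpu().numpy())

Alternatively, calling torch.cuda.set_device(1) before loading the models might also work, assuming the hub code only ever allocates on the current default device rather than a hard-coded cuda:0, but the environment variable is the more robust option.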
