RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. Can't use GPU with tacotron2

Shifting CUDA to CPU for Inferencing

I am trying to generate inference results from my trained Text-to-Speech Tacotron2 model on the CPU. The model was originally set up for GPU inference, but since no GPU is available I am moving it to the CPU. I have already made the required changes, such as map_location = torch.device('cpu').

The error is still not resolved. Could you please help me understand the issue and get it resolved? Thanks!
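For context, here is a minimal sketch of loading a checkpoint entirely on the CPU. It uses an in-memory buffer in place of the actual tacotron2 checkpoint file (an assumption for illustration); the map_location argument is what keeps CUDA out of the picture:

```python
import io
import torch

# Stand-in for a checkpoint file that was saved during training,
# possibly on a GPU machine.
buffer = io.BytesIO()
torch.save({'state_dict': {'weight': torch.randn(2, 2)}}, buffer)
buffer.seek(0)

# map_location remaps every storage onto the CPU, so no CUDA runtime is needed.
checkpoint = torch.load(buffer, map_location=torch.device('cpu'))
print(checkpoint['state_dict']['weight'].device)  # cpu
```

With a real checkpoint you would pass the file path instead of the buffer; the map_location usage is the same.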

I cannot reproduce the issue, as the torch.hub.load method already loads the model as described in the docs:

```
>>> import torch
>>> waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_waveglow', model_math='fp32')
Downloading: "https://github.com/NVIDIA/DeepLearningExamples/archive/torchhub.zip" to /root/.cache/torch/hub/torchhub.zip
Downloading checkpoint from https://api.ngc.nvidia.com/v2/models/nvidia/waveglow_ckpt_fp32/versions/19.09.0/files/nvidia_waveglowpyt_fp32_20190427
>>> waveglow
WaveGlow(
  (upsample): ConvTranspose1d(80, 80, kernel_size=(1024,), stride=(256,))
  (WN): ModuleList(
    (0): WN(
      (in_layers): ModuleList(
        (0): Conv1d(512, 1024, kernel_size=(3,), stride=(1,), padding=(1,))
        (1): Conv1d(512, 1024, kernel_size=(3,), stride=(1,), padding=(2,), dilation=(2,))
   [...]
```

Calling torch.load on it afterwards yields the expected seek error:

```
>>> torch.load(waveglow, map_location='cpu')
AttributeError: 'WaveGlow' object has no attribute 'seek'
```

PS: you can post code snippets by wrapping them in three backticks ```, which makes debugging easier and allows the code to be indexed for better search.

Is it capable of shifting from GPU to CPU? Also, what is this seek error indicating?

torch.hub.load loads the model onto the CPU, so there is no need to push it to the CPU again.
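You can verify this by checking the device of any parameter. A freshly constructed module (used here as a stand-in for the hub-loaded model, to avoid the download) lives on the CPU by default:

```python
import torch.nn as nn

# Stand-in module; a model returned by torch.hub.load behaves the same way.
model = nn.Linear(4, 4)

# All parameters stay on the CPU until .to('cuda') / .cuda() is called.
print(next(model.parameters()).device)  # cpu
```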

The seek error is raised because waveglow is an nn.Module instance and not a file object (which would be seekable).
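A quick way to see this: passing any nn.Module (a Linear layer here, standing in for waveglow) to torch.load trips over the missing file interface, since torch.load expects a path or a seekable file-like object:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)  # not a file path or file-like object

try:
    torch.load(model, map_location='cpu')
except AttributeError as e:
    # An nn.Module has no .seek method, so torch.load cannot treat it
    # as a file; the exact message wording varies by torch version.
    print(e)
```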
