Error in using cuda (ValueError: Expected a cuda device with a specified index or an integer, but got: )

I am getting an error when using cuda.

Here is the code:

import torch
import robust_loss_pytorch.general

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims = 4, float_dtype=torch.cuda.FloatTensor, device="cuda")

And here is the documentation of robust_loss_pytorch.adaptive.AdaptiveLossFunction:

Args:
  num_dims: The number of dimensions of the input to come.
  float_dtype: The floating point precision of the inputs to come.
  device: The device to run on (cpu, cuda, etc).

I got this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-414-22ef6dd72b45> in <module>()
      2 
      3 adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
----> 4     num_dims = 4, float_dtype=torch.cuda.FloatTensor, device="cuda")

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    130        (isinstance(device, str) and 'cuda' in device) or\
    131        (isinstance(device, torch.device) and device.type == 'cuda'):
--> 132         torch.cuda.set_device(self.device)
    133 
    134     self.distribution = distribution.Distribution()

~/anaconda3/envs/torch36/lib/python3.6/site-packages/torch/cuda/__init__.py in set_device(device)
    241             if this argument is negative.
    242     """
--> 243     device = _get_device_index(device)
    244     if device >= 0:
    245         torch._C._cuda_setDevice(device)

~/anaconda3/envs/torch36/lib/python3.6/site-packages/torch/cuda/_utils.py in _get_device_index(device, optional)
     32         else:
     33             raise ValueError('Expected a cuda device with a specified index '
---> 34                              'or an integer, but got: '.format(device))
     35     return device_idx

ValueError: Expected a cuda device with a specified index or an integer, but got: 
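
For what it's worth, I can reproduce the same ValueError outside the library: torch.cuda.set_device() apparently needs a concrete index, and a bare "cuda" device has none. A minimal sketch (assuming a machine with CUDA):

import torch

dev = torch.device("cuda")
print(dev.index)          # None -- no index, which is what set_device rejects
torch.cuda.set_device(0)  # works: an explicit integer index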

When I try this code:

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims = 4, float_dtype=torch.cuda.FloatTensor, device="cuda:0")

I got this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-415-c7998f8405f9> in <module>()
      2 
      3 adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
----> 4     num_dims = 4, float_dtype=torch.cuda.FloatTensor, device="cuda:0")

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    154               latent_alpha_init.clone().detach().to(
    155                   dtype=self.float_dtype,
--> 156                   device=self.device)[np.newaxis, np.newaxis].repeat(
    157                       1, self.num_dims),
    158               requires_grad=True))

TypeError: to() received an invalid combination of arguments - got (device=str, dtype=torch.tensortype, ), but expected one of:
 * (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)

Or when trying:

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(
    num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=torch.device("cuda"))

I got the same error as before:

ValueError: Expected a cuda device with a specified index or an integer, but got:

Would you please show me the correct usage of cuda here?

If you need more detailed information, please let me know!


Print this: torch.cuda.current_device(). Can you pass the value you get from that print to the 'device' argument? For example, if the value is 0, can you try as below?

robust_loss_pytorch.adaptive.AdaptiveLossFunction( num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=0)
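
For reference, a minimal sketch of that check (assuming CUDA is available):

import torch

idx = torch.cuda.current_device()  # integer index of the currently selected GPU
print(idx)                         # e.g. 0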


The value is 0. I then tried that code and still get an error:

TypeError                                 Traceback (most recent call last)
<ipython-input-418-2de402a52488> in <module>()
----> 1 robust_loss_pytorch.adaptive.AdaptiveLossFunction( num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=0)

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    154               latent_alpha_init.clone().detach().to(
    155                   dtype=self.float_dtype,
--> 156                   device=self.device)[np.newaxis, np.newaxis].repeat(
    157                       1, self.num_dims),
    158               requires_grad=True))

TypeError: to() received an invalid combination of arguments - got (device=int, dtype=torch.tensortype, ), but expected one of:
 * (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)

So, to() received an invalid combination of arguments. This happened both when we passed a string ("cuda:0") and when we passed an int (0). One of the valid signatures of to() is shown in the message:

(torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)

Can you try this? Here, we create a torch.device first:

cuda0 = torch.device('cuda:0')
adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=cuda0)
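
Outside the library, to() does accept a plain torch.device, e.g. (a small sketch, assuming a CUDA machine):

import torch

cuda0 = torch.device('cuda:0')
x = torch.zeros(3).to(cuda0)  # to() accepts a torch.device with no dtype involved
print(x.device)               # cuda:0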

Still not working. I am about to cry :pensive: :innocent:
Trying this code:

cuda0 = torch.device("cuda:0")
adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=cuda0)

I got this error:
TypeError                                 Traceback (most recent call last)
<ipython-input-420-7ab0f8c31272> in <module>()
      1 cuda0 = torch.device("cuda:0")
----> 2 adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(num_dims = 4, float_dtype=torch.cuda.FloatTensor, device=cuda0 )

~/anaconda3/envs/torch36/lib/python3.6/site-packages/robust_loss_pytorch/adaptive.py in __init__(self, num_dims, float_dtype, device, alpha_lo, alpha_hi, alpha_init, scale_lo, scale_init)
    154               latent_alpha_init.clone().detach().to(
    155                   dtype=self.float_dtype,
--> 156                   device=self.device)[np.newaxis, np.newaxis].repeat(
    157                       1, self.num_dims),
    158               requires_grad=True))

TypeError: to() received an invalid combination of arguments - got (device=torch.device, dtype=torch.tensortype, ), but expected one of:
 * (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format memory_format)
 * (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format memory_format)

When I execute the line below in Google Colab, it runs without issues:

adaptive = robust_loss_pytorch.adaptive.AdaptiveLossFunction(num_dims = 4, float_dtype=torch.float32, device="cuda:0")

Notice the change in the float_dtype parameter. Can you run with the above and see if it helps?
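
As far as I can tell, the reason is that to() expects a torch.dtype such as torch.float32, whereas torch.cuda.FloatTensor is a legacy tensor type; that is why the earlier attempts all failed inside to() with dtype=torch.tensortype. A minimal sketch of the distinction (assuming a CUDA machine):

import torch

x = torch.zeros(3)
y = x.to(dtype=torch.float32, device="cuda:0")  # torch.float32 is a torch.dtype -> accepted
print(y.dtype, y.device)                        # torch.float32 cuda:0
# x.to(dtype=torch.cuda.FloatTensor, device="cuda:0")  # TypeError: a tensor *type*, not a dtype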


Thanks, sir! It does work. However, what I am wondering is: all of my data and my model are on cuda, so if I use torch.float32, isn't it a mismatch of data type?
Again, thanks for such a caring response : )
You have a nice day, sir!


In the code, .to() is used, and the cuda value you give during initialization is passed to that function. So I believe your variables and model are being moved to cuda. Also, torch.float32 only specifies the precision (the dtype), not the device, so there is no mismatch with your data living on the GPU.

You can print the device of the model's parameters, e.g.:

import torchvision

model = torchvision.models.resnet18()
for param in model.parameters():
    print(param.device)
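
And if AdaptiveLossFunction registers its latent values as parameters (the requires_grad=True in your traceback suggests it is an nn.Module), the same check should work on the loss object itself, something like:

for name, param in adaptive.named_parameters():
    print(name, param.device, param.dtype)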


Thanks, sir! I really appreciate your answer. Wish you all the best. : )