What is the difference between these CUDA usage declarations?

I have just started using PyTorch and I have a question. It might be trivial, but I couldn’t figure out the difference between these two ways of using CUDA with a model:

    if use_cuda:
        model = model.cuda() # GPU model

and the second way is:

    if use_cuda:
        x, target = x.cuda(), target.cuda()

Thanks in Advance

I don’t really understand the question.

If you want to run your model on the GPU, you have to move both the model and the input/target tensors to the GPU using .cuda(). They are not alternatives; you need both.

If you don’t, you will get an error saying that the tensors and the model are not on the same device.
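Putting the two snippets together, a minimal sketch might look like this (the toy nn.Linear model and random data are just illustrative placeholders, not from the original question):

```python
import torch
import torch.nn as nn

# Move BOTH the model and the data; each .cuda() call alone is not enough.
use_cuda = torch.cuda.is_available()

model = nn.Linear(4, 2)              # toy model
x = torch.randn(8, 4)                # toy input batch
target = torch.randint(0, 2, (8,))   # toy targets

if use_cuda:
    model = model.cuda()                   # first snippet: parameters/buffers to GPU
    x, target = x.cuda(), target.cuda()    # second snippet: data to GPU

out = model(x)  # works because model and x are now on the same device
loss = nn.functional.cross_entropy(out, target)
```

If you move only one of the two, the forward pass raises a device-mismatch error instead.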
