Which device is a model/tensor stored on?

The common way is to start your code with:

use_cuda = torch.cuda.is_available()

Then, each time you create a new tensor/variable/module, just do:

if use_cuda:
    my_object = my_object.cuda()

That way you make sure everything is stored on the GPU when one is available (by default, without calling .cuda(), everything stays on the CPU). Note that for tensors, .cuda() returns a copy rather than moving the data in place, which is why the result is reassigned above; for nn.Module instances, .cuda() moves the parameters in place, so calling my_model.cuda() alone also works.
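With more recent PyTorch versions, the same idea is usually written device-agnostically with torch.device and .to(device), which covers both the GPU and CPU cases with one line. A minimal sketch (the tensor shapes and the Linear layer are just illustrative):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors: .to(device) returns a copy on the target device,
# so the result must be reassigned.
x = torch.randn(3, 4).to(device)

# Modules: .to(device) moves the parameters in place (and returns self,
# so the assignment is optional here).
model = torch.nn.Linear(4, 2).to(device)

out = model(x)
print(out.device)  # same device as `device`
```

You can also check where any tensor lives via its .device attribute, or where a module lives by inspecting one of its parameters, e.g. next(model.parameters()).device.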