Automatically move everything to gpu without calling `.cuda()`

Hi, I am a Theano user and new to PyTorch. I was wondering if there's any way to automatically move everything (modules, loss, variables) to the GPU without calling .cuda() for each of them. In Theano it can easily be done by setting the environment variable THEANO_FLAGS=device=gpu. It would be great if PyTorch had a similar mechanism. Thanks!


You can subclass the Variable class to automatically move its data to the GPU:

import torch
from torch import autograd

class Variable(autograd.Variable):
    def __init__(self, data, *args, **kwargs):
        # move the wrapped tensor to the GPU before constructing the Variable
        # (assumes CUDA is available)
        data = data.cuda()
        super(Variable, self).__init__(data, *args, **kwargs)

This won't help with models, though, obviously.


If you judiciously use things like tensor.new you should be able to reduce the number of .cuda() calls to two: one for your data and one for the model.
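As a minimal sketch of that idea (the shapes and variable names here are just for illustration): tensor.new creates a new tensor of the same type, and hence on the same device, as the original, so derived tensors need no explicit .cuda() call.

```python
import torch

# Imagine x is your input batch, already moved to the GPU once.
x = torch.zeros(4, 3)

# x.new(...) allocates a tensor of the same type/device as x,
# so this works unchanged whether x lives on CPU or GPU.
mask = x.new(x.size(0)).zero_()
```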


I have a suggestion. tensor.new() isn't obvious when reading the code. Maybe torch could mimic NumPy and provide torch.empty_like(tensor), torch.ones_like(tensor), and torch.zeros_like(tensor) instead? Or does new() do something more sophisticated?


One option is something like torch.zeros(...).type_as(tensor), but I agree that those convenience functions would be appreciated.

Yes, we've been thinking of adding the *_like functions, like NumPy has.
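(For readers finding this thread later: these *_like functions were in fact added in subsequent PyTorch releases, and they match the input's shape, dtype, and device:)

```python
import torch

t = torch.ones(2, 3)
z = torch.zeros_like(t)  # same shape, dtype, and device as t
```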