Can we make type conversions less verbose in pytorch?


While coding in pytorch feels way “closer to the essence” than in static graph frameworks, I can’t help but notice that I spend a third of my coding time converting stuff between Variables, CPU tensors, GPU tensors, numpy arrays, sparse tensors, etc.

To inject data into pytorch, one first has to call torch.FloatTensor(data), then Variable(…), and then .cuda(). To cast the loss back into a Python float, one must do .data.cpu().numpy()[0]. It isn’t that bad, but it is a bit too verbose in my humble opinion.
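The round trip described above looks roughly like this sketch (pre-0.4-style API with `Variable`; the toy `mean()` loss is just an illustration, and `float(...)` is used for the final step since the exact indexing behaves differently across PyTorch versions):

```python
import torch
from torch.autograd import Variable  # pre-0.4 style wrapper

# Injecting a plain Python list: tensor -> Variable -> (optionally) GPU.
data = [1.0, 2.0, 3.0]
x = Variable(torch.FloatTensor(data))
if torch.cuda.is_available():
    x = x.cuda()

# A toy "loss": the mean of the inputs.
loss = x.mean()

# Casting the loss back to a plain Python float: .data to unwrap the
# Variable, .cpu() in case it lives on the GPU, .numpy() to leave torch,
# then a final conversion to a Python float.
loss_value = float(loss.data.cpu().numpy())
```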

Is there a common set of “best practices” to minimize this kind of code?

p.s. Personally, I found myself using a mini-library like this one to make code less verbose. If it is of any help to anyone else, help yourself :slight_smile:


Just as a note, to get your loss back as a Python float, you can just do .data[0]; there is no need to convert to CPU and numpy.
Also you have access to torch.cuda.FloatTensor() directly if you need.
And removing Variable altogether is a work in progress :slight_smile: (they will be merged into the Tensor class).
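The two shortcuts mentioned above can be sketched like so (the `[0.25]` stand-in loss is hypothetical; on pre-0.4 PyTorch `.data[0]` already yields a Python float, while on later versions wrapping it in `float(...)` keeps the sketch working):

```python
import torch
from torch.autograd import Variable

# A stand-in for a scalar loss produced during training.
loss = Variable(torch.FloatTensor([0.25]))

# Shorter extraction: index into .data directly, no .cpu()/.numpy() trip.
value = float(loss.data[0])

# torch.cuda.FloatTensor allocates directly on the GPU, skipping the
# separate .cuda() call -- only usable when CUDA is available.
if torch.cuda.is_available():
    gpu_t = torch.cuda.FloatTensor([1.0, 2.0])
```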

I personally don’t find it that verbose, but that may be because I never use numpy and so never have to convert stuff explicitly. In my experience, avoiding numpy eliminates a lot of this kind of conversion code.