Casting to FloatTensor moves the tensor to CPU?

I’m curious why the .cuda() call is necessary in this line of code when targs is already on the GPU:

ts = targs[::self.ns].type(torch.FloatTensor).cuda() - 1.

Without the .cuda() I get an error about a CPU backend. Does casting really move a tensor to another device, and what’s the reasoning behind that?

Also, I’m hoping that behind the scenes the result is being kept on the GPU?

I’m guessing I should be specifying it as a torch.cuda.FloatTensor instead?

.type(torch.FloatTensor) does move the tensor to the CPU, since torch.FloatTensor is specifically the CPU float tensor type (the CUDA counterpart is torch.cuda.FloatTensor).
I assume you would like to change the dtype of the tensor without changing its device?
If so, you could use
targs[::self.ns].float()
or
targs[::self.ns].to(torch.float)
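
Here’s a minimal sketch of the difference (t is a hypothetical stand-in for targs; it assumes a CUDA device is available):

import torch

t = torch.zeros(4, dtype=torch.long, device="cuda")  # e.g. integer targets on the GPU
print(t.type(torch.FloatTensor).device)  # cpu    -- the legacy .type() call moves the data
print(t.float().device)                  # cuda:0 -- .float() only changes the dtype
print(t.to(torch.float).device)          # cuda:0 -- so does .to(dtype)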


That’s perfect. I was looking for a solution that would let me run on both the CPU (for debugging) and the GPU without having to change the code. Thanks!
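
For anyone after the same thing, a common device-agnostic pattern looks like this (just a sketch; the tensor contents and the stride 2 standing in for self.ns are illustrative):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
targs = torch.randint(0, 10, (8,), device=device)  # hypothetical integer targets
ts = targs[::2].to(torch.float) - 1.  # changes only the dtype, so ts stays on `device`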
