Hi, I have a similar question here: if I have already created the tensor in CUDA, do I still need to explicitly copy it? In other words, should I create the tensor and then copy it over, like this:
torch.tensor(batchD, dtype=torch.float).cuda(device=self.device, non_blocking=True)
or create the tensor directly in CUDA?
torch.tensor(batchD.state, device=self.device, dtype=torch.float)
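For reference, here is a minimal self-contained sketch of the two variants I mean, with batchD and self.device replaced by stand-ins so it runs on its own (assuming a CUDA-capable machine):

import torch

device = torch.device('cuda:0')        # stand-in for self.device
batchD = [[0.0, 1.0], [2.0, 3.0]]      # stand-in for the real batch data

# Variant 1: build the tensor on the CPU first, then copy it to the GPU.
# (non_blocking=True only overlaps the copy when the source is in pinned memory.)
t1 = torch.tensor(batchD, dtype=torch.float).cuda(device=device, non_blocking=True)

# Variant 2: place the tensor on the GPU at creation time,
# so there is no separate .cuda() call in my code.
t2 = torch.tensor(batchD, dtype=torch.float, device=device)

print(t1.device, t2.device)  # both report cuda:0

Both end up on the same device; I just want to know whether the explicit copy in variant 1 buys me anything, or whether variant 2 is the preferred way.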