To use the GPU (CUDA), I have to call to(device) on each torch object to move its data onto the GPU, which is a little annoying.
Is there any way to make the GPU the default accelerator, without setting the device on each torch object?
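For context, this is the repetitive pattern I mean (a minimal sketch; the fallback to CPU is just so it runs anywhere):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Every tensor has to be moved explicitly:
x = torch.randn(4, 4).to(device)
y = torch.zeros(4).to(device)
print(x.device == y.device)
```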
By default, torch creates objects on the CPU, and you have to transfer whatever you need to other devices yourself.
You can define a global variable like device = torch.device("cuda"), but you still need to pass it when creating new objects.
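That said, if you are on PyTorch 2.0 or newer, torch.set_default_device lets you change the device used for newly created tensors process-wide, so you no longer need to call to(device) on each one. A minimal sketch (the CPU fallback is only so the snippet runs on machines without a GPU):

```python
import torch

# Pick the GPU if one is available; otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Available since PyTorch 2.0: make this the default for new tensors.
torch.set_default_device(device)

x = torch.ones(3)        # created on `device`, no explicit .to(...) needed
print(x.device.type)
```

Note this affects tensor factory functions (torch.ones, torch.randn, etc.), but tensors created before the call stay where they were. On older versions, torch.set_default_tensor_type was the rough equivalent, but it is deprecated, so prefer set_default_device where possible.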