I am learning transfer learning from a tutorial and I found this. I understand that we need to move the model to the GPU, but why is that necessary? That's not the case with Keras and similar libraries, right?
Keras may transfer the model parameters and data to the GPU automatically behind the scenes, while in PyTorch you are responsible for doing it yourself.
I guess it comes down to the balance of convenience vs. flexibility. I personally like to manually specify which device each tensor is placed on, which, for example, makes it easy to share a model across devices.
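In practice that manual placement is just a couple of `.to(device)` calls on the model and on each batch of data. A minimal sketch (the toy `nn.Linear` model here is my own placeholder, not the network from your tutorial):

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2)  # stand-in for a pretrained model
model = model.to(device)  # moves all parameters and buffers to `device`

inputs = torch.randn(4, 10).to(device)  # the data must live on the same device

outputs = model(inputs)  # works because model and inputs share a device
print(outputs.shape)
```

If you forget one of the two `.to(device)` calls, PyTorch raises a runtime error about tensors being on different devices, which is the flip side of the explicit control described above.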