PyTorch memory issue

Hi @ptrblck


While running my PyTorch model after adding ".to(device)", it takes about 1GB of GPU memory, which seems too high. When I run the equivalent Keras model, it takes only 93MB.

Can you please help me as soon as possible?

Are you seeing a large memory difference also during the actual training/inference?
I’m not sure how Keras/TF initializes the CUDA context, but the memory usage in PyTorch should be due to the creation of the CUDA context on the device.
You can check this by comparing the memory usage reported in nvidia-smi to print(torch.cuda.memory_allocated()).
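A minimal sketch of that comparison (assuming a CUDA-capable machine; the function name is just for illustration). torch.cuda.memory_allocated() only counts bytes occupied by your tensors, while nvidia-smi shows the total process footprint including the CUDA context, so a gap of several hundred MB between the two is expected:

```python
import torch

def report_gpu_memory():
    """Print PyTorch's own memory counters; compare them to nvidia-smi."""
    if not torch.cuda.is_available():
        print("No CUDA device available")
        return 0
    device = torch.device("cuda")
    # A tiny tensor is enough to force creation of the CUDA context.
    x = torch.randn(10, 10, device=device)
    allocated = torch.cuda.memory_allocated(device)  # bytes used by tensors
    reserved = torch.cuda.memory_reserved(device)    # bytes held by the caching allocator
    print(f"allocated: {allocated / 1024**2:.2f} MB")
    print(f"reserved:  {reserved / 1024**2:.2f} MB")
    # nvidia-smi will report noticeably more than these numbers:
    # the difference is mostly the CUDA context, not your model.
    return allocated

report_gpu_memory()
```

If the large number you see in nvidia-smi does not show up in memory_allocated(), it is the context overhead rather than your model's parameters or activations.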