PyTorch GPU memory management

I tried to measure the GPU memory occupied when launching a DL model process. When I launched the process in conda env1 (CUDA 10, PyTorch 1.7), nvidia-smi reported a total of 880 MB in use, but it showed 1912 MB when I measured in conda env2 (CUDA 11, PyTorch 1.11). I am a little confused; could someone give some further explanation? Thanks so much.
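
In case it helps to narrow this down, here is a minimal sketch (my own, assuming a single visible GPU and that `nvidia-smi` is on the PATH; the helper `nvidia_smi_used_mb` is just for illustration) that separates what the PyTorch caching allocator has actually allocated/reserved from the total reported by nvidia-smi, which also includes the CUDA context:

```python
import subprocess

import torch

def nvidia_smi_used_mb(device_index: int = 0) -> int:
    """Memory used on the device as reported by nvidia-smi, in MB."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(device_index),
         "--query-gpu=memory.used", "--format=csv,noheader,nounits"]
    )
    return int(out.decode().strip())

# Force creation of the CUDA context with a tiny allocation.
_ = torch.zeros(1, device="cuda")
torch.cuda.synchronize()

print("nvidia-smi used      :", nvidia_smi_used_mb(), "MB")
print("allocated by PyTorch :", torch.cuda.memory_allocated() // 2**20, "MB")
print("reserved by PyTorch  :", torch.cuda.memory_reserved() // 2**20, "MB")
```

In a fresh process, the gap between the first number and the other two should be roughly the CUDA context plus the kernels PyTorch loads, which is the part that differs between the two envs.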

I don’t know the in-depth details. I would just mention that initializing CUDA requires different amounts of memory depending on the version.
Note also that when they add more functionality (for example, complex number support), they need to add a bunch of headers and files for it.

So basically it will depend on the GPU you use and on the PyTorch/CUDA/cuDNN versions it was built against.
That said, I find 1912 MB excessive.
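
If it helps, a quick way to double-check which versions each conda env actually picks up is something like:

```python
import torch

# Print the versions the context size depends on, per env.
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)
print("cudnn :", torch.backends.cudnn.version())
print("gpu   :", torch.cuda.get_device_name(0))
```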