PyTorch 1.7 seems to consume more memory?

I was training a model on two GPUs with 11 GB of memory each, and the memory consumption was about 10.3 GB per GPU as observed with nvidia-smi. After I upgraded PyTorch to 1.7, this project could no longer run and hit a CUDA out-of-memory error. So does PyTorch 1.7 need more memory? For reference, my CUDA version is 10.2.

Are you seeing this for a variety of models or a specific model in particular?

That’s strange. I just tested on another two GPUs and everything is OK. The two GPUs on which I hit the problem are currently busy training with PyTorch 1.4. After that training finishes, I’ll test whether the problem was a one-off.

I noticed too that with 1.7 I can run only half as many threads as I could with 1.6.
I’m curious about any explanation for that. Could the default precision on CUDA 11 be float64 or something like that?

What kind of threads do you mean?

No, that’s not the case.
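For anyone wanting to verify this themselves, a quick sanity check: PyTorch's default floating-point dtype is float32, independent of the PyTorch or CUDA version, and tensors created without an explicit dtype inherit it. (The CUDA line below is commented out since it requires a GPU.)

```python
import torch

# Default floating-point dtype; this is float32 regardless of CUDA version.
print(torch.get_default_dtype())  # torch.float32

# A tensor created without an explicit dtype uses the default:
x = torch.randn(3)
print(x.dtype)  # torch.float32

# The same holds for CUDA tensors (requires a GPU, hence commented out):
# y = torch.randn(3, device="cuda")
# print(y.dtype)  # torch.float32
```

So a doubling of memory use between 1.6 and 1.7 would have to come from something other than the default precision, e.g. differences in the caching allocator or in cuDNN workspace sizes.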