Why does the 2080 Ti use more memory than the 1080 Ti?


My code:

import time
from torchvision.models import vgg16

model = vgg16().cuda()  # instantiate VGG16 and move it onto the GPU
time.sleep(1000)        # keep the process alive so nvidia-smi can be checked

It uses only 895 MB of memory on the 1080 Ti, but 1369 MB on the 2080 Ti.
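
For context, the VGG16 weights themselves take the same amount of memory on either card; a quick sketch (assuming the default float32 parameters) to compute that baseline:

from torchvision.models import vgg16

model = vgg16()  # randomly initialized weights; the size is the same either way
# Sum the storage of every parameter (default float32 -> 4 bytes each).
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"VGG16 parameter memory: {param_bytes / 1024**2:.0f} MB")  # roughly 528 MB

That puts the model itself at roughly 528 MB in both environments, so the gap should come from per-process overhead rather than from the model.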

@ptrblck Do you know the reason?

1080 Ti environment:
PyTorch version: 1.1
CUDA version: 9.0
Ubuntu version: 18.04

2080 Ti environment:
PyTorch version: 1.1
CUDA version: 10.1
Ubuntu version: 16.04
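
These versions can also be confirmed from inside PyTorch (a quick sketch using standard torch attributes):

import torch

print(torch.__version__)               # PyTorch version, e.g. 1.1.0
print(torch.version.cuda)              # CUDA version PyTorch was built with
print(torch.backends.cudnn.version())  # cuDNN version
print(torch.cuda.get_device_name(0))   # e.g. 'GeForce RTX 2080 Ti'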

It’s probably memory reserved by the CUDA driver, which seems to increase with newer cards. NVIDIA doesn’t document why, but it may be related to changes in the instruction set on newer architectures.

You can look at how much is reserved by the driver by doing a minimal allocation, which creates a CUDA context:

import time
import torch

torch.randn(1).cuda()  # the first CUDA operation creates the context
time.sleep(1000)       # keep the process alive so nvidia-smi can be checked
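
To separate PyTorch’s own tensor allocations from that context overhead, you can compare torch.cuda.memory_allocated() with the nvidia-smi reading; a minimal sketch:

import torch
from torchvision.models import vgg16

model = vgg16().cuda()
torch.cuda.synchronize()

# Memory held by tensors that PyTorch itself allocated, in MB.
allocated_mb = torch.cuda.memory_allocated() / 1024**2
print(f"allocated by PyTorch: {allocated_mb:.0f} MB")

# nvidia-smi shows this amount plus caching-allocator slack plus the CUDA
# context, so the gap between the two numbers is the per-card overhead.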

It uses 357 MB of memory on the 1080 Ti and 471 MB on the 2080 Ti. Thank you.