I try to create two models, but they use the same GPU memory.
import torch

model1 = torch.jit.load(CHECKPOINT_DIR)
model2 = torch.jit.load(CHECKPOINT_DIR)

with torch.no_grad():
    im_gpu1 = im_in.cuda()
    output1 = model1(im_gpu1)
    im_gpu2 = im_in.cuda()
    output2 = model2(im_gpu2)
When I run one model, it uses 2 GB of GPU memory, and I want to create more models so that more GPU memory is used.
But when I run two models, it still uses only 2 GB; the second model doesn't take any extra GPU memory. Why?
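One thing worth checking: if the 2 GB figure comes from `nvidia-smi`, it includes the CUDA context plus memory that PyTorch's caching allocator has reserved but not necessarily handed out to tensors, so it can stay flat even when a second model's parameters are loaded. A hedged sketch of how one could compare the bytes actually allocated to tensors before and after loading each model (the `report_gpu_memory` helper and its tag strings are my own names, not from the original post):

```python
import torch

def report_gpu_memory(tag: str) -> int:
    """Print and return the bytes currently allocated to tensors on the
    default CUDA device; falls back to 0 when no GPU is present."""
    if not torch.cuda.is_available():
        print(f"{tag}: CUDA not available")
        return 0
    allocated = torch.cuda.memory_allocated()   # bytes in live tensors
    reserved = torch.cuda.memory_reserved()     # bytes held by the caching allocator
    print(f"{tag}: allocated={allocated} B, reserved={reserved} B")
    return allocated

# Usage sketch: call this before and after each torch.jit.load(...) and
# after each forward pass; if `allocated` grows after loading model2,
# the second model does occupy its own memory even though nvidia-smi
# reports the same total.
report_gpu_memory("baseline")
```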