GPU runs out of memory

Hi

import torch
from torch.utils.data import TensorDataset, DataLoader

batch_size = 100

train = TensorDataset(train_X, train_Y)
trainLoader = DataLoader(train, batch_size=batch_size, shuffle=True, num_workers=1)

model = model.cuda()

for i, data in enumerate(trainLoader):
    train_x, train_y = data
    train_x = train_x / 255.0   # scale the inputs to [0, 1]
    train_x = train_x.cuda()
    train_y = train_y.cuda()

    encoded, decoded = model(train_x)

I trained the model successfully using the code above.

After training is done, I try to get the decoded output from the trained model in the same manner, using the same code.

But the GPU runs out of memory only when computing the decoded output with the trained model.

I don't know why the GPU memory fills up at this step, although training with the same code ran successfully.

Are you using the same batch size to compute the new output?
If you don't need to train the model, wrap the code in a with torch.no_grad() block: it disables gradient tracking, so the intermediate activations needed for backpropagation are not stored, which saves memory.
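A minimal sketch of such an evaluation loop, assuming the same trainLoader and a model that returns (encoded, decoded) as in your snippet; the name decoded_batches is only illustrative:

import torch

model.eval()  # switch layers like dropout/batchnorm to eval behaviour, if any

decoded_batches = []
with torch.no_grad():  # no autograd graph is built, so activations are freed right away
    for data in trainLoader:
        x, _ = data
        x = (x / 255.0).cuda()
        encoded, decoded = model(x)
        # move each result to the CPU so the outputs don't accumulate in GPU memory
        decoded_batches.append(decoded.cpu())

decoded_all = torch.cat(decoded_batches)

Moving the outputs off the GPU inside the loop is also important if you keep the results, since storing every decoded batch on the device will grow memory usage batch after batch.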


You may be interested in this topic. There is some discussion of it there.
