Hi
import torch
from torch.utils.data import TensorDataset, DataLoader

batch_size = 100
train = TensorDataset(train_X, train_Y)   # train_X, train_Y are prepared earlier
trainLoader = DataLoader(train, batch_size=batch_size, shuffle=True, num_workers=1)
model = model.cuda()

for i, data in enumerate(trainLoader):
    train_x, train_y = data
    train_x = train_x.float() / 255       # scale pixel values to [0, 1]
    # train_x, train_y = Variable(train_x), Variable(train_y)  # not needed in PyTorch >= 0.4
    train_x = train_x.cuda()
    train_y = train_y.cuda()
    encoded, decoded = model(train_x)
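(The loss and optimizer steps are left out above for brevity; the full training step is along these lines, where criterion and optimizer are placeholders for my actual reconstruction loss and optimizer:)

import torch.nn as nn

criterion = nn.MSELoss()                                    # placeholder reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # placeholder optimizer

for i, data in enumerate(trainLoader):
    train_x, _ = data
    train_x = train_x.float().cuda() / 255
    encoded, decoded = model(train_x)
    loss = criterion(decoded, train_x)    # autoencoder reconstructs its own input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()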
I trained the model successfully with the code above. After training finished, I tried to get the decoded output from the trained model in the same manner, but the GPU runs out of memory only when computing the decoded output with the trained model. I don't understand why the GPU memory fills up at that point, even though the same code ran fine throughout training.
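For reference, the inference pass is written the same way as the training loop (a sketch; testLoader here is a placeholder for my actual test DataLoader):

for i, data in enumerate(testLoader):
    test_x, test_y = data
    test_x = test_x.float() / 255
    test_x = test_x.cuda()
    encoded, decoded = model(test_x)   # CUDA out of memory is raised here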