Memory problem after finishing one epoch?


#1

I want to write a loop for parameter tuning; here is my code, tune.py.

First, I fix one parameter set. While the mini-batch loop runs, memory usage looks normal. But after that, when I compute the accuracy on the full training set, memory increases very fast. So for now I put a break in my code. How do I release the memory after getting the prediction scores?

import numpy
import torch
from sklearn.metrics import accuracy_score
from sklearn.model_selection import ParameterGrid
##
##
##  Parameter
ParameterSet = {"Batch":[64], "Epoch":[10], "Optimizer":["Adadelta"], "LearnRate":[1e-3], "Loss":["Log"]}
ParameterSet = ParameterGrid(ParameterSet)
for ParameterIndex, OneParameter in enumerate(ParameterSet):
    if(ParameterIndex!=0):
        break
    ##
    ##
    ##  Torch data loader
    DataNumber = float(Train["LabelCode"].size()[0])
    Dataset    = torch.utils.data.TensorDataset(Train["Image"], Train["LabelCode"])
    DataLoader = torch.utils.data.DataLoader(Dataset, batch_size=OneParameter["Batch"], shuffle=False)
    ##
    ##
    ##  Load model
    from Python.Pytorch.Example.Model import I2C1FO as CreateModel
    Model = CreateModel()
    ##
    ##
    ##  Set loss
    if(OneParameter["Loss"]=="Log"):
        LossFunction = torch.nn.CrossEntropyLoss()
    ##
    ##
    ##  Set optimizer
    if(OneParameter["Optimizer"]=="Adadelta"):
        Optimizer = torch.optim.Adadelta(Model.parameters(), lr=OneParameter["LearnRate"], rho=0.9, eps=1e-6, weight_decay=0)
    ##
    ##
    ##  Epoch
    Epoch = {"Number":[], "Train":{"Loss":[],"Accuracy":[]}, "Valid":{"Loss":[], "Accuracy":[]}}
    for EpochIndex in range(OneParameter["Epoch"]):
        print(EpochIndex)
        Batch = {"Number":[], "MinTrain":{"Loss":[], "Accuracy":[]}, "Valid":{"Loss":[], "Accuracy":[]}}
        for BatchIndex, OneBatch in enumerate(DataLoader):

            ##
            ##
            ##  Inital gradient
            Optimizer.zero_grad()
            OneScore = Model(OneBatch[0])
            OneLoss = LossFunction(OneScore, OneBatch[1])
            _, OnePrediction = torch.max(OneScore, 1)
            OneAccuracy = accuracy_score(numpy.array(OneBatch[1]), numpy.array(OnePrediction))
            ##
            ##
            ##  Batch summary
            Batch["Number"].append(BatchIndex)
            Batch["MinTrain"]["Loss"].append(float(OneLoss))
            Batch["MinTrain"]["Accuracy"].append(OneAccuracy)
            ##
            ##
            ##  Update gradient
            OneLoss.backward()
            Optimizer.step()
            ##
            ##
            ##  Check on valid
            OneScore = Model(Valid["Image"])
            OneLoss = LossFunction(OneScore, Valid["LabelCode"])
            _, OnePrediction = torch.max(OneScore, 1)
            OneAccuracy = accuracy_score(Valid["LabelCode"], numpy.array(OnePrediction))
            ##
            ##
            ##  Valid summary
            Batch["Valid"]["Loss"].append(float(OneLoss))
            Batch["Valid"]["Accuracy"].append(OneAccuracy)
            pass
        print("Finished the batch loop; the problem starts after this")
        break
        ##
        ##
        ##  After finish batch
        ##
        ##
        ##  Check on train
        ##  !!!! Why does the memory keep increasing when this code runs repeatedly? !!!!
        ##  !!!! I want to save the prediction score and release the memory. !!!!
        OneScore = Model(Train["Image"])


        # OneLoss = LossFunction(OneScore, Train["LabelCode"])
        #  _, OnePrediction = torch.max(OneScore, 1)
        # OneAccuracy = accuracy_score(numpy.array(Train["LabelCode"]), numpy.array(OnePrediction))
        ##
        ##
        ##  Summary train
        # Epoch["Number"].append(EpochIndex)
        # Epoch["Train"]["Loss"].append(OneLoss)
        # Epoch["Train"]["Accuracy"].append(OneAccuracy)
        ##
        ##
        ##  Check on valid
        # OneScore = Model(Valid["Image"])
        # OneLoss = LossFunction(OneScore, Valid["LabelCode"])
        # _, OnePrediction = torch.max(OneScore, 1)
        # OneAccuracy = accuracy_score(numpy.array(Valid["LabelCode"]), numpy.array(OnePrediction))
        ##
        ##
        ##  Summary valid
        # Epoch["Valid"]["Loss"].append(OneLoss)
        # Epoch["Valid"]["Accuracy"].append(OneAccuracy)
        # print(OneAccuracy)
        pass
    pass

#2

When evaluating your model, you need to wrap your code in with torch.no_grad() as explained here.
In the commented code, the variable OneLoss is still attached to the computation graph. If you append it to Epoch["Train"]["Loss"], a new graph is kept alive after every iteration. I guess that is probably the reason for the increasing memory.
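A minimal sketch of both fixes together: run the evaluation inside torch.no_grad() so no graph is built, and store plain Python floats (via .item()) instead of graph-attached tensors. The model, data, and variable names below are stand-ins for illustration, not the objects from your tune.py:

import torch

Model = torch.nn.Linear(4, 3)           ##  stand-in for the real model
LossFunction = torch.nn.CrossEntropyLoss()
Image = torch.randn(8, 4)               ##  stand-in for Train["Image"]
LabelCode = torch.randint(0, 3, (8,))   ##  stand-in for Train["LabelCode"]

Model.eval()
with torch.no_grad():                   ##  no graph is built, so nothing accumulates
    OneScore = Model(Image)
    OneLoss = LossFunction(OneScore, LabelCode)
    _, OnePrediction = torch.max(OneScore, 1)
    OneAccuracy = (OnePrediction == LabelCode).float().mean().item()

Epoch = {"Train": {"Loss": [], "Accuracy": []}}
Epoch["Train"]["Loss"].append(OneLoss.item())  ##  .item() gives a float, not a tensor
Epoch["Train"]["Accuracy"].append(OneAccuracy)

With .item() the stored value is a plain number, so the tensor (and any graph it would otherwise hold) can be freed as soon as it goes out of scope.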

Best


#3

Thank you for your answer, I will try!