Using a scheduler leads to out of memory

My GPU memory usage keeps increasing inside the loop in which the scheduler is defined:

from torch.optim import lr_scheduler
import torch.nn as nn
import torch


class network(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Sequential(
            nn.Linear(4096, 2048),
            nn.ReLU(),
            nn.Linear(2048, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.ReLU(),
        )

    def forward(self, ftr):
        # The forward pass is irrelevant for reproducing the leak.
        pass


device = torch.device('cuda:0')

# Re-create the model, optimizer, and scheduler in every iteration.
for i in range(100**100):
    net = network().to(device)
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9,
                                weight_decay=0.9)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.9)

    for epoch in range(2):
        optimizer.step()
        scheduler.step()


The code I actually use is more complex, but this simplified version still shows the problem. On the first iteration my GPU memory usage is low, but as the loop goes on the code needs more and more GPU memory. I don't know why this happens, but when I delete the lr_scheduler, the memory growth disappears. Has anyone seen this issue? I really need help.
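To make the growth visible, the allocator counters can be printed at the end of each outer iteration. Here is a minimal sketch; the log_gpu_memory helper is illustrative and not part of the original code, and note that torch.cuda.memory_allocated reports only tensors held by PyTorch's caching allocator, not the total usage shown by nvidia-smi:

import torch

device = torch.device('cuda:0')

def log_gpu_memory(tag):
    # Report the tensors currently held by PyTorch's caching allocator.
    mb = torch.cuda.memory_allocated(device) / 1024 ** 2
    print(f'{tag}: {mb:.1f} MiB allocated')

Calling log_gpu_memory(f'iteration {i}') inside the outer loop should show the numbers climbing on an affected build and staying flat on a fixed one.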

Thanks for the executable code snippet!
I failed to reproduce the issue with 1.4.0.dev20191109, as my memory usage stays constant.
Which PyTorch version are you using?
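You can check the installed version directly:

import torch
print(torch.__version__)  # e.g. '1.2.0'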

I use PyTorch 1.2.0, which I downloaded from https://pytorch.org/.

This version may be clearer:

from torch.optim import lr_scheduler
import torch.nn as nn
import torch


class network(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Sequential(
            nn.Linear(4096, 2048),
            nn.ReLU(),
            nn.Linear(2048, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.ReLU(),
        )

    def forward(self, ftr):
        # The forward pass is irrelevant for reproducing the leak.
        pass


device = torch.device('cuda:0')

for i in range(100**100):
    print(i)
    net = network().to(device)
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9,
                                weight_decay=0.9)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.9)

    # Many scheduler steps per model so the growth is easier to observe.
    for epoch in range(1000 * 100):
        optimizer.step()
        scheduler.step()


Could you update to the latest stable release (1.3.1) and rerun the code?

Yes, it is solved. Thank you very much.
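A closing note for readers who cannot upgrade: the behavior above is consistent with the scheduler and optimizer holding references to each other, which would delay garbage collection of the CUDA tensors they keep alive. That is an assumption, not something confirmed in this thread. Under that assumption, a sketch of a per-iteration cleanup (nn.Linear stands in for the full model) might look like this:

import gc

import torch
import torch.nn as nn
from torch.optim import lr_scheduler

device = torch.device('cuda:0')

for i in range(10):
    net = nn.Linear(4096, 2048).to(device)  # stand-in for the full model
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.9)

    for epoch in range(2):
        optimizer.step()
        scheduler.step()

    # Drop the references and force a collection pass so the old
    # optimizer/scheduler pair (and the CUDA tensors they keep alive)
    # can be freed before the next iteration.
    del scheduler, optimizer, net
    gc.collect()
    torch.cuda.empty_cache()

Whether this actually avoids the growth on 1.2.0 has not been verified here; upgrading, as above, is the confirmed fix.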