Loss.backward() causes CPU memory leak

Hello, and thank you for PyTorch.

I am studying the beginner tutorials. When I run cifar10_tutorial.py from Deep Learning with PyTorch: A 60 Minute Blitz, I find a memory leak in loss.backward().

To run on the GPU and train a larger network, I revised the tutorial code as below.

original code--------------------------------------------------------------------------------------

from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
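The snippets above and below assume trainloader from the data-loading section of the same tutorial; for completeness, a minimal sketch of that setup (values as in the 60 Minute Blitz):

import torch
import torchvision
import torchvision.transforms as transforms

# normalize CIFAR-10 images from [0, 1] to [-1, 1], as in the tutorial
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)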

revised code---------------------------------------------------------------------------------------------

from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 128, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(128, 128, 5)
        self.fc1 = nn.Linear(128 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 128 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net().cuda()

import torch.optim as optim

criterion = nn.CrossEntropyLoss().cuda()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # wrap them in Variable
        inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
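As an aside on the statistics line (not the leak I am reporting): running_loss has to accumulate a plain Python number rather than the loss Variable itself, otherwise every iteration's autograd graph stays reachable and memory grows without bound. loss.data[0] does that on PyTorch 0.3 and earlier; on 0.4 and later the equivalent is loss.item(). A sketch of the two alternatives:

# PyTorch <= 0.3.x: index into the underlying tensor to get a Python float
running_loss += loss.data[0]

# PyTorch >= 0.4 (Variable merged into Tensor): .item() returns a Python
# float and keeps no reference to the autograd graph
running_loss += loss.item()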

When I run the above code, I see a memory leak of about 70 MB between one epoch and the next.
If I delete loss.backward(), the memory leak doesn't occur.
Also, if I add torch.backends.cudnn.enabled = False and keep loss.backward(), the memory leak doesn't occur, but training is slow.
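For reference, a minimal way to log the per-epoch growth (a sketch, not code from the tutorial; it assumes the psutil package is installed, and log_rss is a name I made up):

import os
import psutil

process = psutil.Process(os.getpid())

def log_rss(tag):
    # print the resident set size (CPU memory) of this process, in MB
    print('%s: %.1f MB' % (tag, process.memory_info().rss / 1024 ** 2))

Calling log_rss('epoch %d' % epoch) at the top of the epoch loop shows the resident memory jumping from one epoch to the next.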



Hello. This was posted three years ago, but I wonder if you ever solved the problem…!

@bearbear
Could you try to translate your posts before posting, please, as this would make it easier to continue the discussion? 🙂