Memory Leak in LSTM

Hi,

The following script consumes about 1 GB of memory in 100 iterations, and memory usage keeps increasing until it runs out of memory. I am using PyTorch 0.3.1 on Ubuntu 16.04.4 LTS (xenial).

import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

class View(nn.Module):
    """Reshape a tensor to the given shape inside a module pipeline."""
    def __init__(self):
        super(View, self).__init__()

    def forward(self, input, shape):
        return input.view(shape)

class LSTMAcousticModel(nn.Module):
    """LSTM acoustic model: BatchNorm1d -> LSTM -> dropout -> linear output."""
    def __init__(self, input_size=1800, lstm_hidden_size=1024, output_size=1945, nb_layers=3, context=True):
        super(LSTMAcousticModel, self).__init__()
        self._hidden_size = lstm_hidden_size
        self._hidden_layers = nb_layers
        self._context = context
        self._input_size = input_size
        self._output_size = output_size
        self.bn = nn.BatchNorm1d(num_features=self._input_size)
        self.View = View()
        self.LSTM = nn.LSTM(input_size=self._input_size, hidden_size=self._hidden_size,
                            num_layers=self._hidden_layers, dropout=0.5, bias=True)
        self.dropout = nn.Dropout(p=0.5)
        self.hidden = nn.Linear(in_features=self._hidden_size, out_features=self._output_size, bias=True)

    def forward(self, x):
        x = self.bn(x)
        # add a sequence dimension: (seq_len, batch, features) for the LSTM
        x = self.View(input=x, shape=(-1, x.data.size(0), x.data.size(1)))
        x, (_, _) = self.LSTM(x)
        x = self.dropout(self.hidden(x))
        return x
# args and train_loader are defined elsewhere in my script (not shown)
model = LSTMAcousticModel(input_size=1800, lstm_hidden_size=1024, output_size=1945, nb_layers=3, context=True)
cross_entropy_loss = nn.CrossEntropyLoss()
optimizer = optim.Adam(
    list(model.parameters()),
    lr=args.lr,
    betas=(args.beta_1, args.beta_2))
best_loss = 100000
patience_count = 0
start_iter = 0

for epochs in range(args.epochs):
    running_loss = 0
    for i, data in enumerate(train_loader, start=start_iter):
        inputs, targets, input_percentages, target_sizes = data
        inputs = inputs.view(inputs.size(0), inputs.size(2), -1)
        if args.cuda:
            inputs, targets = inputs.cuda(), targets.cuda()
            model.cuda()
            cross_entropy_loss.cuda()
            target_sizes = target_sizes.cuda()
        inputs, targets, target_sizes = Variable(inputs), Variable(targets), Variable(target_sizes)
        optimizer.zero_grad()
        output = model(inputs)
        loss = cross_entropy_loss(input=output.view(output.data.size(0) * output.data.size(1), -1),
                                  target=targets.view(-1))
        loss.backward()
        optimizer.step()

Is there something wrong in my code? I am not able to figure it out. Kindly help. Sorry for the bad formatting; I could not figure out how to format code on this page.

I don’t know if this is the cause of your issue, but I noticed you have model.cuda() inside the for loop, and the same for cross_entropy_loss. You can send the model and the loss to the GPU once, outside the loop. You can also set the loss to zero at the beginning of the second for loop. Hope this helps.
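Something along these lines (just a rough sketch reusing the names from your snippet, assuming args.cuda is known before training starts):

if args.cuda:
    # move the model parameters and the loss module to the GPU once
    model.cuda()
    cross_entropy_loss.cuda()

for epochs in range(args.epochs):
    running_loss = 0
    for i, data in enumerate(train_loader, start=start_iter):
        inputs, targets, input_percentages, target_sizes = data
        inputs = inputs.view(inputs.size(0), inputs.size(2), -1)
        if args.cuda:
            # only the per-batch tensors need to be moved inside the loop
            inputs, targets = inputs.cuda(), targets.cuda()
            target_sizes = target_sizes.cuda()
        # ... rest of the training step unchanged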

I moved model.cuda() and cross_entropy_loss.cuda() outside the for loop, but I am still getting the out of memory error. The exact error message is posted below.

RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58

Can you try loss = 0 at the beginning of the second for loop? Also, you have a variable called running_loss; are you accumulating loss into that variable? If yes, how?

I tried with loss = 0 at the beginning of every loop. I am accumulating the loss by adding the minibatch loss to the running_loss Variable. My current code is not able to handle a large batch size (40 here); it works fine up to a batch size of 20. My model is 3 LSTM layers with 1024 units, 1945 output units, and 1800 input units. My system is an Nvidia 1080Ti GPU with 11 GB of GPU memory and 32 GB of RAM. I think the model is not too big to handle a batch size of 40, so there must be something sub-optimal in the code.

That should be fine. I use batch sizes of 128 with a 4-layer LSTM and a larger model. My guess is you might want to accumulate loss.data into running_loss, instead of loss. Maybe you are already doing that?
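For example, something like this (a minimal sketch; on 0.3.x loss is a Variable, so pulling out the underlying value avoids retaining the graph):

# accumulating the Variable itself keeps the autograd graph of every
# minibatch alive, which makes memory grow over the epoch
# running_loss += loss              # <- leaks the graphs

running_loss += loss.data[0]        # accumulate a plain Python float instead
                                    # (loss.item() on PyTorch >= 0.4)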