Getting out-of-memory error for epochs greater than 1

My code works fine for num_epochs=1, but when I increase the number of epochs it gives an out-of-memory error. Here is the code:

import torch
from torch.autograd import Variable

for epoch in range(num_epochs):
    for x, y in train_loader:
        real_data = Variable(y.type(dtype))
        fake_data = g(Variable(x.type(dtype))).detach()
        d_error, d_pred_real, d_pred_fake = train_d(d_optim, real_data, fake_data)

        fake_data = g(Variable(x.type(dtype)))
        g_error = train_g(g_optim, fake_data, real_data)
        logger.log(d_error, g_error, epoch, n_batch, num_batches)
    if c % 100 == 0:
        logger.display_status(epoch, num_epochs, n_batch, num_batches, d_error, g_error, d_pred_real, d_pred_fake)
    n_batch = c + 1
    c = c + 1
    if c > 4000:
        break
    del real_data, fake_data, x, y, d_error, d_pred_real, d_pred_fake
    torch.cuda.empty_cache()

RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCStorage.cu:58

Do d_error and g_error require gradients?
Check with print(d_error.requires_grad).

If so, is your memory continuously increasing? Try using X_error.detach() or X_error.data when logging the information. Maybe the logger holds a reference to the tensors.
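For illustration, a rough sketch of that check, reusing the names from your snippet (logger, d_error, g_error, n_batch, etc.); the memory print is just one way to see whether usage grows from iteration to iteration:

import torch

# inside the batch loop, after train_d / train_g have run
print(d_error.requires_grad, g_error.requires_grad)              # True means the autograd graph is still attached
print(torch.cuda.memory_allocated() / 1024**2, "MiB allocated")  # should stay roughly flat across epochs

# log detached copies so the logger cannot keep the autograd graph alive
logger.log(d_error.detach(), g_error.detach(), epoch, n_batch, num_batches)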

Yes, it requires gradients. I tried your way and also tried without logging the info, but I am still getting out of memory.

You can try logger.log(d_error.item(), g_error.item(), epoch, n_batch, num_batches) (I'm assuming you are using v0.4, in which case you also don’t need to use Variable).
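To make that concrete, here is a rough sketch of how the loop could look on 0.4 with that change. train_d, train_g, logger, dtype, and g are taken from your snippet; the enumerate-based batch counter is just an assumption standing in for your c / n_batch bookkeeping:

for epoch in range(num_epochs):
    for n_batch, (x, y) in enumerate(train_loader):
        real_data = y.type(dtype)              # no Variable wrapper needed on 0.4
        fake_data = g(x.type(dtype)).detach()  # detach so the discriminator step does not backprop into g
        d_error, d_pred_real, d_pred_fake = train_d(d_optim, real_data, fake_data)

        fake_data = g(x.type(dtype))
        g_error = train_g(g_optim, fake_data, real_data)

        # .item() converts the scalar losses to plain Python floats, so the logger
        # cannot hold on to the autograd graph (and its GPU memory) between iterations
        logger.log(d_error.item(), g_error.item(), epoch, n_batch, num_batches)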