Out-of-memory error while training model

I have already trained an LSTM model for ~30 epochs.
But when I add a CRF decode layer, training fails with an out-of-memory error at epoch 6 or 7.
I think that makes sense: the model has many parameters, so I want to release some unused ones.
What should I do? I don't know which tensors are still held in memory.
Should I call tensor.to('cuda') on everything, or clone()?
Does PyTorch have an automatic function to release tensors?
Thanks a lot

You can try to reduce the number of worker processes for the DataLoader (the num_workers argument).
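A minimal sketch of that suggestion, assuming the standard torch.utils.data.DataLoader is being used (the dataset here is a made-up placeholder): each worker is a separate process holding its own copy of the dataset in host RAM, so lowering num_workers reduces system memory use.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset (assumption): 100 random samples of 10 features each
dataset = TensorDataset(torch.randn(100, 10), torch.randn(100, 1))

# num_workers=0 loads batches in the main process -> lowest RAM footprint
loader = DataLoader(dataset, batch_size=16, num_workers=0)

for xb, yb in loader:
    pass  # the training step would go here

# With 100 samples and batch_size=16, the last (partial) batch has 4 samples
print(xb.shape)  # torch.Size([4, 10])
```

If the memory usage is fine with num_workers=0 but grows with more workers, the dataset copies in the worker processes are the likely culprit.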

Are you seeing the out-of-memory error on the GPU, or are you running out of system RAM?
In the former case, could you check whether the memory usage is increasing in each iteration or epoch via nvidia-smi?
If so, make sure you are not storing tensors that are still attached to the computation graph, e.g. by appending the loss to a list without calling loss.detach() first.
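A minimal sketch of that leak pattern, using a hypothetical tiny model to keep it self-contained: appending the raw loss tensor keeps the entire computation graph of that iteration alive, so memory grows every step; storing a detached value (or a plain Python float via loss.item()) lets the graph be freed.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)   # tiny stand-in model (assumption)
criterion = nn.MSELoss()
losses = []

for _ in range(3):
    x = torch.randn(4, 10)
    y = torch.randn(4, 1)
    loss = criterion(model(x), y)
    loss.backward()
    # BAD:  losses.append(loss)        -> keeps the graph alive, memory leaks
    # GOOD: store a plain Python float instead
    losses.append(loss.item())
    model.zero_grad()

print(all(isinstance(l, float) for l in losses))  # True
```

The same applies to any tensor kept across iterations for logging (accuracies, intermediate activations): detach it or convert it to a Python number before storing.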

Thanks for your reply. It is actually an error in system RAM. How can I solve this? I checked the memory usage via nvidia-smi, but the GPU memory usage is very low, even though I have already moved the model to CUDA.

I have already reduced the number of workers, and the original model works properly with that setting.