Out of memory when executing loss.backward()

Hello everyone!
I am new to PyTorch and tried to run a network for a semantic segmentation task on Pascal VOC 2012 and its augmented dataset. The first forward pass completed, but there seemed to be a memory leak when the program moved on to back propagation.

Traceback (most recent call last):
  File "train.py", line 175, in <module>
    loss.backward()
  File "F:\Anaconda3\lib\site-packages\torch\tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "F:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA error: out of memory

My question is: how much additional memory is occupied during the back propagation process? Will the usage double?
Thanks!

This might not be a memory leak; your GPU might simply not have enough memory to compute the backward pass. How much memory the backward pass needs really depends on the model, since the activations saved during the forward pass are kept around to compute the gradients.
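
If you want to see how much extra memory the backward pass needs in your case, you can measure it directly with torch.cuda.memory_allocated() and torch.cuda.max_memory_allocated(). Here is a minimal sketch; the small convolutional model, batch size, and input resolution are placeholders rather than your actual network, so substitute your own model and data:

import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device("cuda")

# Placeholder segmentation-style model (21 classes as in Pascal VOC).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 21, 1),
).to(device)

# Placeholder batch: 4 images of 3x256x256 with per-pixel class labels.
x = torch.randn(4, 3, 256, 256, device=device)
target = torch.randint(0, 21, (4, 256, 256), device=device)

out = model(x)
loss = F.cross_entropy(out, target)
torch.cuda.synchronize()
print("after forward:  %.1f MB allocated, %.1f MB peak"
      % (torch.cuda.memory_allocated() / 2**20,
         torch.cuda.max_memory_allocated() / 2**20))

loss.backward()
torch.cuda.synchronize()
print("after backward: %.1f MB allocated, %.1f MB peak"
      % (torch.cuda.memory_allocated() / 2**20,
         torch.cuda.max_memory_allocated() / 2**20))

The difference between the two peak readings gives a rough estimate of the extra memory the backward pass needs for your particular model; in practice it can be well above the forward-only usage, but it is not a fixed factor such as 2x.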

Thank you for your answer! I use a shared GPU in a lab. Since memory usage varies between the forward and backward passes, I am concerned that labmates will start other jobs on the same GPU while its memory usage is low, and then my job will run out of memory when it reaches the backward pass. Is there any solution for this?