I am new to PyTorch and tried to train a network for a semantic segmentation task on Pascal VOC 2012 and its augmented dataset. The first forward pass completed, but as soon as backpropagation started I got a CUDA out-of-memory error:
```
Traceback (most recent call last):
  File "train.py", line 175, in <module>
    loss.backward()
  File "F:\Anaconda3\lib\site-packages\torch\tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "F:\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA error: out of memory
```
My question is: how much additional memory does backpropagation occupy on top of what the forward pass already uses? Will it double the usage?
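For reference, here is a minimal sketch of how one could measure this directly (assuming a recent PyTorch build with CUDA available; the toy model and tensor shapes below are placeholders, not my actual segmentation network):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real segmentation network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 21, 3, padding=1),
).cuda()
criterion = nn.CrossEntropyLoss()

# Dummy batch: 21 classes as in Pascal VOC (20 objects + background).
x = torch.randn(4, 3, 256, 256, device="cuda")
target = torch.randint(0, 21, (4, 256, 256), device="cuda")

torch.cuda.reset_peak_memory_stats()

out = model(x)                 # forward pass stores activations for backward
loss = criterion(out, target)
print("after forward:  %.1f MiB" % (torch.cuda.memory_allocated() / 2**20))

loss.backward()                # backward allocates gradient buffers
print("after backward: %.1f MiB" % (torch.cuda.memory_allocated() / 2**20))
print("peak:           %.1f MiB" % (torch.cuda.max_memory_allocated() / 2**20))
```

Comparing the "after forward" and "peak" numbers should show how much extra memory the backward pass needs for a given model and batch size.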