Hi everyone!
I am new to PyTorch and I constantly run into memory problems.
- Why does my data loader (for training) eat so much memory?
- How should I manage memory when I use a data loader for training/validation?
- Is it necessary to set “volatile” on inputs and targets, and will it help?
- Should I call `torch.cuda.empty_cache()` between two epochs? Or anything else?
- I noticed that the volatile flag has been removed in master. Why?
Thanks in advance!