Hi. I have a problem with memory management. In 0.3.1,
empty_cache worked very well for my code, but now it no longer works efficiently. My code outline is as follows:
```python
for dataset in datasets:
    T.cuda.empty_cache()
    net = MyNet()
    while epoch < n_epochs:
        # training goes here
```
The memory usage still increases slightly after each iteration. Since the number of datasets is large, I think the memory will blow up at some point. Please have a look at the problem.
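In case it helps, here is a minimal pure-Python sketch of the loop (with a hypothetical toy `MyNet` stand-in, since the real model and datasets are not shown). Under CPython's reference counting, it suggests the previous net object itself is released as soon as `net` is rebound, so the growth presumably comes from other references that survive across iterations:

```python
import gc
import weakref

class MyNet:
    # hypothetical stand-in for the real model; just holds some memory
    def __init__(self):
        self.buf = [0.0] * 1000

refs = []
for dataset in range(3):  # stand-in for `for dataset in datasets:`
    net = MyNet()  # rebinding drops the reference to the previous net
    refs.append(weakref.ref(net))

gc.collect()
# check which nets are still alive after the loop
alive = [r() is not None for r in refs]
print(alive)  # → [False, False, True]
```

So only the most recent net stays alive; the earlier ones are collected when `net` is rebound.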