How to share GPU memory between two large networks?

Hi. I am trying to train two large networks (net1 and net2) alternately: train net1 for one epoch, then net2 for one epoch, and repeat. Is there a way to free the GPU memory a network consumes once its training epoch finishes?
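
For reference, here is a minimal sketch of the alternating schedule I mean, assuming PyTorch. The tiny `nn.Linear` models and the random-data loop are placeholders for the real networks and data loader; moving the finished network to the CPU with `.to("cpu")` and then calling `torch.cuda.empty_cache()` is one candidate approach I have seen, but I am not sure it is the right way:

```python
import torch
import torch.nn as nn

# Falls back to CPU so the sketch runs on any machine.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net1 = nn.Linear(512, 512)  # stand-ins for the two large networks
net2 = nn.Linear(512, 512)
opt1 = torch.optim.SGD(net1.parameters(), lr=0.01)
opt2 = torch.optim.SGD(net2.parameters(), lr=0.01)

def train_one_epoch(net, opt):
    net.to(device)  # move the active network onto the GPU
    net.train()
    for _ in range(10):  # stand-in for iterating a real data loader
        x = torch.randn(32, 512, device=device)
        loss = net(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Candidate cleanup step: park the idle network in host RAM and
    # ask the caching allocator to return unused blocks to the driver.
    net.to("cpu")
    torch.cuda.empty_cache()  # no-op on CPU-only builds
    return loss.item()

for epoch in range(2):
    loss1 = train_one_epoch(net1, opt1)
    loss2 = train_one_epoch(net2, opt2)
```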