Running out of memory while running a model

Right. I have run into something similar, but while doing inference on GPU.
Link to my post: Gpu memory gets accumulated during consecutive forward passes
I haven't found a solution yet.
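
For anyone landing on this thread: below is a minimal sketch (assuming a PyTorch model; the model and input shapes are made up for illustration) of the usual first things to rule out when memory grows across consecutive forward passes: running inference under torch.no_grad() and not keeping references to GPU output tensors alive across iterations. This is not a fix I have confirmed for my case, just the common checks.

```python
import torch
import torch.nn as nn

# Hypothetical model and shapes, only for illustration.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).cuda()
model.eval()  # disable dropout / batch-norm updates for inference

results = []
with torch.no_grad():  # without this, each forward pass keeps its autograd graph alive
    for step in range(100):
        x = torch.randn(64, 1024, device="cuda")
        out = model(x)
        # Moving outputs off the GPU (or at least detaching them when not
        # under no_grad) avoids holding every iteration's tensor in GPU memory.
        results.append(out.cpu())
        if step % 10 == 0:
            print(step, torch.cuda.memory_allocated() / 1e6, "MB allocated")
```

If memory still climbs with a pattern like this, the leak is probably elsewhere (e.g. something cached inside the model or the data pipeline), which is roughly where I am stuck.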