How do I apply Gradient accumulation in GANs?

I know this kind of question has been asked here once or twice before, but I still don't understand how to apply gradient accumulation when training a generator and a discriminator alternately, where I first update the generator and then, with the same batch of data, update the discriminator:

# Train Generator

g_loss = Criterion(discriminator(fake_data), real_label)

# Train Discriminator

d_loss = Criterion(discriminator(fake_data), fake_label) + Criterion(discriminator(real_data), real_label)

Could someone provide a code example of how to implement this? Thank you in advance.

I'm not an expert, but I wonder why you would want to accumulate gradients at all?
I've learned that the gradients need to be zeroed after each optimizer step.

Sometimes we need a large batch size, but there isn't enough GPU memory, so we accumulate gradients over several smaller batches instead.

I am fully aware of the purpose of gradient accumulation, but I'm not sure how to implement it in my situation.
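Here is a minimal sketch of one way to do it (the tiny models, Adam optimizers, BCE criterion, and dummy `dataloader` are just placeholders for your own setup). The idea: divide each loss by `accum_steps` so the accumulated gradients average over the small batches, call `backward()` every mini-batch, and only `step()`/`zero_grad()` every `accum_steps` batches. Two GAN-specific details matter: `fake_data` must be detached for the discriminator loss, and the discriminator's parameters should be frozen during the generator's backward pass, otherwise `g_loss.backward()` would deposit gradients into the discriminator's accumulation buffers between its steps.

```python
import torch
import torch.nn as nn

accum_steps = 4  # number of mini-batches to accumulate before each optimizer step

# Stand-ins for your real models and data
generator = nn.Linear(8, 8)
discriminator = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
criterion = nn.BCELoss()
dataloader = [torch.randn(16, 8) for _ in range(8)]  # dummy batches

g_opt.zero_grad()
d_opt.zero_grad()
for i, real_data in enumerate(dataloader):
    batch = real_data.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)
    fake_data = generator(torch.randn(batch, 8))

    # --- generator update: freeze D so g_loss.backward() does not
    # pollute the discriminator's accumulated gradients ---
    for p in discriminator.parameters():
        p.requires_grad_(False)
    g_loss = criterion(discriminator(fake_data), real_label) / accum_steps
    g_loss.backward()
    for p in discriminator.parameters():
        p.requires_grad_(True)

    # --- discriminator update: detach fake_data so no gradient
    # flows back into the generator here ---
    d_loss = (criterion(discriminator(fake_data.detach()), fake_label)
              + criterion(discriminator(real_data), real_label)) / accum_steps
    d_loss.backward()

    # step and reset only every accum_steps mini-batches
    if (i + 1) % accum_steps == 0:
        g_opt.step()
        g_opt.zero_grad()
        d_opt.step()
        d_opt.zero_grad()
```

An alternative to the `requires_grad_` toggling is to update the discriminator first and zero its gradients immediately afterwards, but with accumulation that gets awkward, since you must not zero between accumulated batches; freezing the parameters during the other network's backward pass is the cleaner pattern here.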