How do I apply gradient accumulation in GANs?

I am aware that similar questions have been answered here once or twice, but I still don't understand how to apply gradient accumulation while training a generator and a discriminator in an alternating fashion, where I first update the generator and then, with the same batch of data, update the discriminator:

# Train Generator

fake_data = generator(z)  # z: a batch of latent noise
optimizer_G.zero_grad()
g_loss = Criterion(discriminator(fake_data), real_label)
g_loss.backward()
optimizer_G.step()

# Train Discriminator

optimizer_D.zero_grad()
# detach() so the discriminator loss doesn't backprop through the generator
d_loss = Criterion(discriminator(fake_data.detach()), fake_label) + Criterion(discriminator(real_data), real_label)
d_loss.backward()
optimizer_D.step()

Could someone provide a code example of how to implement this? Thank you in advance.

I am not an expert, but I wonder why you would want to accumulate gradients?
I've learned that the gradients need to be zeroed after each batch is done.

Sometimes we need a large batch size, but there isn't enough GPU memory.
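
For example (a minimal non-GAN sketch of the idea; `model`, `loader`, `accum_steps`, and the toy setup are just placeholder names I made up):

import torch
import torch.nn as nn

# Toy setup purely so the sketch runs; all names here are placeholders.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(64, 10), torch.randint(0, 2, (64,))) for _ in range(8)]

accum_steps = 4  # 4 batches of 64 behave like one effective batch of 256

optimizer.zero_grad()
for i, (x, y) in enumerate(loader):
    loss = criterion(model(x), y)
    (loss / accum_steps).backward()  # gradients add up in .grad across batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()       # one update using the averaged gradient
        optimizer.zero_grad()  # zero only after stepping, not every batch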

I am fully aware of the purpose of gradient accumulation, but I'm not sure how to implement it in my situation.
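
Here is one way to do it, as a minimal sketch based on your snippet. I'm assuming `generator`, `discriminator`, `Criterion`, `optimizer_G`/`optimizer_D`, and `real_label`/`fake_label` are defined as in your code; `dataloader`, `latent_dim`, and `accum_steps` are placeholder names. The idea: scale each loss by `1/accum_steps`, call `backward()` every batch so gradients pile up in `.grad`, and only call `step()` plus `zero_grad()` once every `accum_steps` batches. The GAN-specific wrinkle is that `g_loss.backward()` also deposits gradients into the discriminator's parameters, which your original loop wipes out with `optimizer_D.zero_grad()` every iteration; since accumulation defers that zeroing, you need to freeze the discriminator's parameters during the generator's backward pass:

import torch

accum_steps = 4  # placeholder: effective batch = accum_steps * actual batch size

optimizer_G.zero_grad()
optimizer_D.zero_grad()

for i, real_data in enumerate(dataloader):
    z = torch.randn(real_data.size(0), latent_dim, device=real_data.device)
    fake_data = generator(z)

    # --- Generator: accumulate gradients ---
    # Freeze D's parameters so g_loss.backward() doesn't add to D's
    # accumulated gradients (the per-iteration zero_grad() that used to
    # clean this up is now deferred).
    for p in discriminator.parameters():
        p.requires_grad_(False)
    g_loss = Criterion(discriminator(fake_data), real_label)
    (g_loss / accum_steps).backward()  # scale so the accumulated sum averages correctly
    for p in discriminator.parameters():
        p.requires_grad_(True)

    # --- Discriminator: accumulate gradients ---
    # detach() so we don't backprop into (or reuse the freed graph of) the generator
    d_loss = Criterion(discriminator(fake_data.detach()), fake_label) + Criterion(discriminator(real_data), real_label)
    (d_loss / accum_steps).backward()

    # --- Update both models once every accum_steps batches ---
    if (i + 1) % accum_steps == 0:
        optimizer_G.step()
        optimizer_G.zero_grad()
        optimizer_D.step()
        optimizer_D.zero_grad()

Dividing each loss by `accum_steps` keeps the update magnitude comparable to a single batch that is `accum_steps` times larger. Note that if your dataset size isn't a multiple of `accum_steps`, the leftover batches at the end of an epoch never trigger a step, so you may want one final `step()`/`zero_grad()` after the loop.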