I am aware that people have answered similar questions here once or twice, but I still do not understand how to apply gradient accumulation when training a generator and a discriminator alternately, where I first update the generator and then, with the same batch of data, update the discriminator:
# Train Generator
optimizer_G.zero_grad()
# fake_data = generator(noise) is assumed to be produced earlier in the loop
g_loss = Criterion(discriminator(fake_data), real_label)
g_loss.backward()
optimizer_G.step()

# Train Discriminator
optimizer_D.zero_grad()
# detach fake_data so this backward pass stops at the generator's output
# (without detach, backward() would fail on the already-freed generator graph)
d_loss = Criterion(discriminator(fake_data.detach()), fake_label) + Criterion(discriminator(real_data), real_label)
d_loss.backward()
optimizer_D.step()  # must be optimizer_D here, not optimizer_G
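My current understanding is that accumulation would look roughly like the sketch below. This is only a sketch, not a tested implementation: accumulation_steps, set_requires_grad, dataloader, generator, and noise are names I made up for illustration. Each loss is divided by accumulation_steps so the accumulated gradient matches the average over the window, and the discriminator's parameters are frozen during the generator's backward pass so g_loss.backward() does not deposit stray gradients into the discriminator's accumulated .grad buffers.

accumulation_steps = 4  # assumed window size; tune for your effective batch size

def set_requires_grad(net, flag):
    # hypothetical helper: freeze/unfreeze all parameters of a network
    for p in net.parameters():
        p.requires_grad_(flag)

optimizer_G.zero_grad()
optimizer_D.zero_grad()

for i, (real_data, noise) in enumerate(dataloader):  # assumed loader layout
    fake_data = generator(noise)

    # --- generator pass ---
    # freeze D so g_loss.backward() leaves D's accumulated grads untouched;
    # gradients still flow *through* D into the generator's parameters
    set_requires_grad(discriminator, False)
    g_loss = Criterion(discriminator(fake_data), real_label)
    (g_loss / accumulation_steps).backward()  # scale to average over the window
    set_requires_grad(discriminator, True)

    # --- discriminator pass ---
    # detach fake_data so this backward pass stops at the generator's output
    d_loss = (Criterion(discriminator(fake_data.detach()), fake_label)
              + Criterion(discriminator(real_data), real_label))
    (d_loss / accumulation_steps).backward()

    # apply the accumulated updates once per window, G first, then D
    if (i + 1) % accumulation_steps == 0:
        optimizer_G.step()
        optimizer_G.zero_grad()
        optimizer_D.step()
        optimizer_D.zero_grad()

Within a window the discriminator always sees samples from the not-yet-updated generator, which I believe matches what my per-iteration loop above does; the only difference is that both optimizers now step once per window instead of every iteration.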
Could someone provide a working example of how to implement this, or confirm whether the sketch above is on the right track? Thank you in advance.