Training a specific layer (module) in a model

I've defined an auto-encoder model as below.

```python
class AE(nn.Module):
    def __init__(self, img_c, z_dim):
        super(AE, self).__init__()
        self.enc = Encoder(img_c, z_dim)
        self.gen = Generator(z_dim, img_c)
        self.model = nn.Sequential(
            self.enc,
            self.gen
        )

    def forward(self, x):
        return self.model(x)
```

`Encoder` and `Generator` are other `nn.Module` subclasses I have designed.

First, I want to train the whole network, which is easily done with:

```python
ae = AE(img_c, z_dim)
opt_ae = torch.optim.Adam(ae.parameters())  # optimizer over all parameters

ae.zero_grad()
ae_loss.backward()
opt_ae.step()
```

Second, I want to train only the `Encoder`, not the whole network.
How can I train only the `Encoder` part of the model?

Create a separate optimizer, pass `ae.enc.parameters()` to it, and use this new optimizer so that `opt_enc.step()` updates only the encoder's parameters.
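A minimal, self-contained sketch of this idea, using plain `nn.Linear` layers as stand-ins for `Encoder` and `Generator` (the optimizer choice and learning rate are illustrative):

```python
import torch
import torch.nn as nn

# Stand-ins for Encoder and Generator so the snippet runs on its own.
enc = nn.Linear(8, 2)
gen = nn.Linear(2, 8)
ae = nn.Sequential(enc, gen)

# Optimizer over the encoder's parameters only.
opt_enc = torch.optim.Adam(enc.parameters(), lr=0.1)

x = torch.randn(4, 8)
enc_w_before = enc.weight.clone()
gen_w_before = gen.weight.clone()

ae_loss = nn.functional.mse_loss(ae(x), x)
opt_enc.zero_grad()
ae_loss.backward()  # gradients still flow through the whole graph...
opt_enc.step()      # ...but only the encoder's weights are updated
```

Note that `backward()` still computes gradients for the decoder as well; they are simply never applied, because `opt_enc` only knows about the encoder's parameters.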

Thanks! May I ask another question?
I am trying to train the encoder not only with the gradient backpropagated from the decoder, but also with another loss (gradient) computed in the latent space.
To do so, before calling

```python
enc_loss = latent_loss + backpropagated_loss
enc_loss.backward()
opt_enc.step()
```

I need to get the loss backpropagated from the decoder.
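You don't actually need to extract the decoder's "backpropagated loss" separately: if you sum the two loss terms and call `backward()` once, autograd accumulates the encoder's gradients from both paths automatically. A minimal sketch with stand-in linear layers (the specific latent-space loss here is illustrative):

```python
import torch
import torch.nn as nn

# Stand-ins for Encoder and Generator.
enc = nn.Linear(8, 2)
gen = nn.Linear(2, 8)
opt_enc = torch.optim.Adam(enc.parameters(), lr=0.1)

x = torch.randn(4, 8)
z = enc(x)

# Path 1: loss whose gradient reaches enc by backpropagating through gen.
recon_loss = nn.functional.mse_loss(gen(z), x)

# Path 2: a loss computed directly in the latent space (illustrative choice).
latent_loss = z.pow(2).mean()

enc_loss = recon_loss + latent_loss
opt_enc.zero_grad()
enc_loss.backward()  # encoder grads = sum of gradients from both paths
opt_enc.step()       # only the encoder is updated
```

The decoder still receives gradients from `recon_loss`, but since `opt_enc` holds only the encoder's parameters, the decoder's weights stay fixed.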