Will parameters that are not used in the forward pass be updated during backpropagation?

I have created the following network, where `linear` is used in the forward pass and the linear layers in `modlst` are meant for calculating the loss function. The question is: will the parameters in `modlst` be updated after `loss.backward()` and `optimizer.step()`? Thank you for your assistance!

```
class Net(torch.nn.Module):
    def __init__(self, lag_list, in_size, out_size):
        super(Net, self).__init__()
        self.modlst = torch.nn.ModuleList([])
        self.lag_list = lag_list
        self.linear = torch.nn.Linear(in_size, out_size, bias=False)
        self.l = len(lag_list)
        for i in range(self.l):
            self.modlst.append(torch.nn.Linear(in_size, out_size, bias=False))

    def forward(self, x):
        # only self.linear participates in the forward pass
        return self.linear(x)
```

In your current code snippet the forward method only uses self.linear. Since none of the linear layers in self.modlst are used, they won't receive any gradients and thus won't be updated.
In case you are using these layers at a later stage, they should be updated as long as you don't detach the computation graph etc.
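
You can verify this directly by running a single backward pass and inspecting the .grad attributes; a minimal sketch, assuming the Net definition above and arbitrary small sizes:

```
import torch

net = Net(lag_list=[1, 2], in_size=4, out_size=4)
out = net(torch.randn(8, 4))
out.mean().backward()

print(net.linear.weight.grad)     # a tensor: self.linear was used in forward
print(net.modlst[0].weight.grad)  # None: this layer never received any input
```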

PS: you can post code snippets by wrapping them into three backticks ``` :wink:

Thank you for your assistance! My current goal is to train a model whose output satisfies an autoregressive property (the time lags might not be contiguous), so my loss function should include both the loss between the model output and the target, and the autoregressive property of the model output itself. How can I set up and train the autoregressive parameters here? I'm quite confused by it…
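
Concretely, writing m_t for the embedding of time step t and A_j for the j-th linear map in modlst, the property I want the output to satisfy is roughly

m_t ≈ Σ_j A_j m_{t − lag_list[j]}   for all t ≥ max(lag_list),

which is what the var method below tries to compute.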

Here is the main part of my code:

```
import torch
import torch.nn as nn

class MF(nn.Module):
    def __init__(self, T: int, dim_size: int, lag_list):
        super(MF, self).__init__()
        self.t_embeddings = nn.Embedding(T, dim_size)

        self.modlst = torch.nn.ModuleList([])
        self.lag_list = lag_list
        self.l = len(lag_list)
        for i in range(self.l):
            self.modlst.append(torch.nn.Linear(dim_size, dim_size, bias=False))

    def forward(self, times):
        mt = self.t_embeddings(times)
        return mt

    def var(self, times):
        # Collect the autoregressive outputs in a Python list instead of
        # writing into the embedding tensor in-place: in-place writes would
        # invalidate the inputs that the linear layers save for backward.
        outs = list(self.t_embeddings(times))
        for i in range(max(self.lag_list), len(outs)):
            for j in range(self.l):
                outs[i] = outs[i] + self.modlst[j](outs[i - self.lag_list[j]])
        return torch.stack(outs)
```
            
```
def ARLoss(model, T, lag_list):
    times = torch.arange(T).to(device)
    ar_y = model.var(times)  # autoregressive reconstruction
    y = model(times)         # forward only takes the time indices
    loss = torch.linalg.norm(ar_y - y)
    return loss

# device, times and target are defined elsewhere in the script
model = MF(T, dim_size, lag_list)
loss_func = torch.nn.MSELoss()
loss = loss_func(model(times), target) + ARLoss(model, T, lag_list)
```

......
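
To come back to the original question: because self.modlst is registered through nn.ModuleList, its weights are part of model.parameters(), so the optimizer will update them once ARLoss (which calls var) contributes to the loss. A minimal sketch of a training step under these assumptions, with Adam as a placeholder optimizer and T, dim_size, lag_list, times, target and num_epochs assumed to be defined as above:

```
device = "cuda" if torch.cuda.is_available() else "cpu"
model = MF(T, dim_size, lag_list).to(device)
optimizer = torch.optim.Adam(model.parameters())  # covers the modlst weights
loss_func = torch.nn.MSELoss()

for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_func(model(times), target) + ARLoss(model, T, lag_list)
    loss.backward()   # gradients flow into modlst through ARLoss / var
    optimizer.step()  # ...so those parameters are updated as well
```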