Gradients in ModuleList

Hello -

My model looks something like this:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self, labels, batch_size=10):
        super().__init__()
        self.labels = labels
        self.batch_size = batch_size
        # One MLP per label; nn.ModuleList registers each model's
        # parameters with this module. (MLP is defined elsewhere.)
        self.labelspecific_models = nn.ModuleList(MLP() for _ in labels)

    def forward(self, data):
        # No requires_grad here: the original `prediction.require_grad=True`
        # was a typo that silently did nothing, and the corrected spelling
        # would make the in-place writes below raise an error on a leaf
        # tensor. prediction joins the autograd graph as soon as
        # grad-tracking outputs are added into it.
        prediction = torch.zeros(self.batch_size)
        for index, label in enumerate(self.labels):
            model_inputs = data[label][0]
            contribution_index = data[label][1]
            # Call the module directly instead of .forward() so hooks run.
            label_outputs = self.labelspecific_models[index](model_inputs)
            # `out_index` avoids shadowing the outer loop's `index`; indexing
            # per output assumes contribution_index holds one target slot
            # per sample.
            for out_index, label_output in enumerate(label_outputs):
                prediction[contribution_index[out_index]] += label_output
        return prediction

As you can see, my input data is fed into different label-specific models, and their outputs are then joined together according to their corresponding contribution indices. My question is: is there any issue with accumulating the outputs into ‘prediction’ in place, returning that value, and training on it? Will the gradients be computed correctly so the models train properly? I seem to be having issues, but I can’t tell whether they come from this part. Thanks!
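
For reference, here is a minimal sketch of the accumulation pattern I am asking about. The linear layer, shapes, and indices below are placeholders rather than my real MLP and data; it just checks that gradients flow back through an indexed accumulation, using the out-of-place index_add as a vectorized alternative to the inner loop:

import torch
import torch.nn as nn

mlp = nn.Linear(4, 1)  # placeholder for one label-specific MLP

inputs = torch.randn(3, 4)                    # 3 samples for this label
contribution_index = torch.tensor([0, 2, 2])  # target slot per sample
outputs = mlp(inputs).squeeze(-1)             # shape (3,), tracks gradients

# Fresh zeros tensor; no requires_grad needed. index_add scatters each
# output into its slot in one call, replacing the inner Python loop.
prediction = torch.zeros(10)
prediction = prediction.index_add(0, contribution_index, outputs)

loss = prediction.sum()
loss.backward()
print(mlp.weight.grad is not None)  # True: gradients reach the MLP's parameters

If this small pattern is sound, I would expect my real forward to behave the same way.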