How to train additional parameters

Hi, I am trying to implement a trainable parameter in an existing codebase, which is overwhelmingly large for me at the moment.

The original codebase can be found at

My main contribution should be trainable camera intrinsics (basically a 3x3 matrix).

My best attempt is the following: a new Parameter is added to the SelfSupModel class, set to zero in __init__, and given its real initial value on the first call of forward():

class SelfSupModel(SfmModel):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Placeholder; the real value is copied in on the first call of forward()
        self.intrinsic = torch.nn.Parameter(torch.zeros((3, 3)))
        self.intrinsic.requires_grad = True  # redundant: Parameters require grad by default

    def forward(self, batch, return_logs=False, progress=0.0):
        # Lazily initialize from the first batch
        if self.intrinsic.sum() == 0:
            self.intrinsic = torch.nn.Parameter(batch['intrinsics'][0])

        # Repeat the single 3x3 matrix along the batch dimension
        stored_intrinsic = torch.stack(batch['intrinsics'].shape[0] * [self.intrinsic], 0)
        # `output` (inv_depths, poses) comes from the parent SfmModel forward pass, omitted here
        self_sup_output = self.self_supervised_loss(
            batch['rgb_original'], batch['rgb_context_original'],
            output['inv_depths'], output['poses'], stored_intrinsic,
            return_logs=return_logs, progress=progress)

I set requires_grad = True, and the intrinsic shows up in SelfSupModel.named_parameters() with requires_grad = True.

However, when I print this Parameter, it is not updated after each epoch.
What could be the reason it is not being updated?

I assume this is not the best way to initialize the Parameter, and it might not be GPU-friendly either.
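For reference, here is a minimal standalone repro of the pattern I'm using (assigning a fresh Parameter inside forward()), stripped of everything codebase-specific; the Toy class is just for illustration:

```python
import torch

class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder, replaced lazily on the first forward pass
        self.intrinsic = torch.nn.Parameter(torch.zeros(3, 3))

    def forward(self, batch_intrinsic):
        if self.intrinsic.sum() == 0:
            # Reassignment creates a brand-new Parameter object
            self.intrinsic = torch.nn.Parameter(batch_intrinsic)
        return self.intrinsic.sum()

model = Toy()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # holds the *old* zero tensor
loss = model(torch.eye(3))
loss.backward()
opt.step()
# The optimizer still references the placeholder, not the new Parameter:
print(model.intrinsic is opt.param_groups[0]['params'][0])  # False
```

So the optimizer is built before the lazy initialization ever runs, which may be related to my problem.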

Thanks, Fabian

Sample-specific values are not considered model parameters. You can train such values, but you won't be able to make out-of-sample predictions. Perhaps you need group-specific values instead. In both cases, you should probably use nn.Embedding index lookups (note that it is initialized with Gaussian noise by default).
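For example, a minimal sketch of group-specific trainable intrinsics via an embedding lookup (num_cameras and the flattening to 9 values per camera are assumptions for illustration, not part of the original codebase):

```python
import torch

# One trainable 3x3 intrinsics matrix per camera/group, stored flat as 9 values
num_cameras = 4
intrinsics_table = torch.nn.Embedding(num_cameras, 9)  # Gaussian-initialized by default

camera_ids = torch.tensor([0, 2, 2, 1])          # one group id per sample in the batch
K = intrinsics_table(camera_ids).view(-1, 3, 3)  # (batch, 3, 3), trainable via the lookup
print(K.shape)  # torch.Size([4, 3, 3])
```

Because the lookup is part of the module, the embedding weights are registered parameters and the optimizer sees them from the start.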

Thank you Alex,
out-of-sample prediction might be useful later on, so I think I should add prediction of the intrinsic parameters to the PoseNet.
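As a rough sketch of what I have in mind (all names here — IntrinsicsHead, feat_dim, the choice of a zero-skew parameterization — are my own assumptions, not existing code):

```python
import torch

class IntrinsicsHead(torch.nn.Module):
    """Hypothetical head predicting a 3x3 intrinsics matrix from pose-network features."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Predict focal lengths and principal point; assume zero skew
        self.fc = torch.nn.Linear(feat_dim, 4)

    def forward(self, feats, width, height):
        # Global-average-pool spatial features, then predict 4 normalized values
        fx, fy, cx, cy = self.fc(feats.mean(dim=(2, 3))).unbind(-1)
        K = torch.zeros(feats.shape[0], 3, 3, device=feats.device)
        # softplus keeps focal lengths positive; sigmoid keeps the principal
        # point inside the image; both are scaled to pixel units
        K[:, 0, 0] = torch.nn.functional.softplus(fx) * width
        K[:, 1, 1] = torch.nn.functional.softplus(fy) * height
        K[:, 0, 2] = torch.sigmoid(cx) * width
        K[:, 1, 2] = torch.sigmoid(cy) * height
        K[:, 2, 2] = 1.0
        return K

head = IntrinsicsHead()
feats = torch.randn(2, 256, 8, 8)  # e.g. bottleneck features from the pose encoder
print(head(feats, 640, 192).shape)  # torch.Size([2, 3, 3])
```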