How to include correlation among parameter vectors in the cost function?

I have the following model:

class Model(nn.Module):
    def __init__(self, dim_in, lambda_=.3):
        super(Model, self).__init__()

        # The linear transformation layer
        self.lt = nn.Linear(in_features=dim_in, out_features=10, bias=True)

        # The encoder
        self.feature_extractor = nn.Sequential(
            # nn.Linear(in_features=10, out_features=30, bias=True),
            nn.Linear(in_features=10, out_features=20, bias=True),
            nn.Linear(in_features=20, out_features=10, bias=True),
        )

    def forward(self, x):
        transformed = self.lt(x)
        return self.feature_extractor(transformed)

I want to force the weight vectors of the linear transformation layer to be uncorrelated. I tried to include the dot products among the vectors in the cost function (as a proxy for correlations among them):

    params = model.lt.weight  # weight vectors of the linear transformation layer
    dotprod = torch.tensordot(params, params, dims=([1], [1])).abs().fill_diagonal_(0).sum() / 2

    loss = other_losses + dotprod * weight

But this is not working, even with a really high weight: the weight vectors of the lt layer remain highly correlated. I have also tried removing other_losses, with no effect. What am I doing wrong?
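To make the problem easier to reproduce, here is a self-contained version of the penalty computed on a standalone linear layer (the layer sizes and seed here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Standalone stand-in for the lt layer of the model above
lt = nn.Linear(in_features=5, out_features=10, bias=True)

params = lt.weight  # shape (10, 5): one weight vector per output unit

# Sum of absolute pairwise dot products, diagonal excluded,
# divided by 2 so each pair is counted once
dotprod = torch.tensordot(params, params, dims=([1], [1])).abs().fill_diagonal_(0).sum() / 2

# The penalty is differentiable, so gradients reach the weight vectors
dotprod.backward()
print(params.grad.shape)
```

The gradient does flow back to the weights, so the penalty is at least being applied; the question is why minimizing it does not decorrelate the vectors.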


Thanks in advance for any help.