Implementation of the Laplacian regularization loss

Reference implementation: tfl.lattice_layer.LaplacianRegularizer

I am trying to reimplement the Laplacian regularization loss in PyTorch, but I suspect the code below is wrong.

    def calc_laplacian_regularizer_loss(self, weights, lattice_sizes, l1=0.0, l2=0.0):
        # weights: B C H W, with C == lattice_sizes[0] * lattice_sizes[1]
        if not l1 and not l2:
            return 0.0
        B, _, H, W = weights.shape

        # Unfold the channel dimension into the 2-D lattice grid.
        weights = weights.view(B, lattice_sizes[0], lattice_sizes[1], H, W)

        # Forward differences between adjacent lattice vertices along
        # each of the two lattice dimensions.
        diff1 = weights[:, 1:] - weights[:, :-1]
        diff2 = weights[:, :, 1:] - weights[:, :, :-1]

        # Scale each term by its coefficient, as the TF regularizer does;
        # the original code summed the differences but never applied l1/l2.
        loss = 0.0
        if l1:
            loss = loss + l1 * (diff1.abs().sum() + diff2.abs().sum())
        if l2:
            loss = loss + l2 * (diff1.pow(2).sum() + diff2.pow(2).sum())
        return loss
  1. If there is one tensor of shape [4, 12, 256, 256], how do I compute the Laplacian regularization loss over its six-connected neighbors?
  2. Waiting for your reply!!!
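For question 1, here is a minimal sketch of one common reading: treat the tensor as a 3-D grid over the (C, H, W) axes, so "six-connected" means each cell's forward/backward neighbors along those three axes, and one forward difference per axis covers every neighboring pair exactly once. The function name `laplacian_loss_6conn` is made up for illustration.

```python
import torch

def laplacian_loss_6conn(weights, l1=0.0, l2=0.0):
    # weights: B C H W. Forward differences along C, H and W; each
    # pair of six-connected neighbors appears in exactly one diff.
    diffs = [
        weights[:, 1:] - weights[:, :-1],              # along C
        weights[:, :, 1:] - weights[:, :, :-1],        # along H
        weights[:, :, :, 1:] - weights[:, :, :, :-1],  # along W
    ]
    loss = weights.new_zeros(())
    if l1:
        loss = loss + l1 * sum(d.abs().sum() for d in diffs)
    if l2:
        loss = loss + l2 * sum(d.pow(2).sum() for d in diffs)
    return loss

# Tiny deterministic check: a 1x2x2x2 grid of 0..7 has per-axis
# differences of 4 (C), 2 (H) and 1 (W), four of each.
w = torch.arange(8.0).view(1, 2, 2, 2)
print(laplacian_loss_6conn(w, l1=1.0).item())  # → 28.0 (4*4 + 4*2 + 4*1)
```

The same call works unchanged on the [4, 12, 256, 256] tensor from the question, since the slicing is shape-agnostic.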

The TensorFlow implementation is here