Can't load parameter of same shape from state_dict

While copying the parameter named "toolbox_modules.backbone.kx", whose dimensions in the model are torch.Size([3, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([3, 3, 3, 3]).

Err… what?

Here is my module definition:

        self.kx = nn.Parameter(torch.tensor([[1, 0, -1],
                                             [2, 0, -2],
                                             [1, 0, -1]], dtype=torch.float32).view(1, 1, 3, 3).expand(3, 3, 3, 3),
                               requires_grad=False)

        self.ky = nn.Parameter(torch.tensor([[1, 2, 1],
                                             [0, 0, 0],
                                             [-1, -2, -1]], dtype=torch.float32).view(1, 1, 3, 3).expand(3, 3, 3, 3),
                               requires_grad=False)

That’s not a very helpful message.
The error comes from the expanded parameter in your model; you can avoid it by calling .contiguous() on the expanded tensor or by using repeat instead of expand.

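For example, here is a minimal sketch of both fixes, reusing your kx kernel (repeat allocates new memory directly, while .contiguous() materializes the expanded view):

    import torch
    import torch.nn as nn

    sobel_x = torch.tensor([[1., 0., -1.],
                            [2., 0., -2.],
                            [1., 0., -1.]])

    # Option 1: expand() returns a non-contiguous view, so materialize it
    # with .contiguous() before wrapping it in nn.Parameter.
    kx = nn.Parameter(sobel_x.view(1, 1, 3, 3).expand(3, 3, 3, 3).contiguous(),
                      requires_grad=False)

    # Option 2: repeat() copies the data and returns a contiguous tensor right away.
    kx = nn.Parameter(sobel_x.view(1, 1, 3, 3).repeat(3, 3, 1, 1),
                      requires_grad=False)

Either way the parameter owns its own contiguous storage, so load_state_dict can copy into it.
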
Also, since your parameter does not require gradients, you could register a buffer instead (via self.register_buffer). If you later want to update this parameter (and set requires_grad=True) while it is still an expanded view, you'll most likely get an error claiming:

RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.
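
Here is a minimal sketch of the buffer variant (the module name and the forward pass are made up for illustration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SobelModule(nn.Module):  # hypothetical name, for illustration only
        def __init__(self):
            super().__init__()
            sobel_x = torch.tensor([[1., 0., -1.],
                                    [2., 0., -2.],
                                    [1., 0., -1.]])
            # Buffers are saved in the state_dict and moved by .to()/.cuda(),
            # but they are not returned by model.parameters(), so the
            # optimizer never touches them.
            self.register_buffer('kx', sobel_x.view(1, 1, 3, 3).repeat(3, 3, 1, 1))

        def forward(self, x):
            return F.conv2d(x, self.kx, padding=1)

    model = SobelModule()
    out = model(torch.randn(1, 3, 8, 8))       # -> torch.Size([1, 3, 8, 8])
    model.load_state_dict(model.state_dict())  # buffers round-trip through the state_dict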

Would you mind creating an issue on GitHub?