Indirect filter creation (and update)

I’m designing a custom conv2d layer that convolves the input with a filter generated as a linear combination of a few constant-valued tensors:

`filter = w1*T1 + w2*T2 + … + wn*Tn`

What my network needs to learn are the weights (w) that pre-multiply each of the static tensors (T). During backprop I only want to update the weights (w), not the filter directly.
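To make the gradient flow concrete, here is a minimal standalone sketch of that combination (shapes and names are made up for illustration): autograd reaches the weights w but leaves the constant tensors T untouched.

```python
import torch

# Hypothetical shapes: n basis tensors, each of filter shape (out_ch, in_ch, kH, kW)
n, out_ch, in_ch, k = 3, 4, 2, 3
T = torch.randn(n, out_ch, in_ch, k, k)   # constant basis tensors (requires_grad=False)
w = torch.randn(n, requires_grad=True)    # learnable mixing weights

# filter = w1*T1 + w2*T2 + ... + wn*Tn, via broadcasting over the basis dimension
filt = (w.view(n, 1, 1, 1, 1) * T).sum(dim=0)

# A dummy scalar loss: gradients flow back to w only
filt.sum().backward()
# w.grad is populated; T has no grad because it was never tracked
```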

I’ve set the T tensors to be plain tensors (with requires_grad=False) and the weights w to have requires_grad=True. My question is: what should “filter” be? A tensor with gradients enabled? A torch.nn.Parameter?

I think it should be a torch.nn.Parameter, since that’s what I have to pass to torch.nn.functional.conv2d(input, FILTER, …) in the forward method of my custom conv2d module. My problem is: should this filter be stored as a parameter of my module? I’m having issues when adding an L2 loss for regularization purposes on the weights (w) only (and not on the entire filter).
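For reference, a minimal sketch of the setup I have in mind (class name, shapes, and the regularization coefficient are all made up): only w is a Parameter, the T tensors are registered as a buffer, and the filter is rebuilt inside forward so the L2 term can be applied to w alone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisConv2d(nn.Module):
    """conv2d whose filter is a weighted sum of fixed basis tensors."""
    def __init__(self, basis):
        # basis: (n, out_ch, in_ch, kH, kW), constant
        super().__init__()
        self.register_buffer("T", basis)                     # constant, not a Parameter
        self.w = nn.Parameter(torch.randn(basis.shape[0]))   # only w is learnable

    def forward(self, x):
        # Rebuild the filter from w and T on every forward pass so that
        # autograd tracks the dependence on w; the filter itself is a
        # plain tensor, not a Parameter.
        filt = (self.w.view(-1, 1, 1, 1, 1) * self.T).sum(dim=0)
        return F.conv2d(x, filt)

# L2 regularization applied to w only, not to the full filter
m = BasisConv2d(torch.randn(3, 4, 2, 3, 3))
out = m(torch.randn(1, 2, 8, 8))
loss = out.mean() + 1e-2 * m.w.pow(2).sum()
loss.backward()
```

With this structure, `m.parameters()` contains only w, so the optimizer (and any weight decay) never touches the filter or the basis tensors directly.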