# Shared weights in convolution filters

Hi,

I want to create a new module that implements a custom convolution layer that, given one input channel, produces two output channels using e.g. the following two filters:

$\begin{bmatrix} w_0 & w_1 & w_2 \\ w_7 & 0 & w_3 \\ w_6 & w_5 & w_4 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} w_7 & w_0 & w_1 \\ w_6 & 0 & w_2 \\ w_5 & w_4 & w_3 \end{bmatrix}$

The two filters share the weights w0, …, w7, and the center weight is fixed to zero. Ideally, I want to apply both filters together in a single F.conv2d(inputs, filters) call.

Is this possible? If yes, how do I get the shared weights and the weights fixed to zero?

The simplest solution is to store only a 1D tensor containing [w0, w1, …, w7] as the module's parameter, then reconstruct the two 3×3 filters from it in every forward pass before handing them to F.conv2d. That way the two filters share the same eight parameters by construction, and the center entry stays zero because it is never backed by a parameter at all.
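A minimal sketch of that approach (the module name, the random initialization, and the `padding=1` choice are my own, not from the question):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightConv(nn.Module):
    """One input channel -> two output channels, where both 3x3 filters
    are built from a single shared weight vector [w0, ..., w7] and the
    center entry of each filter is fixed to zero."""

    def __init__(self):
        super().__init__()
        # The only learnable parameters: w0..w7.
        self.w = nn.Parameter(torch.randn(8))

    def build_filters(self):
        w = self.w
        z = w.new_zeros(())  # fixed zero center, not a parameter
        # Lay out each filter exactly as in the two matrices above.
        f1 = torch.stack([w[0], w[1], w[2],
                          w[7], z,    w[3],
                          w[6], w[5], w[4]]).view(1, 1, 3, 3)
        f2 = torch.stack([w[7], w[0], w[1],
                          w[6], z,    w[2],
                          w[5], w[4], w[3]]).view(1, 1, 3, 3)
        # Shape (2, 1, 3, 3): 2 output channels, 1 input channel.
        return torch.cat([f1, f2], dim=0)

    def forward(self, x):
        # One conv2d call applies both filters at once.
        return F.conv2d(x, self.build_filters(), padding=1)
```

Because the filters are rebuilt from `self.w` on every forward, autograd accumulates the gradients from all occurrences of each w_i (in both filters) back into the single shared parameter, and the zero centers never receive a gradient.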