Let’s suppose we have a convolutional module. Right now it is boring:

```
class My_Conv(nn.Module):
    def __init__(self, **kwargs):
        super(My_Conv, self).__init__()
        self.conv = nn.Conv2d(**kwargs)

    def forward(self, x):
        return self.conv(x)
```

But I want to do something like this:

```
class My_Conv(nn.Module):
    def __init__(self, **kwargs):
        super(My_Conv, self).__init__()
        self.something = Something()
        self.conv = nn.Conv2d(**kwargs)

    def forward(self, x):
        alpha = self.something(x)
        # weight_multipliers does not exist; this is what I wish I could write
        return self.conv(x, weight_multipliers=alpha)
```

Let’s suppose I have a tensor `alpha` that contains multipliers for the convolution weights. These are not global multipliers for the whole convolution: `alpha` holds a different set of multipliers for each position of the sliding convolution window. Think of it as a kind of attention.

Now I want to get the convolution outputs just as before, except that at each window position the weights are first multiplied by the relevant entries of `alpha`.

Is this possible to do efficiently in PyTorch? Is there a hack for this that avoids reimplementing the convolution operation? I looked at the source, and I don’t feel like messing around at the C++ level.