Hi,

I want to apply some mathematical expressions to the kernels inside a convolution. I wrote the code below as a dummy example.

1- Is this kind of definition valid for gradient flow?

2- Should we call `super().__init__()` in the definition?

3- Does autograd allow logical constraints like `x = x.*(x<5)` (MATLAB-style) in `forward`?
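To illustrate what I mean in question 3, here is a minimal sketch of the MATLAB-style mask written in PyTorch (my understanding is that autograd treats the comparison result as a constant mask, so the gradient simply flows through the kept elements):

```
import torch

x = torch.tensor([1.0, 6.0], requires_grad=True)
y = x * (x < 5).float()  # elementwise mask, like MATLAB's x .* (x < 5)
y.sum().backward()
print(x.grad)  # gradient flows only where the mask is 1
```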

```
class new_conv(nn.Module):  # subclass nn.Module, since we call F.conv2d directly
    def __init__(self, dim):
        super().__init__()  # required so nn.Module can register parameters
        # nn.Parameter replaces the old Variable(..., requires_grad=True) idiom
        # and registers the weight with the module. Shape (dim, 2 * dim, 1, 1):
        # a 1x1 conv whose input channels double after the concatenation below.
        self.W = nn.Parameter(torch.randn(dim, 2 * dim, 1, 1))
        self.act = F.relu  # F.relu is a function, so don't call it here

    def forward(self, x):
        xMean = x.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        x = x - xMean
        w = self.W ** 2
        wMean = w.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        w = F.relu(w - wMean)
        x = torch.cat((x, x ** 2), dim=1)  # channel-wise concatenation (channels are dim 1)
        return self.act(F.conv2d(x, w))
```

4- Last question: can we express this module as a plain function? With a definition like the one below, every iteration I call `new_conv` in the forward step, I suspect the weights are reassigned from scratch. So even if the expression works, it will effectively be untrainable.

Many thanks…

```
def new_conv(x, dim):
    # note: W is created fresh on every call -- this is the part I suspect
    # makes the functional form untrainable
    W = torch.randn(dim, 2 * dim, 1, 1, requires_grad=True)
    xMean = x.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
    x = x - xMean
    w = W ** 2
    wMean = w.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
    w = F.relu(w - wMean)
    x = torch.cat((x, x ** 2), dim=1)  # channel-wise concatenation (channels are dim 1)
    return F.relu(F.conv2d(x, w))
```
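For comparison, here is a sketch of what I think the trainable functional version would look like: `W` is created once outside and passed in, so the same tensor persists across iterations (assuming `dim` is the input channel count, so the 1x1 conv weight has shape `(dim, 2 * dim, 1, 1)` after the channel-wise concatenation doubles the channels):

```
import torch
import torch.nn.functional as F

def new_conv_fn(x, W):
    # W is passed in, not re-created, so its gradients accumulate across calls
    xMean = x.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
    x = x - xMean
    w = W ** 2
    wMean = w.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
    w = F.relu(w - wMean)
    x = torch.cat((x, x ** 2), dim=1)  # channel-wise concatenation
    return F.relu(F.conv2d(x, w))

dim = 3
W = torch.randn(dim, 2 * dim, 1, 1, requires_grad=True)  # created once, outside
out = new_conv_fn(torch.randn(2, dim, 8, 8), W)
```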