# Convolution with Non-linearity Before Summation

I need to implement the following function:

out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k=0}^{C_{in}-1} f\bigl(weight(C_{out_j}, k) \star input(N_i, k)\bigr)

Where f is a simple elementwise non-linearity, such as ReLU, applied before the summation over input channels.

I can easily do this by setting groups=C_in and computing each output channel one by one. However, this runs out of memory for large C_in. Is there a more efficient way to do this?
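For reference, here is a minimal sketch of the groups=C_in trick described above (variable names are mine): one grouped convolution computes every (output filter, input channel) pair separately, so f can be applied before summing over input channels. The intermediate tensor has C_in * C_out channels, which is exactly why this blows up in memory for large C_in.

```python
import torch
import torch.nn.functional as F

N, C_in, C_out, H, W, kH, kW = 2, 3, 4, 8, 8, 3, 3
x = torch.randn(N, C_in, H, W)
weight = torch.randn(C_out, C_in, kH, kW)
bias = torch.randn(C_out)

# Reorder weights so that row k*C_out + j holds weight(j, k), matching the
# grouped-conv channel layout: input channel k feeds output channels
# [k*C_out : (k+1)*C_out].
w = weight.permute(1, 0, 2, 3).reshape(C_in * C_out, 1, kH, kW)
y = F.conv2d(x, w, groups=C_in)              # (N, C_in*C_out, H', W') -- big!
y = F.relu(y).view(N, C_in, C_out, y.shape[-2], y.shape[-1])
out = y.sum(dim=1) + bias.view(1, -1, 1, 1)  # sum over input channels k
```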

Maybe I’m misunderstanding your question, but wouldn’t it be possible to apply your non-linearity on the weights between each iteration?

```python
model.weight.data = F.relu(model.weight.data)
```


I need to apply the non-linearity to each single-channel convolution result, i.e. f(weight ⋆ image[i]), before the summation. That isn't the same as f(weight) ⋆ image[i], which is what you suggest.
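A tiny numeric example (values chosen by me) makes the difference concrete: applying ReLU to the convolution result is not the same as convolving with ReLU'd weights.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[[1., -2.], [3., 4.]]]])   # input, shape (1, 1, 2, 2)
w = torch.tensor([[[[-1., 1.], [1., -1.]]]])  # kernel, shape (1, 1, 2, 2)

# f applied to the convolution result: relu(-1 - 2 + 3 - 4) = relu(-4) = 0
a = F.relu(F.conv2d(x, w))

# f applied to the weights first: conv with relu(w) = [[0, 1], [1, 0]] gives 1
b = F.conv2d(x, F.relu(w))
```

The two results (0 vs. 1) differ, so the weight-side ReLU is not a valid substitute.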

Ah OK, you are right. I misread the parentheses.

Hm, I don’t think there’s a workaround that currently allows this out of the box. I guess the easiest way would be to write your own Conv2d layer, e.g. by modifying the convolution op at https://github.com/pytorch/pytorch/blob/master/aten/src/TH/generic/THTensorConv.cpp#L61

Or, a bit more involved: create your own Conv2DSpec layer by making a copy of the existing convolution in THTensorConv.cpp and duplicating each Conv2D call (naming it Conv2DSpec or similar in the library).
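Short of modifying the C code, one pure-PyTorch way to bound the memory cost might be to process input channels in chunks: run the grouped-conv trick on a few input channels at a time, apply f, sum, and accumulate. This is a sketch under my own assumptions (the helper name and `chunk` parameter are hypothetical); peak memory scales with `chunk * C_out` intermediate channels instead of `C_in * C_out`, at the cost of a Python loop.

```python
import torch
import torch.nn.functional as F

def conv2d_f_before_sum(x, weight, bias=None, f=F.relu, chunk=8):
    """Hypothetical helper: conv2d with elementwise f applied to each
    per-input-channel convolution before summing over input channels.
    Processes input channels `chunk` at a time to limit peak memory."""
    N, C_in = x.shape[:2]
    C_out, _, kH, kW = weight.shape
    out = None
    for k0 in range(0, C_in, chunk):
        k1 = min(k0 + chunk, C_in)
        g = k1 - k0
        # (g*C_out, 1, kH, kW): one filter slice per (input ch, output ch) pair
        w = weight[:, k0:k1].permute(1, 0, 2, 3).reshape(g * C_out, 1, kH, kW)
        y = F.conv2d(x[:, k0:k1], w, groups=g)       # (N, g*C_out, H', W')
        # apply f, then sum over this chunk's input channels
        y = f(y).view(N, g, C_out, y.shape[-2], y.shape[-1]).sum(dim=1)
        out = y if out is None else out + y
    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    return out

# usage sketch
x = torch.randn(2, 5, 8, 8)
weight = torch.randn(4, 5, 3, 3)
bias = torch.randn(4)
out = conv2d_f_before_sum(x, weight, bias, chunk=2)
```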