Multiply two features in a convolutional neural network

I am trying to build a convolutional neural network that takes two images (say, x and y) as input and predicts a numerical output (say, T). Intuitively, I know that T is a function of (some convolution of x) * (some convolution of y), where * denotes element-wise multiplication.
I know I can concatenate the two, but that does not work well enough, so I want to element-wise multiply them instead. My question is: can I multiply convolutional channels element-wise without breaking autograd? For example, can I do something like the following? If not, how should I handle it?

def forward(self, x, y):
    x = F.relu(self.convX1(x))
    x = F.relu(self.convX2(x))

    y = F.relu(self.convY1(y))
    y = F.relu(self.convY2(y))

    z = torch.mul(x, y)          # element-wise product of the two feature maps
    z = torch.flatten(z, 1)
    z = F.relu(self.linear1(z))  # maybe a few more linear layers, with the last one giving a single output
    return z

Welcome to the forums!

Any time you use a PyTorch operation, as long as that operation is differentiable, it will be tracked appropriately by autograd.
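A quick way to convince yourself, using a couple of made-up tensors (the shapes below are arbitrary, purely for illustration):

import torch

x = torch.randn(2, 8, 5, 5, requires_grad=True)
y = torch.randn(2, 8, 5, 5, requires_grad=True)

z = torch.mul(x, y)
z.sum().backward()

# d/dx of sum(x * y) is y, so each gradient equals the other tensor:
print(torch.allclose(x.grad, y))  # True
print(torch.allclose(y.grad, x))  # True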

In fact, the standard Python arithmetic operators are tracked by autograd as well, as long as the operands are torch tensors, since they dispatch to the corresponding tensor operations. In other words, you could also write:

z = x * y  # element-wise multiplication
z = x + y  # element-wise addition
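For completeness, here is a minimal self-contained sketch of a two-branch model along the lines of your forward. The channel counts, kernel sizes, and input resolution here are made up for illustration, so adjust them to your data:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical sizes: 1-channel 28x28 inputs, 8 feature channels.
        self.convX1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.convX2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)
        self.convY1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.convY2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)
        self.linear1 = nn.Linear(8 * 28 * 28, 1)

    def forward(self, x, y):
        x = F.relu(self.convX1(x))
        x = F.relu(self.convX2(x))
        y = F.relu(self.convY1(y))
        y = F.relu(self.convY2(y))
        z = x * y                # element-wise product of the two feature maps
        z = torch.flatten(z, 1)
        return self.linear1(z)   # one scalar prediction per sample

model = TwoBranchNet()
x = torch.randn(4, 1, 28, 28)
y = torch.randn(4, 1, 28, 28)
model(x, y).sum().backward()

# Gradients reached the weights of both branches, so the multiplication
# was tracked by autograd:
print(model.convX1.weight.grad is not None)  # True
print(model.convY1.weight.grad is not None)  # True

After a backward pass, both branches have non-None weight gradients, which confirms the element-wise multiplication did not break the graph.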