Is it possible to run a backward pass in this case?

Let’s see an example with a simple architecture.

... -- conv1 -- conv2 -- ...

If the weights of the conv2 layer can be learned, then it is also possible to learn the weights of conv1.
Now, let’s look at another example with a similar architecture, except that a tile function is inserted between the two conv layers.

... -- conv1 -- tile -- conv2 -- ...

In this case, is it feasible to apply the backpropagation algorithm to the conv1 layer?
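
For reference, here is a minimal sketch of the setup I mean (the layer sizes and the tile dims are placeholders I made up):

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes, just to make the architecture concrete.
conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 16, 16)
h = conv1(x)                     # (1, 8, 16, 16)
h = torch.tile(h, (1, 1, 2, 2))  # repeat spatially -> (1, 8, 32, 32)
y = conv2(h)                     # (1, 16, 32, 32)
```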
Thank you.

Hi,

If your tile function is differentiable, then yes. Could you give more details about that function?
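
One quick way to check is to run a tiny backward pass through the op and see whether autograd attaches a grad_fn to its output. A minimal sketch, using torch.tile as a stand-in for whatever tile you mean:

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.tile(x, (3,))  # substitute your tile function here
print(y.grad_fn)         # a non-None grad_fn means autograd can backprop through it
y.sum().backward()
print(x.grad)            # each element appears 3 times, so every gradient entry is 3.0
```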

An example of such a function is torch.tile (torch.tile — PyTorch master documentation). By the way, I also found this thread: Are all operations defined in torch.Tensor differentiable? :smiley:
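
For completeness, here is a small sanity check (layer sizes are placeholders) confirming that gradients do reach conv1 through torch.tile:

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
conv2 = nn.Conv2d(8, 16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 16, 16)
out = conv2(torch.tile(conv1(x), (1, 1, 2, 2)))
out.sum().backward()

print(conv1.weight.grad is not None)  # True: conv1 is trainable through tile
```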