Remove custom layer from backprop graph

Thanks for the update.
The error in the first approach is raised because you are storing the self.tiled_out tensor as an attribute of the module and assigning activations to it in-place, so the same tensor carries the computation graph from the previous forward pass and is then modified again in the next one.
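For illustration, here is a minimal sketch of that pattern (the module name, shapes, and tiling logic are assumptions, not your actual code). Because the buffer lives on the module and is written in-place every call, the second backward typically fails with an autograd error (e.g. about backpropagating through a freed graph or an in-place modification):

```python
import torch
import torch.nn as nn

class TiledModule(nn.Module):
    def __init__(self, num_tiles=4, features=8):
        super().__init__()
        self.linear = nn.Linear(features, features)
        # buffer stored on the module and reused across forward passes
        self.tiled_out = torch.zeros(num_tiles, features)

    def forward(self, x):
        for i in range(x.size(0)):
            # in-place write into the stored tensor; after the first
            # iteration this tensor is still attached to the old graph
            self.tiled_out[i] = self.linear(x[i])
        return self.tiled_out

model = TiledModule()
x = torch.randn(4, 8)

out = model(x)
out.mean().backward()   # first iteration can still work

out = model(x)
out.mean().backward()   # autograd error is raised here
```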
If I understand the use case correctly, you are pre-allocating tiled_out only to store the results and don’t care about its initial values. If that’s the case, then the second approach looks correct.
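As a sketch of such a pattern (again with assumed shapes and tiling logic, not necessarily matching your second approach): creating the output tensor inside forward means every call builds an independent graph, and the initial values are irrelevant since each slice is overwritten:

```python
import torch
import torch.nn as nn

class TiledModule(nn.Module):
    def __init__(self, features=8):
        super().__init__()
        self.linear = nn.Linear(features, features)

    def forward(self, x):
        # fresh output tensor per forward pass; initial values don't matter
        tiled_out = torch.zeros_like(x)
        for i in range(x.size(0)):
            tiled_out[i] = self.linear(x[i])
        return tiled_out

model = TiledModule()
x = torch.randn(4, 8)

for _ in range(2):
    out = model(x)
    out.mean().backward()   # works in every iteration
```

Alternatively, you could collect the per-tile results in a list and call torch.stack on it, which avoids the in-place writes entirely.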