Computational cost, clamp vs pooling

Hello,
I am currently looking for some input regarding computational cost.

Which of the following would be computationally more efficient:

  • using 3x3 max pooling on a single-channel tensor to create a new variable
  • or using clamping on a different single-channel tensor to create a new variable

The dimensions of both tensors are equal. If possible, please provide reasoning :slight_smile:

Note that the theoretical computational cost might differ from the cost of the actual implementation, so the proper approach would be to profile both workloads, e.g. via torch.utils.benchmark.
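As a rough intuition: clamp does a couple of comparisons per element, while a 3x3 max pooling has to read up to nine input values per output element, so clamp is usually cheaper on paper; in practice, memory bandwidth, kernel launch overhead, and the specific backend can change the picture, which is why profiling is the way to go. Here is a minimal sketch using torch.utils.benchmark (the tensor shape is a made-up example, so adjust it to your actual workload):

```python
import torch
import torch.nn.functional as F
from torch.utils import benchmark

# Made-up single-channel input; replace with your actual shape.
x = torch.randn(1, 1, 512, 512)

# 3x3 max pooling (stride/padding chosen here just to keep the output size).
t_pool = benchmark.Timer(
    stmt="F.max_pool2d(x, kernel_size=3, stride=1, padding=1)",
    setup="import torch.nn.functional as F",
    globals={"x": x},
)

# Elementwise clamp; the bounds are arbitrary placeholders.
t_clamp = benchmark.Timer(
    stmt="x.clamp(min=0.0, max=1.0)",
    globals={"x": x},
)

# timeit returns a Measurement with mean/median runtimes.
print(t_pool.timeit(100))
print(t_clamp.timeit(100))
```

If you run this on the GPU, the Timer already takes care of the needed CUDA synchronizations, so the reported times are valid there as well.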

Thank you :slight_smile:
That utility will also be useful to me in the future for other projects.