I don’t think any idea is too crazy to run some experiments.
Note that the indices returned by this pooling layer are “detached”, i.e. they do not have a grad_fn, so Autograd won’t backpropagate through them; keep this in mind when experimenting with it.
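A minimal sketch (assuming PyTorch is installed) showing this: `nn.MaxPool2d` with `return_indices=True` returns pooled values that carry a `grad_fn`, while the indices are plain integer tensors with no `grad_fn`, so gradients flow only through the values.

```python
import torch
import torch.nn as nn

# Max pooling that also returns the argmax indices
pool = nn.MaxPool2d(kernel_size=2, return_indices=True)

x = torch.randn(1, 1, 4, 4, requires_grad=True)
values, indices = pool(x)

print(values.grad_fn is not None)  # True: values are part of the autograd graph
print(indices.grad_fn)             # None: indices are detached integer tensors

# Backpropagation works through the max values only
values.sum().backward()
print(x.grad.shape)  # same shape as x; nonzero grads at the max locations
```

The indices are `int64` tensors, and integer tensors can never require gradients, which is why no backward pass can reach them.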
Not directly, since you won’t be able to calculate gradients for these indices (unless you can come up with a valid backward method and implement it manually). Autograd will properly backpropagate to the max values, but not to the indices.