Hi,
Currently, when we define a model we have the flexibility of using either torch.flatten or torch.nn.Flatten, and a user may choose either one. I understand that torch.nn.Flatten is a module class that calls torch.flatten inside its forward method. The advantage torch.nn.Flatten offers is that, as a module, it supports forward hooks, as in the example below.
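For example, this already works with the module form (a minimal sketch, nothing here is specific to my model):

```python
import torch
import torch.nn as nn

flatten = nn.Flatten()  # module form, calls torch.flatten in forward

# A forward hook fires after forward() and sees both input and output.
def log_shapes(module, inputs, output):
    print(f"{module.__class__.__name__}: "
          f"{tuple(inputs[0].shape)} -> {tuple(output.shape)}")

flatten.register_forward_hook(log_shapes)

x = torch.randn(4, 3, 8, 8)
y = flatten(x)  # prints: Flatten: (4, 3, 8, 8) -> (4, 192)
```

With the functional form, torch.flatten(x, 1), there is no module to attach the hook to.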
What I would like to do is either register a forward hook on the tensor operation itself, or wrap that operation in an nn.Module class so I can register a hook on it.
tensor.flatten() is just an example; ideally I would want to wrap every torch function in an nn.Module, along the lines of the sketch below.
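The wrapping approach I have in mind would look roughly like this (FunctionalWrapper is just a placeholder name I made up), but it forces me to edit the model definition:

```python
import torch
import torch.nn as nn

class FunctionalWrapper(nn.Module):
    """Placeholder: turn a functional call such as torch.flatten into a
    module so that forward hooks can be registered on it."""

    def __init__(self, fn, *args, **kwargs):
        super().__init__()
        self.fn = fn
        self.extra_args = args
        self.extra_kwargs = kwargs

    def forward(self, x):
        return self.fn(x, *self.extra_args, **self.extra_kwargs)

# Replaces `x = torch.flatten(x, 1)` inside the model's forward:
flatten = FunctionalWrapper(torch.flatten, 1)
flatten.register_forward_hook(lambda m, inp, out: print(tuple(out.shape)))
y = flatten(torch.randn(4, 3, 8, 8))  # prints: (4, 192)
```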
This should not change the way the user defines the model. Any suggestions would be helpful.
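To make the constraint concrete: since the wrapper above requires editing the model, I imagine the interception would have to happen at the dispatch level instead. Here is a rough sketch of what I mean, assuming torch.overrides.TorchFunctionMode is a viable entry point (my assumption, not a confirmed approach):

```python
import torch
from torch.overrides import TorchFunctionMode

class HookMode(TorchFunctionMode):
    """Sketch (my own, not an established pattern): fire a callback after
    every torch function call, without touching the model definition."""

    def __init__(self, callback):
        super().__init__()
        self.callback = callback

    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        out = func(*args, **kwargs)  # run the original torch function
        self.callback(func, args, out)
        return out

def report(func, args, out):
    if isinstance(out, torch.Tensor):
        print(f"{getattr(func, '__name__', func)} -> {tuple(out.shape)}")

x = torch.randn(4, 3, 8, 8)
with HookMode(report):
    y = x.flatten(1)  # prints: flatten -> (4, 192)
```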