Does a custom autograd.Function support mini-batch operation?

Hi, I’d like to write a custom autograd.Function so it can be called from an nn.Module.
For mini-batches, do I need to write the code to support batching manually, or can PyTorch handle it automatically?
Thanks in advance!

When you write a custom Function, you can have it do whatever you want, but batching is not done automatically. Your Function is called with exactly the Tensor you pass to it, including the batch dimension, so you need to write it so that it works on batched input.
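As an illustration (not from the original thread), here is a minimal sketch of a custom Function written to handle batched input. The name `BatchedSquare` is made up for this example; the key point is that `forward` and `backward` treat the first dimension as the batch dimension and use elementwise ops, which batch naturally:

```python
import torch

class BatchedSquare(torch.autograd.Function):
    """Squares its input elementwise; works for any batch shape."""

    @staticmethod
    def forward(ctx, x):
        # x has shape (batch_size, *features); elementwise ops
        # apply independently to every sample, so no extra work
        # is needed to support mini-batches here.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        # grad_output has the same shape as the forward output,
        # so the chain rule applies per element across the batch.
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

# Mini-batch of 4 samples with 3 features each
x = torch.randn(4, 3, requires_grad=True)
y = BatchedSquare.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, 2 * x))
```

If your operation is not purely elementwise (e.g. it mixes features within a sample), you must be careful that reductions and matrix products run over the feature dimensions only, not the batch dimension. `torch.autograd.gradcheck` is useful for verifying the backward pass against numerical gradients.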