Pass batches to custom Python function

I have a custom Python function, which has several parameters and performs some logic. It uses only native Python or NumPy functions:

def my_function(param_1, param_2, ..., param_N):
    # Logic
    ...
    return output

I want to speed up the function by running it on the GPU and passing it a torch batch instead of individual parameters. I could pass a NumPy array instead of individual parameters and ravel() the entries into variables inside the function, but I don't know how to go from there to passing a whole batch of data and processing it as a batch.
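For reference, the array-based approach described above might look like this sketch, assuming a hypothetical three-parameter function (the names and the `p1 * p2 + p3` logic are placeholders, not the original function):

```python
import numpy as np

def my_function_from_array(params: np.ndarray):
    # Hypothetical sketch: unpack a flat array into the individual
    # parameters via ravel(), then run the same per-sample logic.
    param_1, param_2, param_3 = params.ravel()
    return param_1 * param_2 + param_3

out = my_function_from_array(np.array([1.0, 2.0, 3.0]))
```

Note that this still processes one sample per call, which is exactly the limitation the question is about.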

Is this possible? Do I need to use only torch functions instead of numpy?

Yes, if you want to use your GPU you should use PyTorch operations, since NumPy doesn't support GPUs and you would have to move the data back to the CPU to apply NumPy operations.
Additionally, using NumPy (or another third-party library) will detach the tensor from the computation graph, as Autograd won't be able to track these operations.
Most PyTorch operations accept batched inputs, so you should be able to use them directly.
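A minimal sketch of what this could look like, assuming the original logic were something like `p1 * p2 + p3` (a placeholder, since the real function isn't shown): each "parameter" becomes a column of a batched tensor, and the whole batch is processed in one vectorized call on the GPU.

```python
import torch

def my_function_batched(batch: torch.Tensor) -> torch.Tensor:
    # batch has shape (batch_size, num_params); each column plays the
    # role of one of the original scalar parameters.
    p1, p2, p3 = batch.unbind(dim=1)  # split columns into "parameters"
    # Placeholder logic; replace with the actual computation, using
    # torch ops so Autograd can track it and it stays on the GPU.
    return p1 * p2 + p3

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = torch.randn(1024, 3, device=device)  # whole batch on the GPU
out = my_function_batched(batch)             # shape: (1024,)
```

Because every operation here is a PyTorch op, the output stays on the GPU and remains attached to the computation graph if `batch` requires gradients.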