Ignore some ops in torch.autograd.profiler/torch.profiler

Background: I am profiling my model to find the bottleneck. I am not interested in the dataloader, so it is fine for me to ignore its cost.

I use the profiler as shown below:

    # Profile the whole evaluation loop on CPU; note that the DataLoader
    # iteration ends up inside the profiled region as well.
    with torch.autograd.profiler.profile(
        enabled=True,
        use_cuda=False,
        record_shapes=True,
        with_flops=True,
        profile_memory=True,
        use_cpu=True
    ) as prof:
        with torch.no_grad():
            for _ in range(args.nepochs):
                for i, test_batch in enumerate(test_ld):
                    X_test, y_test, z_test, T_test = test_batch
                    R_test = model_wrap(
                        model,
                        X_test,
                        y_test,
                        z_test,
                    )
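
Afterwards I print the aggregated stats, which is the breakdown I refer to below:

    print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=20))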

Later I found that enumerate(DataLoader)#_SingleProcessDataLoaderIter._... occupies more than 90% of the time in the breakdown, even though I don't care about it, and it makes the log unreadable. How can I remove its entries from the profiler output?
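
The only workaround I can think of is to materialize all batches before entering the profiler context, as in the sketch below (it reuses my model_wrap call from above and trades memory for a cleaner log), but I would prefer filtering at the profiler level:

    import torch

    # Pull every batch out of the DataLoader up front, so the iteration
    # cost never shows up inside the profiled region.
    batches = [batch for _ in range(args.nepochs) for batch in test_ld]

    with torch.autograd.profiler.profile(
        enabled=True,
        use_cuda=False,
        record_shapes=True,
        with_flops=True,
        profile_memory=True,
        use_cpu=True
    ) as prof:
        with torch.no_grad():
            for X_test, y_test, z_test, T_test in batches:
                R_test = model_wrap(model, X_test, y_test, z_test)

    # Now the table only contains the model's ops.
    print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=20))

I also considered wrapping the model call in torch.autograd.profiler.record_function("model_forward") so I can at least pick out its rows in the table, but that still leaves the DataLoader entries in the log. Is there a cleaner, profiler-level way to exclude them?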