Reproducibility when calculating ROC AUC

I am trying to solve a classification task with an LSTM. For reproducibility I set the seed values:

    import os
    import random

    import numpy as np
    import torch

    def seed_everything(seed):
        torch.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        np.random.seed(seed)
        random.seed(seed)
        torch.backends.cudnn.benchmark = False
        torch.backends.cudnn.deterministic = True
        torch.use_deterministic_algorithms(True)
        os.environ['PYTHONHASHSEED'] = str(seed)
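One thing I already picked up from the PyTorch reproducibility notes: on CUDA 10.2+, `torch.use_deterministic_algorithms(True)` also requires the `CUBLAS_WORKSPACE_CONFIG` environment variable to be set before CUDA work starts, so I set it like this (`:4096:8` is one of the two documented values, `:16:8` being the lower-memory alternative):

```python
import os

# Per the PyTorch reproducibility notes, deterministic cuBLAS on CUDA >= 10.2
# needs this set before any CUDA kernels run; ":16:8" is the lower-memory option.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

print(os.environ["CUBLAS_WORKSPACE_CONFIG"])
```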

After each epoch I evaluate the model with the function below:

    import torch
    from torcheval.metrics import BinaryAUROC
    from tqdm import tqdm

    def eval_model(model, dataset_val, cat_features, numeric_features, val_ids=None, batch_size=32, device=None) -> float:
        metric = BinaryAUROC(device=device)  # https://pytorch.org/torcheval/stable/generated/torcheval.metrics.BinaryAUROC.html
        val_generator = batches_generator(dataset_val, cat_features, numeric_features, val_ids, batch_size=batch_size,
                                          shuffle=False, device=device, is_train=True)
        model.to(device)
        model.eval()

        for batch in tqdm(val_generator, desc='Evaluating model'):
            targets = torch.flatten(batch['label'].to(device))
            with torch.no_grad():
                output = torch.flatten(model(batch['features'], batch['group'])).to(device)
            metric.update(output, targets)
        return metric.compute().detach().cpu().numpy().flatten()[0]

During evaluation the following error occurs:

    RuntimeError: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation, or you can use the 'warn_only=True' option, if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.

How can I fix this and still use the metric?
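For context, these are the two workarounds I'm considering, sketched below. `allow_nondeterminism` is my own helper name, and I'm not sure whether relaxing determinism just around the metric is sound. (A third idea would be `metric.update(output.cpu(), targets.cpu())`, so that `cumsum` runs on the CPU, where it is deterministic.)

```python
import torch
from contextlib import contextmanager

# Option 1: downgrade the hard error to a warning globally
# (commented out here so it doesn't affect the state below):
# torch.use_deterministic_algorithms(True, warn_only=True)

# Option 2: relax determinism only around the metric computation,
# restoring the previous flag afterwards.
@contextmanager
def allow_nondeterminism():
    was_enabled = torch.are_deterministic_algorithms_enabled()
    torch.use_deterministic_algorithms(False)
    try:
        yield
    finally:
        torch.use_deterministic_algorithms(was_enabled)

torch.use_deterministic_algorithms(True)
with allow_nondeterminism():
    pass  # metric.update(...) / metric.compute() would go here
print(torch.are_deterministic_algorithms_enabled())  # True is restored
```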

Any other comments on the code are welcome too.