Determinism problem with torch.multinomial on different devices

I observed that torch.multinomial behaves deterministically on the CPU. However, I am unable to reproduce the same results on the GPU, even with a freshly seeded generator for each call. The following script demonstrates the issue I encountered:

import torch
import random
import numpy as np
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'  # force synchronous CUDA kernel launches

seed = 2147483647
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# w1 and w2 refer to the same weight tensor, so both draws use identical inputs
w1 = w2 = torch.rand(3065713)

device = 'cuda'

# first draw: fresh CUDA generator seeded with a fixed value
g = torch.Generator(device=device).manual_seed(2147483647)
random_selection1 = torch.multinomial(w1.to(device), 150, replacement=True, generator=g)
print(random_selection1)

# second draw: identical seed, identical weights
g = torch.Generator(device=device).manual_seed(2147483647)
random_selection2 = torch.multinomial(w2.to(device), 150, replacement=True, generator=g)
print(random_selection2)

# count how many of the 150 sampled indices differ between the two draws
print((random_selection1 != random_selection2).sum())
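For comparison, here is a minimal sketch of the same two draws on the CPU, where the behavior described above holds and both draws match exactly (the variable names are just illustrative):

# Same weights and seed, but on the CPU: the two draws come out identical.
device = 'cpu'
g = torch.Generator(device=device).manual_seed(2147483647)
cpu_selection1 = torch.multinomial(w1.to(device), 150, replacement=True, generator=g)

g = torch.Generator(device=device).manual_seed(2147483647)
cpu_selection2 = torch.multinomial(w2.to(device), 150, replacement=True, generator=g)

# Expected to print tensor(0), i.e. no differing indices on the CPU path.
print((cpu_selection1 != cpu_selection2).sum())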

The issue is not reproducible after setting torch.use_deterministic_algorithms(True), as described in the Reproducibility docs.


When I enable torch.use_deterministic_algorithms(True), I encounter the following error: RuntimeError: cumsum_cuda_kernel does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'.
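For reference, a minimal sketch of the settings the Reproducibility docs describe; the CUBLAS_WORKSPACE_CONFIG value and the warn_only option come from those docs, not from the posts above:

import os
# Required by the docs for some CUDA operations when deterministic mode is on
# (set before the affected CUDA kernels run).
os.environ['CUBLAS_WORKSPACE_CONFIG'] = ':4096:8'

import torch
# Raises a RuntimeError (like the cumsum_cuda_kernel one above) whenever an op
# has no deterministic implementation in the current build.
torch.use_deterministic_algorithms(True)

# Alternatively, emit a warning instead of raising, so the script keeps running:
# torch.use_deterministic_algorithms(True, warn_only=True)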

I don’t see the error using 2.8.0.dev20250404+cu128, so you might need to update your PyTorch binary. Which release are you using?
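In case it helps, one quick way to check the installed release and CUDA build from Python:

import torch
print(torch.__version__)         # e.g. '2.5.0+cu118'
print(torch.version.cuda)        # CUDA version the binary was built against
print(torch.cuda.is_available())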

I am using PyTorch version 2.5.0+cu118.

Hi Ivan!

I can reproduce your issue with PyTorch 2.5.1 (with CUDA 12.4) when I add
torch.use_deterministic_algorithms(True) to the code you posted, but the error goes
away (that is, torch.multinomial() becomes deterministic with no errors raised) when I
use version 2.6.0+cu126 (the current stable release with that choice of CUDA).

Try upgrading to the current stable version (or higher).

Best.

K. Frank