Seeding a torch::Generator in C++

In Python, we create and seed a random generator like this

  gen = torch.Generator()
  gen.manual_seed( 42 )

to be used, for example, in

  x = torch.multinomial( x, num_samples=1, generator=gen )

Now, I would like to do the same thing in C++. So, I tried

  auto gen = torch::Generator();
  gen.manual_seed( 42 );  // Does not exist !!!

but the manual_seed method does not exist.
I found another method called set_current_seed, but it crashes at runtime!

  auto gen = torch::Generator();
  gen.set_current_seed( 42 );  // Crashes !!!

Any ideas?


I don’t know why your code crashes, so you would need to get the stack traces etc., since the same code is used e.g. here with an additional lock.

It appears that in C++ a default-constructed torch::Generator is just an empty wrapper.
The set_current_seed method is defined as

  void set_current_seed(uint64_t seed) { impl_->set_current_seed(seed); }

and of course, when the impl_ pointer is null, the app crashes!
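
A minimal sketch (assuming the stock c10::Generator interface) to avoid the crash is to check defined() before touching the seed:

  #include <torch/torch.h>

  auto gen = torch::Generator();   // default-constructed: impl_ is null
  if (gen.defined()) {             // false here, so the seeding call is skipped
      gen.set_current_seed(42);
  }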

I finally found a default generator equivalent to the Python one.

So, the following code in Python

  gen = torch.Generator().manual_seed( 42 )
  p = torch.rand( 5, generator=gen )
  print(p)
  x = torch.multinomial(p, num_samples=10, replacement=True, generator=gen)
  print(x)

prints exactly the same result as the following C++ code

  auto gen = at::detail::createCPUGenerator( 42 );
  auto p = torch::rand( {5}, gen );
  std::cout << p << std::endl;
  auto x = torch::multinomial( p, /*n_samples*/10, /*replac*/true, gen );
  std::cout << x << std::endl;

Both output

tensor([0.8823, 0.9150, 0.3829, 0.9593, 0.3904])
tensor([3, 1, 0, 3, 2, 1, 2, 0, 2, 3])

Unfortunately, and surprisingly, if the number of samples is 1, the Python and C++ versions do NOT give the same result! To work around that, I request two samples and keep the first one…
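
In code, the workaround looks roughly like this (a sketch of what I described above, not a guaranteed fix):

  // Ask for 2 samples instead of 1, then keep only the first one.
  auto gen = at::detail::createCPUGenerator( 42 );
  auto p = torch::rand( {5}, gen );
  auto x2 = torch::multinomial( p, /*n_samples*/2, /*replac*/true, gen );
  auto x  = x2.slice( 0, 0, 1 );   // same shape as num_samples=1 would give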

I think the way to do this without at::detail:: (anything in a library’s detail:: namespace is likely not under any API stability guarantee) is at::make_generator<Impl>, where for CPU generators at::CPUGeneratorImpl seems to be the type to use. E.g.

auto gen = at::make_generator<at::CPUGeneratorImpl>(/*seed*/);
auto ten = torch::randn({2, 3, 4}, gen);

This seems to work nicely, and for PyTorch 2.4.1+cpu it matches up with the Python way:

ten = torch.randn((2, 3, 4), generator=torch.Generator())
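
For a fixed seed, the same pattern presumably carries over; e.g. with the (hypothetical) seed 42 on both sides:

auto gen = at::make_generator<at::CPUGeneratorImpl>(42);
auto ten = torch::randn({2, 3, 4}, gen);

which should correspond to the Python

ten = torch.randn((2, 3, 4), generator=torch.Generator().manual_seed(42))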