Unable to reproduce the same result with torch::manual_seed

Hello guys, I’m training a DDPG model with the libtorch C++ API.
Before the network training, I’ve called

    int seed = SEED;
    srand(seed);
    torch::manual_seed(seed);

in the main function,
but I still get a different result every time.

Here are my network structures:
Critic:

    net = register_module("Sequential", nn::Sequential(
        nn::Linear(ops.state_dim + ops.action_dim + ops.max_num_contour, 32),
        nn::ReLU(),
        nn::Linear(32, 64),
        nn::ReLU(),
        nn::Linear(64, 32),
        nn::ReLU(),
        nn::Linear(32, 1)
    ));

Actor:

    net = register_module("Sequential", 
        nn::Sequential(
            nn::Linear(ops.state_dim + ops.max_num_contour, 64),
            nn::ReLU(),
            nn::Linear(64, 32),
            nn::ReLU(),
            nn::Linear(32, (ops.action_range.second - 1) + 2)
        )
    );       

Optimizers:

    optimizer_q = std::make_shared<torch::optim::Adam>(net->q->parameters(), lr_q);
    optimizer_pi = std::make_shared<torch::optim::Adam>(net->pi->parameters(), lr_pi);    

Weight initialization:

    {
        torch::NoGradGuard no_grad;
        // Weight init: kaiming_uniform_ draws from torch's global RNG,
        // so it is reproducible once torch::manual_seed has been called.
        auto initialize_weights_norm = [](nn::Module& module) {
            torch::NoGradGuard no_grad;
            if (auto* linear = module.as<nn::Linear>()) {
                torch::nn::init::kaiming_uniform_(linear->weight);
                torch::nn::init::constant_(linear->bias, 0.01);
            }
        };
        this->apply(initialize_weights_norm);
    }
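As a quick sanity check, you can verify that the initialization itself is reproducible. A minimal sketch, assuming your Critic module can be constructed from ops as above (the names here are placeholders):

    // Sketch: seed, construct the model twice, and compare parameters.
    torch::manual_seed(SEED);
    auto net_a = std::make_shared<Critic>(ops);
    torch::manual_seed(SEED);
    auto net_b = std::make_shared<Critic>(ops);

    auto params_a = net_a->parameters();
    auto params_b = net_b->parameters();
    for (size_t i = 0; i < params_a.size(); ++i) {
        // If this fires, the runs already diverge at construction time.
        TORCH_CHECK(torch::equal(params_a[i], params_b[i]));
    }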

And the C library rand() is used for exploration.
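Note that rand() and torch’s RNG are separate streams: srand() only seeds the former, and any other code that happens to call rand() will shift your exploration sequence between runs. A minimal sketch of keeping exploration on its own explicitly seeded generator instead (the names are illustrative):

    #include <random>

    // A dedicated RNG for exploration, so no other rand() caller
    // can perturb the action-noise stream between runs.
    std::mt19937 explore_rng(SEED);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    double eps = unif(explore_rng);  // e.g. an epsilon-greedy draw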

Is it possible that there are other functions or algorithms with nondeterministic behavior that I didn’t notice?

Have you tried using at::globalContext().setDeterministicCuDNN(true)?


Hi, I’ve tried this and the randomness is still there.
Btw, I don’t use the GPU in my code, so I don’t really get where the randomness comes from.

The only other thing I can think of would be to enable deterministic algorithms as well. I achieved deterministic results in my PPO implementation that way. Here is my seeding code:

    int m_seed = 0;
    bool m_torch_deterministic = true;
    srand(m_seed);
    torch::manual_seed(m_seed);
    at::globalContext().setDeterministicCuDNN(m_torch_deterministic);
    // https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
    // The second argument is warn_only: warn instead of erroring out
    // when an op has no deterministic implementation.
    at::globalContext().setDeterministicAlgorithms(m_torch_deterministic, /*warn_only=*/true);

I would still do this even if you don’t use a GPU. Other than that, I would check every aspect of your program through unit testing; it’s possible that the way your code is structured causes reseeding to occur at a point you don’t intend. Please also read Reproducibility — PyTorch 1.12 documentation and see if there is anything there that you have not tried yet. If you are still running into issues, you could share a small working example that I could test out. Let me know!
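A cheap determinism smoke test along those lines: seed, build the model, run one forward pass, then reseed and repeat, and compare the outputs. If they differ, the nondeterminism is already in construction or the forward path rather than in the training loop. A minimal sketch, where Actor, ops, and input_dim are placeholders for your own types and shapes:

    // Sketch of a determinism smoke test for a single forward pass.
    torch::manual_seed(0);
    auto model1 = std::make_shared<Actor>(ops);
    auto out1 = model1->forward(torch::ones({1, input_dim}));

    torch::manual_seed(0);
    auto model2 = std::make_shared<Actor>(ops);
    auto out2 = model2->forward(torch::ones({1, input_dim}));

    // allclose rather than equal, to tolerate benign float reordering.
    TORCH_CHECK(torch::allclose(out1, out2));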
