C++ BatchNorm results differ from Python results

Hi, I ran into a problem while converting code from Python to C++.
The C++ BatchNorm results differ from the Python results.

# Python
class My_model(nn.Module):
    def __init__(self):
        super(My_model, self).__init__()
        self.conv1 = torch.nn.Conv1d(15, 64, 1)
        self.bn1 = nn.BatchNorm1d(64)

    def forward(self, x):
        x = self.conv1(x)        # result is the same in Python and C++
        x = self.bn1(x)          # result is different in Python and C++
        print(x)
        return x
// C++
struct My_model : torch::nn::Module {
    My_model()
        : conv1(torch::nn::Conv1dOptions(15, 64, 1)),
          bn1(torch::nn::BatchNorm1dOptions(64)) {
        // Register the submodules so they appear in named_parameters().
        register_module("conv1", conv1);
        register_module("bn1", bn1);
    }
    torch::Tensor forward(torch::Tensor x) {
        x = conv1->forward(x);   // result is the same in Python and C++
        x = bn1->forward(x);     // result is different in Python and C++
        return x;
    }
    torch::nn::Conv1d conv1;
    torch::nn::BatchNorm1d bn1;
};
My_model model;
model.forward(x);

I checked that the input x is the same in both.
The Python model's weights and biases were copied into the weights and biases of the C++ model.
Is there a solution, or did I make a mistake somewhere?

It’s likely related to the initialization of the conv1 layer. Ensure that the manual seed is the same on both versions if you’re not loading in the weights of a pre-trained network.
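As a minimal sketch of the suggestion above (assuming the weights are *not* loaded from a pre-trained file): re-seeding immediately before construction makes the random initialization reproducible, so two freshly built layers get identical weights.

```python
import torch
import torch.nn as nn

def build_conv():
    # Re-seed right before construction so the layer's random
    # initialization is deterministic.
    torch.manual_seed(0)
    return nn.Conv1d(15, 64, 1)

a = build_conv()
b = build_conv()

# Identical seeds at construction time give identical parameters.
assert torch.equal(a.weight, b.weight)
assert torch.equal(a.bias, b.bias)
```

The same idea applies on the C++ side via torch::manual_seed before the model is constructed.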

Thank you for your reply.
I created a pre-trained model in Python and saved a .pt file containing the model_state_dict.
Then I used the .pt file:

// C++
My_model model = My_model();
auto container = torch::jit::load(".pt path");
for (auto p : model.named_parameters().keys())                 // copy the .pt weights into the model
	model.named_parameters()[p].data() = container.attr(p).toTensor();

I loaded the weights with the code above and verified in C++ that they were loaded…
I also used torch.manual_seed(0) and torch::cuda::manual_seed_all(0),
then repeated the procedure above, but the result was the same (still mismatched).
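One detail worth checking here (my observation, not from the original post): BatchNorm's running_mean and running_var are *buffers*, not parameters, so a copy loop that iterates only over named_parameters() never transfers them. A quick Python check:

```python
import torch.nn as nn

bn = nn.BatchNorm1d(64)
params = dict(bn.named_parameters())
buffers = dict(bn.named_buffers())

# weight and bias are learnable parameters...
assert set(params) == {"weight", "bias"}

# ...but the running statistics live in buffers, alongside
# the batch counter.
assert "running_mean" in buffers
assert "running_var" in buffers
```

On the C++ side, named_buffers() exists as well, so the buffers can be copied in a second loop analogous to the parameter loop above.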

Finally, I found it.
At test (inference) time:
python : nn.BatchNorm1d(64)
c++ : torch::nn::BatchNorm1d(torch::nn::BatchNorm1dOptions(64).track_running_stats(false));
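To illustrate why the running statistics matter (a minimal Python sketch, not from the original post): in training mode BatchNorm normalizes each batch with that batch's own mean and variance, while in eval mode it uses the stored running estimates, so the same input generally produces different outputs in the two modes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(64)
x = torch.randn(8, 64, 10)

bn.train()                 # normalize with the batch's own statistics
y_train = bn(x)

bn.eval()                  # normalize with the stored running estimates
y_eval = bn(x)

# The two modes disagree unless the running stats happen to
# match the batch statistics exactly.
assert not torch.allclose(y_train, y_eval)
```

So if one side effectively uses batch statistics and the other side uses running statistics, the BatchNorm outputs will diverge even when weights, biases, and inputs are identical.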