Have no clue whether BatchNorm1d impacts reproducibility, or maybe it is a bug

Welcome to the forums!

How are you defining GCNConv? It is not clear from your code and this class does not exist natively in PyTorch.

At any rate, I’m not seeing this issue with BatchNorm1d layers followed by ReLU; the output is unchanged across repeated forward passes in eval mode. Here is a simple example:

import torch
import torch.nn as nn

# A small model with BatchNorm1d + ReLU between convolutions
model = nn.Sequential(nn.Conv1d(3, 64, kernel_size=3, padding=1),
                      nn.BatchNorm1d(64), nn.ReLU(),
                      nn.Conv1d(64, 1, kernel_size=3),
                      nn.BatchNorm1d(1), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1))

temp_data = torch.rand((1, 3, 128))

model.eval()  # use running statistics instead of batch statistics
with torch.no_grad():
    for _ in range(5):  # repeated forward passes print identical outputs
        print(model(temp_data))

If you mean you are getting a difference between having .eval() on and off, that is to be expected: in training mode BatchNorm normalizes with the current batch’s statistics, while in eval mode it uses the running statistics accumulated during training. See here and here.
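
To make that point concrete, here is a minimal sketch with a standalone BatchNorm1d layer (the names bn, x, out_train, and out_eval are just illustrative): in training mode the layer normalizes with the current batch’s mean and variance and updates its running estimates, while in eval mode it uses the stored running statistics, so the two outputs will generally differ.

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
x = torch.rand(8, 4)

bn.train()
out_train = bn(x)  # normalized with this batch's mean/var; running stats get updated

bn.eval()
out_eval = bn(x)   # normalized with the stored running mean/var

print(torch.allclose(out_train, out_eval))  # typically False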