Different outputs on the same input to a PyTorch TorchScript model

I am trying to deploy BERT with Elastic Inference on SageMaker, which requires converting the BERT PyTorch model to TorchScript. After converting the model to TorchScript, I get a different output on each run of prediction with the same input.
PyTorch version: 1.3.1
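The conversion itself looks roughly like this (a simplified, runnable sketch: `PlaceholderBert` stands in for my actual fine-tuned BERT classifier, and the shapes and filename are illustrative, not my exact code):

```python
import torch
import torch.nn as nn

# Stand-in for my fine-tuned BERT classifier, just to make the sketch
# runnable; the real model comes from the transformers library and
# produces 19 logits, as in the outputs below.
class PlaceholderBert(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 19)

    def forward(self, ids, mask, token_type_ids):
        # Crude stand-in computation so the trace depends on the input.
        return self.fc(ids.float())

model = PlaceholderBert()
model.eval()

# Dummy inputs matching forward(ids, mask, token_type_ids).
ids = torch.zeros(1, 128, dtype=torch.long)
mask = torch.ones(1, 128, dtype=torch.long)
token_type_ids = torch.zeros(1, 128, dtype=torch.long)

traced = torch.jit.trace(model, (ids, mask, token_type_ids))
traced.save('model.pt')
```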
Here are the model's outputs for three runs on the same input:

1: [[-4.0112, -4.7550, -4.6516, -3.7064, -3.3309, -5.2377, -7.5709, -7.8646,
-9.1199, -8.8557, -6.8730, -2.2870, -2.9781, -4.6967, -5.6849, -5.1054,
-8.1800, -7.7518, -3.2441]]

2: [[-4.2761, -4.5443, -4.6977, -2.6534, -2.8268, -5.3280, -7.1985, -7.1800,
-9.5470, -8.7595, -6.3672, -1.5636, -3.0138, -4.7585, -6.3858, -5.0327,
-8.6203, -8.0351, -3.9619]]

3: [[-4.5412, -4.4034, -4.4401, -2.7413, -3.2421, -6.0487, -7.3131, -7.4065,
-9.4242, -8.2876, -6.2156, -1.1610, -3.7632, -5.3920, -6.9708, -4.7110,
-8.0398, -8.1571, -4.3893]]

The prediction function is as follows:

import os
import sys

import torch

def predict_fn(input_data, model):
    model.eval()
    print('Generating prediction based on input parameters.')
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    try:
        with torch.no_grad():
            # Run the forward pass on the attached Elastic Inference accelerator.
            with torch.jit.optimized_execution(True, {"target_device": "eia:0"}):
                outputs = model(input_data["ids"], input_data["mask"], input_data["token_type_ids"])
    except Exception as ex:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
        raise Exception(f'3- Error in predict_fn {exc_type, fname, exc_tb.tb_lineno, input_data["ids"].shape, " with exception message: ", ex}')
    outputs = torch.sigmoid(outputs).cpu().detach().numpy()
    return outputs

I don’t understand why I am getting this odd behavior from the TorchScript-converted model, whereas the plain PyTorch model (not converted to TorchScript) gives the same output on every run.
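One thing I suspect (an assumption on my part, not confirmed for my model) is that dropout may still be active in the traced graph if the model was traced before calling `eval()`. A self-contained toy repro of that effect, using a small stand-in module instead of BERT:

```python
import torch
import torch.nn as nn

# Toy module with dropout, standing in for the BERT classifier.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.drop = nn.Dropout(p=0.5)
        self.fc = nn.Linear(64, 2)

    def forward(self, x):
        return self.fc(self.drop(x))

x = torch.randn(1, 64)
m = Toy()  # a freshly constructed module is in train() mode

# Traced in train mode: dropout is recorded as active, so every call
# samples a new mask (check_trace=False silences the tracer's
# nondeterminism check for this deliberately nondeterministic trace).
traced_train = torch.jit.trace(m, x, check_trace=False)
a, b = traced_train(x), traced_train(x)
print(torch.equal(a, b))  # almost always False: outputs change per run

# Traced after eval(): dropout is recorded as a no-op, so the traced
# graph is deterministic.
m.eval()
traced_eval = torch.jit.trace(m, x)
c, d = traced_eval(x), traced_eval(x)
print(torch.equal(c, d))  # True
```

If this is what is happening in my case, calling `model.eval()` inside `predict_fn` would be too late, since the dropout behavior was already baked in at trace time.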