Non-deterministic behaviour of Huggingface Model

I am currently working with a model from Huggingface (Donut) and have trained the model on a custom dataset.

During evaluation, I noticed that the predictions are non-deterministic, and I do not understand why that is the case.
In my evaluation loop I call model.eval(), and I double-checked that model.training is False during the test loop.

When I try to enable torch.use_deterministic_algorithms(True), I get a CuBLAS error. However, I do not understand why I would have to set this in the first place, as I assumed the model is deterministic in evaluation mode.
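For reference, this is roughly the setup that raises the error. The CuBLAS error message itself points at the CUBLAS_WORKSPACE_CONFIG environment variable, which the PyTorch reproducibility notes say must be set before any CUDA call:

```python
import os

# The CuBLAS error message suggests setting this environment variable
# (":16:8" is a lower-memory alternative) before any CUDA operation runs;
# without it, deterministic mode raises at runtime on CUDA >= 10.2.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

# Opt in to deterministic implementations; ops without a deterministic
# implementation will raise an error instead of silently being random.
torch.use_deterministic_algorithms(True)
```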

I would appreciate any input and am happy to provide more information as well (this is my first post and I am unfamiliar with what should be included in a question).

There are multiple sources of non-determinism, and the model’s behavior can be one of them: calling model.eval() disables e.g. dropout layers and thus eliminates that source of randomness. But the algorithms themselves can also produce non-deterministic outputs, e.g. when the order of floating-point operations in a kernel isn’t strictly defined. Calling torch.use_deterministic_algorithms(True) eliminates this second source of non-determinism (or raises an error for ops that have no deterministic implementation).
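The dropout part can be seen directly in a minimal sketch. This uses a hypothetical tiny network in place of the trained Donut model, just to show that eval mode makes repeated forward passes on the same input agree:

```python
import torch

torch.manual_seed(0)  # fix weight initialization so the sketch is reproducible

# Hypothetical stand-in for the trained model: a tiny net containing
# a Dropout layer, i.e. the kind of layer model.eval() switches off.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Dropout(p=0.5),
    torch.nn.Linear(8, 2),
)
x = torch.randn(1, 4)

model.eval()  # disables dropout; model.training is now False
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# With dropout disabled and no non-deterministic kernels involved,
# the two forward passes produce bitwise-identical outputs.
assert torch.equal(out1, out2)
```

Note that on GPU this alone is not always enough: some CUDA kernels remain non-deterministic in eval mode, which is exactly what torch.use_deterministic_algorithms(True) is for.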