Bus error when fine-tuning Whisper model on multi-GPU instances

Hi, I am trying to fine-tune Whisper according to the blog post here. The fine-tuning works great on a single GPU; however, it fails on multi-GPU instances. While executing trainer.train(), the multi-GPU instances return Bus error (core dumped).

I am working on a g5.12xlarge instance on AWS for multi-GPU training, with AMI ID ami-071323fe2bf59945b on Ubuntu. I would appreciate any guidance or suggestions to resolve this issue.

Do you have a code snippet to share?
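Sure. The code follows the blog post closely; below is a minimal sketch of the setup (the openai/whisper-small checkpoint, the Common Voice dataset, and the hyperparameters are the blog's defaults and may not match my exact values):

```python
from dataclasses import dataclass
from datasets import load_dataset, Audio
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Model and processor, as in the blog post
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="Hindi", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []

# The blog's example dataset, resampled to 16 kHz for the feature extractor
ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
    processor: WhisperProcessor
    decoder_start_token_id: int

    def __call__(self, features):
        # Pad the log-mel input features and the tokenized labels separately
        input_features = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
        # Replace padding with -100 so it is ignored by the loss
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        # Drop the decoder start token if the tokenizer already added it
        if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
            labels = labels[:, 1:]
        batch["labels"] = labels
        return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetuned",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=4000,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=ds,
    data_collator=DataCollatorSpeechSeq2SeqWithPadding(
        processor=processor, decoder_start_token_id=model.config.decoder_start_token_id
    ),
    tokenizer=processor.feature_extractor,
)

trainer.train()  # "Bus error (core dumped)" happens here on the multi-GPU instance
```

The same script trains without problems on a single-GPU instance; the bus error only shows up on the g5.12xlarge.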