RuntimeError: Broken pipe using NVIDIA Megatron-LM

Hello!
I’m experimenting with distributed training using the NVIDIA Megatron-LM project, and I get an error when running the script bash scripts/pretrain_gpt2_model_parallel.sh

The traceback looks like this:

File "pretrain_gpt2.py", line 625, in <module>
    main()
  File "pretrain_gpt2.py", line 569, in main
    args.eod_token = get_train_val_test_data(args)
  File "pretrain_gpt2.py", line 536, in get_train_val_test_data
    group=mpu.get_model_parallel_group())
  File "/home/ubuntu/Env/ml/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 810, in broadcast
    work = group.broadcast([tensor], opts)
RuntimeError: Broken pipe
(the same traceback is printed by each of the other model-parallel worker processes)

The error occurs in the file pretrain_gpt2.py
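For context, a "Broken pipe" raised from group.broadcast(...) usually means a peer process in the group died earlier (for example while rank 0 was building the datasets), so the real error is often printed further up in the log. The failing call follows roughly this pattern — a minimal single-process sketch using the gloo backend; the tensor contents and names here are illustrative, not Megatron-LM's actual code:

```python
import os
import torch
import torch.distributed as dist

# Single-process group, just to illustrate the broadcast call that fails.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

# Rank 0 computes the train/val/test sizes and broadcasts them to the
# group; if any rank already crashed before reaching this point, the
# surviving ranks see "RuntimeError: Broken pipe" here.
sizes = torch.tensor([100, 10, 10], dtype=torch.long)
dist.broadcast(sizes, src=0)
print(sizes.tolist())

dist.destroy_process_group()
```

With more than one process, every rank must reach the broadcast; checking the log of each rank separately usually reveals which one exited first and why.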

Could anybody help me with this issue?

I would recommend creating an issue in the GitHub repository directly, as the authors of the code might be able to help there.

I did, but unfortunately I didn’t get an answer.
The error traceback refers to lib/python3.6/site-packages/torch/distributed/distributed_c10d.py. That’s why I thought I might be able to get a hint here.

I assume you’ve created this issue?
It looks like your datasets are empty and the actual error message is:

TypeError: iteration over a 0-d array

when calculating the dataset lengths.
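For reference, that TypeError is what NumPy raises when code tries to iterate a 0-d array, which is what you can end up with when the loaded data is empty or collapses to a scalar. A quick way to reproduce it and sanity-check the loaded data (a hedged sketch, not the actual Megatron-LM code):

```python
import numpy as np

# A 0-d array is what you get for a scalar, e.g. when a dataset file
# is empty or parsed into a single value instead of a list of samples.
scalar = np.array(0)

try:
    for _ in scalar:  # iterating a 0-d array raises TypeError
        pass
except TypeError as e:
    print(e)  # iteration over a 0-d array

# A quick sanity check before computing dataset lengths:
docs = np.array([])  # stands in for the loaded dataset
assert docs.ndim >= 1, "dataset array is 0-d; check the data path/files"
print(len(docs))  # 0 here means the dataset is empty
```

So the thing to verify is that the data path in the script points at non-empty, correctly preprocessed files.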
I’ll also post in the issue directly.

@ptrblck That’s a great idea, I’ll check it out. Thank you!