Torch elastic scale-down doesn't work

I start the training using the following command:

torchrun \
    --nnodes=1:3 \
    --nproc_per_node=2 \
    --max_restarts=3 \
    --rdzv_id=1 \
    --rdzv_backend=c10d \
    --rdzv_endpoint="$endpoint_ip:1234" \
    train_elastic.py

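For context, train_elastic.py follows the usual elastic pattern: read the environment variables torchrun exports, initialize the process group, and checkpoint periodically so a restarted worker can resume. A simplified sketch of that structure (the real script consumes batches from Kafka; the dummy model, loss, and checkpoint path below are placeholders):

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

CKPT = "/tmp/elastic_ckpt.pt"  # placeholder; multi-node needs shared storage

def main():
    # torchrun exports LOCAL_RANK/RANK/WORLD_SIZE for every worker it spawns,
    # and re-exports them after every re-rendezvous/restart.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = DDP(nn.Linear(10, 10).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Resume from the last checkpoint so a restart after re-rendezvous
    # does not lose progress.
    start = 0
    if os.path.exists(CKPT):
        state = torch.load(CKPT, map_location="cuda")
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        start = state["step"] + 1

    for step in range(start, 1000):
        opt.zero_grad()
        # dummy batch; the real script would pull a batch from Kafka here
        loss = model(torch.randn(8, 10, device=local_rank)).sum()
        loss.backward()
        opt.step()
        if step % 100 == 0 and dist.get_rank() == 0:
            torch.save({"model": model.state_dict(),
                        "opt": opt.state_dict(),
                        "step": step}, CKPT)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
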
I can start one node first and then start a second one, and training scales up normally.
However, when I kill the worker node's process with Ctrl+C, the node where the master is located gets stuck instead of starting a new rendezvous.

I want to understand what happens when an elastic agent is killed, and how to make the other agents aware of it so that they initiate a new round of rendezvous.
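
My current understanding (please correct me if I'm wrong) is that the surviving workers only notice a missing peer when a collective call times out, and the default NCCL timeout is long (30 minutes), which could explain why everything just looks stuck. Would passing a shorter timeout, as in the snippet below, make the pending collective raise so the worker exits and the agent starts a new rendezvous? The one-minute value is just an example:

from datetime import timedelta
import torch.distributed as dist

# Assumption: with NCCL async error handling enabled (torchrun enables it by
# default), a collective that exceeds this timeout raises an exception instead
# of hanging; the worker then exits and the local agent triggers a re-rendezvous.
dist.init_process_group(backend="nccl", timeout=timedelta(minutes=1))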

(By the way, the training data is read from a Kafka stream, but I don't think the problem lies there.)