Clip gradient norm from chatbot tutorial

I’m looking at a line of code from the chatbot tutorial. The tutorial is here:

https://pytorch.org/tutorials/beginner/chatbot_tutorial.html#single-training-iteration

The code clips the gradient norm. How did the author come up with the clip value of 50? Below is some code that mimics the code in the tutorial:

        import torch

        clip = 50.0
        # caps the total norm of all gradients at `clip`; the pre-clip norm it returns is discarded here
        _ = torch.nn.utils.clip_grad_norm_(self.model_0_wra.parameters(), clip)

Thanks for any help.

The author probably got the value by training the model and watching the gradient norms, e.g. in TensorBoard. Another option might have been hyperparameter tuning. They might also have taken an arbitrary value seen in other people's code.
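
For what it's worth, you can check this empirically on your own model: `clip_grad_norm_` returns the total gradient norm computed *before* clipping, so logging it over some training iterations shows what magnitudes actually occur and whether a threshold of 50 ever kicks in. A minimal sketch (the tiny `Linear` model and fake loss below are just stand-ins for your own training step):

        import torch

        # Stand-in model and loss; substitute your own. Names are illustrative only.
        model = torch.nn.Linear(10, 2)
        loss = model(torch.randn(4, 10)).sum()
        loss.backward()

        # clip_grad_norm_ returns the total norm of the gradients measured
        # before clipping, so you can log it to see typical magnitudes.
        total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), 50.0)
        print(f"pre-clip gradient norm: {total_norm:.3f}")

If the logged norm almost never reaches the threshold, the clip is acting as a safety net against occasional exploding-gradient spikes rather than as a carefully tuned value.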