How to implement reusable dropout for FreeLB in PyTorch?

I’m trying to implement the FreeLB adversarial training strategy in my custom model. As mentioned in the paper, reusing the dropout mask can be quite beneficial for adversarial training: you reuse, or in other words “freeze”, every dropout mask in the model while doing the gradient ascent steps that generate the perturbation. But their source code implements this with PyTorch 1.4 and is quite complicated, and I got stuck :sleepy:. Is there any quick solution for this today in PyTorch 1.8?
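
For concreteness, here is a minimal sketch of what I mean by “freezing” the mask: a dropout layer that caches its mask until you explicitly clear it (`ReusableDropout` and `refresh()` are placeholder names I made up, not anything from `torch.nn`):

```python
import torch
import torch.nn as nn

class ReusableDropout(nn.Module):
    """Inverted dropout that can freeze its mask across several forward passes.

    Expects 0 <= p < 1. Call refresh() before the clean forward pass so a new
    mask is drawn, then leave the cached mask in place for the ascent steps.
    """
    def __init__(self, p=0.1):
        super().__init__()
        self.p = p
        self.mask = None

    def refresh(self):
        # Drop the cached mask; the next forward pass samples a fresh one.
        self.mask = None

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x
        # Re-sample only if there is no cached mask or the shape changed
        # (e.g. a different batch size).
        if self.mask is None or self.mask.shape != x.shape:
            keep = 1.0 - self.p
            # Inverted dropout: scale by 1/keep so no rescaling is needed at eval time.
            self.mask = torch.bernoulli(torch.full_like(x, keep)) / keep
        return x * self.mask
```

Swapping something like this into every dropout of a big pretrained model is exactly the part that looks painful, hence the question.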

Just found out that torch.manual_seed can help: if you reseed the global RNG with the same seed before every forward pass, dropout will sample identical masks across the ascent steps, so nothing inside the model needs to change.
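
A minimal sketch of how that could look, assuming the model takes embeddings directly and ignoring the rest of FreeLB (no epsilon-ball projection, no gradient averaging); `freelb_inner_loop`, `embeds`, `labels`, `loss_fn`, `adv_lr`, and `adv_steps` are all placeholders for your own training loop:

```python
import torch

def freelb_inner_loop(model, embeds, labels, loss_fn, adv_lr=1e-1, adv_steps=3):
    # Draw a fresh random seed and remember it for this inner loop.
    seed = torch.seed()
    delta = torch.zeros_like(embeds, requires_grad=True)

    for _ in range(adv_steps):
        # Reseeding with the same seed before each forward pass makes
        # every nn.Dropout draw the same masks as in the previous step.
        torch.manual_seed(seed)
        loss = loss_fn(model(embeds + delta), labels)
        # Gradients w.r.t. delta (and the model parameters, which FreeLB
        # accumulates over the ascent steps anyway).
        loss.backward()
        with torch.no_grad():
            # Normalized gradient-ascent step on the perturbation.
            delta += adv_lr * delta.grad / (delta.grad.norm() + 1e-12)
        delta.grad.zero_()

    return delta.detach()
```

Two caveats: reseeding the global RNG repeats *all* randomness in the forward pass, not just dropout, and torch.get_rng_state() / torch.set_rng_state() would work just as well if you prefer to snapshot and restore the RNG state instead of fixing a seed.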