In essence, the problem is how to reproduce a PyTorch version of MI-DI2-FGSM. The original code for MI-DI2-FGSM is written in TensorFlow, and I want to implement it in PyTorch.

The difference between MI-FGSM and MI-DI2-FGSM is that MI-DI2-FGSM adds a data-augmentation step: in MI-FGSM the gradient is computed on the original image, whereas in MI-DI2-FGSM it is computed on a randomly transformed copy of the image. Crucially, the random transformation is applied on the fly only at the moment the gradient is computed; at all other times the original image is left untransformed.

I have already reproduced MI-FGSM in PyTorch, but I ran into difficulty when adding the data augmentation. In PyTorch, augmentation is usually done in the dataset pipeline, e.g.:

transform = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.0,), (1.0,))])
dataset = datasets.MNIST(root='./data', train=True, transform=transform, download=True)

That is, the augmentation step has to come before transforms.ToTensor(). With this approach, however, the random transformation is applied not only when computing the gradient but at every other point as well. The result is that the generated adversarial example = transformed image + perturbation, which is wrong. I would appreciate any ideas on how to solve this problem. Thank you.
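To make the intended behavior concrete, here is a minimal sketch of the structure described above: the random transformation is implemented as a tensor-level function and called only inside the gradient step of the MI-FGSM loop, so the dataset pipeline (and the returned adversarial example) never sees it. The function names (`input_diversity`, `mi_di_fgsm`) and the resize/pad parameters are illustrative assumptions, not taken from the original MI-DI2-FGSM code.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, low=24, high=29, prob=0.5):
    # Illustrative random transform applied on tensors: with probability
    # `prob`, resize the batch to a random size in [low, high) and pad it
    # back up to `high`; otherwise return the input unchanged.
    if torch.rand(1).item() >= prob:
        return x
    rnd = torch.randint(low, high, (1,)).item()
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad = high - rnd
    pad_left = torch.randint(0, pad + 1, (1,)).item()
    pad_top = torch.randint(0, pad + 1, (1,)).item()
    # F.pad order for 4D tensors: (left, right, top, bottom)
    return F.pad(resized, (pad_left, pad - pad_left, pad_top, pad - pad_top))

def mi_di_fgsm(model, x, y, eps=0.1, steps=10, mu=1.0):
    # MI-FGSM loop; the ONLY difference from plain MI-FGSM is that the
    # model sees input_diversity(x_adv) when the gradient is computed.
    # x_adv itself is never transformed, so the returned adversarial
    # example is the original image plus the accumulated perturbation.
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum update with L1-normalized gradient, as in MI-FGSM
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

The key design point is that the transformation lives inside the attack loop rather than in `transforms.Compose`, so the DataLoader can keep serving untransformed tensors and the perturbation stays attached to the original image.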