Hello, I’m new to PyTorch and I’ve been trying to implement the Fast Gradient Sign Method (FGSM) to test a model’s robustness to adversarial perturbations. However, when I take the gradient of the loss with respect to the input, I get a zero tensor, so the FGSM perturbation has no effect no matter how large an epsilon I use.

The model I’m using is this one:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class L2Net(nn.Module):
    def __init__(self, hidden, beta, insize=784, outsize=10):
        super(L2Net, self).__init__()
        self.fc1 = nn.Linear(insize, hidden, bias=False)
        self.fc2 = nn.Linear(hidden, outsize, bias=False)
        self.beta = beta

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x = torch.tanh(self.beta * self.fc2(x))
        x = F.log_softmax(x, dim=-1)
        return x
```
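For context, here is a minimal sketch of the FGSM step I’m attempting. The `fgsm_attack` helper name and the NLL loss are just illustrative; the key point is that the input tensor itself must have `requires_grad` set before the forward pass, otherwise `x.grad` stays `None` or zero:

```
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, target, epsilon):
    # Detach from any previous graph and mark the *input* as requiring grad,
    # since FGSM differentiates the loss with respect to the input, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = F.nll_loss(model(x), target)  # model returns log-probabilities
    loss.backward()
    # Perturb each pixel in the direction of the sign of its gradient.
    return (x + epsilon * x.grad.sign()).detach()
```
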

I’ve tried MSE, NLL, and cross-entropy as the loss.

I am working with the MNIST dataset and using 2,000 units in the hidden layer.

Any help would be appreciated.