Why do the generated samples have `requires_grad = False`?

This is my simple model:

import numpy as np
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.latent_dim = 2
        self.model = nn.Linear(self.latent_dim, 2)

    def forward(self, z):
        return self.model(z)

    def generate_samples(self, N_samples):
        z = torch.tensor(np.random.normal(0, 1, (N_samples, self.latent_dim))).float()
        samples = self(z)
        return z

The following code returns False:

G = Generator()
G.generate_samples(20).requires_grad

What’s the problem here?

Add requires_grad=True as a kwarg when creating the tensor; tensors do not require gradients by default. See torch.tensor — PyTorch master documentation.
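
For example, a minimal sketch of the default behavior versus the explicit opt-in:

import torch

a = torch.tensor([1.0, 2.0])
print(a.requires_grad)  # False: tensors do not track gradients by default

b = torch.tensor([1.0, 2.0], requires_grad=True)
print(b.requires_grad)  # True: autograd tracking enabled explicitly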


I do not want gradients w.r.t. the noise tensor z but w.r.t. my linear layer self.model.

You are returning z, which is your input, and as @mruberry mentioned, if you need grads for your inputs you have to enable them explicitly. But you said you only need grads w.r.t. the model parameters (not the input). In that case, you should return samples instead of z: samples is the output of self.model, so it is attached to the autograd graph and will have requires_grad=True. You are simply returning the wrong tensor; see the sketch below.
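
A minimal sketch of the corrected method (only the return value changes):

    def generate_samples(self, N_samples):
        z = torch.tensor(np.random.normal(0, 1, (N_samples, self.latent_dim))).float()
        samples = self(z)  # output of self.model, attached to the autograd graph
        return samples     # requires_grad is True because the layer's weights require grad

G = Generator()
print(G.generate_samples(20).requires_grad)  # True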

PS: You can use torch.normal instead of NumPy, which also avoids the extra copy and the .float() cast.
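
For instance, a sketch of the sampling line without the NumPy round-trip:

z = torch.normal(0.0, 1.0, size=(N_samples, self.latent_dim))
# or, equivalently for a standard normal distribution:
z = torch.randn(N_samples, self.latent_dim)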

Best
