I have a problem getting the gradient of the output with respect to the input. It is a simple MNIST model.
I already set requires_grad on the input,
but it doesn't work:
sample_img.grad is None.

    # take one test image and its label
    sample_img, sample_label = mnist_test[0]

    # move to the device first, then enable gradient tracking on the (leaf) input
    sample_img = sample_img.to(device)
    sample_img.requires_grad = True

    # forward pass and loss
    prediction = model(sample_img.unsqueeze(dim=0))
    cost = criterion(prediction, torch.tensor([sample_label]).to(device))
    optimizer.zero_grad()
    cost.backward()

    print(sample_label)
    print(sample_img.shape)

    plt.imshow(sample_img.detach().cpu().squeeze(), cmap='gray')
    plt.show()

    # expected: the gradient of the loss w.r.t. the input image, but this prints None
    print(sample_img.grad)
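
As a side note, the same gradient can also be requested directly with torch.autograd.grad. The snippet below is just a minimal sketch that reuses the model, criterion, device, sample_img and sample_label from the code above; unlike reading sample_img.grad, it raises an error instead of silently giving None when the input is not part of the graph:

    # redo the forward pass, then ask autograd for d(cost)/d(sample_img) directly
    prediction = model(sample_img.unsqueeze(dim=0))
    cost = criterion(prediction, torch.tensor([sample_label]).to(device))
    grad_img, = torch.autograd.grad(cost, sample_img)
    print(grad_img.shape)  # should have the same shape as sample_img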

Thank you!

Can anyone please help me?

I have no problem with the following code:

    import torch
    import torch.nn as nn

    # a minimal stand-in model, optimizer and loss
    model = nn.Sequential(nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), 0.03)
    criterion = nn.CrossEntropyLoss()

    # a random "image" and a dummy label
    sample_img, sample_label = torch.randn(1, 28, 28), 0

    sample_img.requires_grad = True
    prediction = model(sample_img.view(-1).unsqueeze(dim=0))
    cost = criterion(prediction, torch.tensor([sample_label]))
    optimizer.zero_grad()
    cost.backward()
    print(sample_label)
    #print(sample_img.shape)

    # here the input gradient is populated as expected
    print(sample_img.grad)

Maybe it's your model that's causing the problem?
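
For reference, here is a minimal sketch of the same check on the device, using a hypothetical stand-in model and criterion (not your actual ones). The two things worth verifying are that requires_grad is set on the tensor that actually goes into the model (i.e. after .to(device), so it is still a leaf tensor) and that the forward pass is not wrapped in torch.no_grad():

    import torch
    import torch.nn as nn

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # hypothetical stand-ins for the model and criterion from the question
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).to(device)
    criterion = nn.CrossEntropyLoss()

    sample_img, sample_label = torch.randn(1, 28, 28), 0

    # move to the device first, then enable grad tracking on the moved (leaf) tensor
    sample_img = sample_img.to(device)
    sample_img.requires_grad_()

    prediction = model(sample_img.unsqueeze(dim=0))
    cost = criterion(prediction, torch.tensor([sample_label]).to(device))
    cost.backward()

    print(sample_img.grad)  # a (1, 28, 28) tensor, not None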

I solved the problem.

I don't know why it didn't work before,
but now it works... haha.
Thank you!!