None grads after backward()

When I run this code:

```python
# Build a feed-forward network
model = nn.Sequential(nn.Linear(2352, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10),
                      nn.LogSoftmax(dim=1))

criterion = nn.CrossEntropyLoss()
dataiter = iter(trainloader)
images, labels = next(dataiter)
images = images.view(images.shape[0], -1)

# Forward pass, get our log-probabilities
logits = model(images)
logits = Variable(torch.randn(10, 120).float(), requires_grad=True)
labels = Variable(torch.FloatTensor(10).uniform_(0, 120).long())

# Calculate the loss with the logits and the labels
loss = criterion(logits, labels.squeeze())
```

then:

```python
print('Before backward pass: \n', model[0].weight.grad)
loss.backward()
print('After backward pass: \n', model[0].weight.grad)
```

I get:

```
Before backward pass:
 None
After backward pass:
 None
```

Why am I getting None grads after backward()?
Could anyone help me, please?

You are detaching the output of the model by re-wrapping it in a new Variable containing a new, randomly initialized tensor. Remove that line and it should work. Also note that Variables have been deprecated since PyTorch 0.4.
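A minimal sketch of that point (toy sizes and names chosen for illustration): a fresh random tensor has no connection to the model in the autograd graph, so backing up through a loss computed on it never reaches the model's parameters.

```python
import torch
import torch.nn as nn

# Toy model and batch; the sizes here are illustrative
model = nn.Sequential(nn.Linear(4, 3))
criterion = nn.CrossEntropyLoss()
x = torch.randn(2, 4)
labels = torch.tensor([0, 2])

# Detached path: a brand-new random tensor has no link back to the model,
# so backward() leaves the model's parameter grads untouched
detached = torch.randn(2, 3, requires_grad=True)
criterion(detached, labels).backward()
print(model[0].weight.grad)  # None

# Connected path: use the model's own output
logits = model(x)
criterion(logits, labels).backward()
print(model[0].weight.grad is not None)  # True
```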

Thanks for the answer, ptrblck.
But when I remove that line, I get an error:

```
RuntimeError: 0D or 1D target tensor expected, multi-target not supported
```

These are my lines:

```python
logps = model(images)
print(images.shape)
print(logps.shape)
loss = criterion(logps, labels)
```

Output:

```
images.shape --> torch.Size([64, 2352])
logps.shape  --> torch.Size([64, 10])
```
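For the record, `nn.CrossEntropyLoss` with class-index targets expects a 1D tensor of shape `[batch]` with dtype `long`; a `[batch, 1]` column (or a float tensor) produces errors like the one above. A hedged sketch, with the shapes chosen to match the output above:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logps = torch.randn(64, 10, requires_grad=True)  # stand-in for the model output

labels = torch.randint(0, 10, (64, 1)).float()  # wrong: 2D shape and float dtype
labels = labels.squeeze(1).long()               # right: shape [64], dtype long
loss = criterion(logps, labels)
print(loss.dim())  # 0 -- a scalar loss
```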

I fixed it this way:

```python
logps = model(images)
labels = labels.type(torch.LongTensor)
loss = criterion(logps, labels.squeeze())
```
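Putting both fixes together, here is a self-contained version of the original snippet, with random data standing in for the `trainloader` batch: the model output is not re-wrapped, the labels are 1D long class indices, and the trailing `LogSoftmax` is dropped because `CrossEntropyLoss` already applies log-softmax internally. The gradients are now populated after `backward()`.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2352, 128),
                      nn.ReLU(),
                      nn.Linear(128, 64),
                      nn.ReLU(),
                      nn.Linear(64, 10))  # no LogSoftmax with CrossEntropyLoss
criterion = nn.CrossEntropyLoss()

images = torch.randn(64, 2352)        # stand-in for a flattened trainloader batch
labels = torch.randint(0, 10, (64,))  # 1D long class indices

logps = model(images)                 # keep the autograd graph intact
loss = criterion(logps, labels)

print('Before backward pass:\n', model[0].weight.grad)  # None
loss.backward()
print('After backward pass, grad is populated:', model[0].weight.grad is not None)
```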

Thanks for your help…