Parameters not updating after an indexing operation

Greetings,

I use a multivariate function as part of my model's forward pass. While debugging the forward pass to check parameter updates, I found that after one specific operation (shown below) I no longer get gradients.
Can someone tell me why this happens and point me in the right direction?

Code in which the grads are zero:
In the code below, b is a tensor produced by a function that takes in the nn.Parameters of an nn.Module. While debugging, I could see that b's requires_grad flag is True, since b depends on those nn.Parameters.

a = torch.empty(leftImagetensor.shape, dtype=torch.float).to(device)  # uninitialized output buffer
x = b[:, 0].long()  # x indices (column 0 of b)
y = b[:, 1].long()  # y indices (column 1 of b)
a[y, x] = b[:, 3] * 256.0  # scatter the values from column 3 into the buffer
return a  # returning a here leads to zero grads and hence no parameter update
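
For reference, here is a minimal standalone sketch of the same pattern that I used to check whether the index assignment itself keeps the autograd graph alive. The shapes and the way b is built here are made up for illustration; only the a[y, x] = b[:, 3] * 256.0 step mirrors my real code:

import torch

w = torch.nn.Parameter(torch.randn(4))  # stands in for the real nn.Parameters
b = torch.stack([torch.arange(4.0),     # column 0: x indices
                 torch.arange(4.0),     # column 1: y indices
                 torch.zeros(4),        # column 2: unused here
                 w * 2.0], dim=1)       # column 3: depends on the parameter

a = torch.empty(8, 8)
x = b[:, 0].long()
y = b[:, 1].long()
a[y, x] = b[:, 3] * 256.0

print(b.requires_grad)  # True, as in my real code
print(a.grad_fn)        # non-None means a is still connected to the graph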

Working version of the code, for which the parameter update does take place:

return b  # returning b here and skipping the lines below yields grads and parameter updates
a = torch.empty(leftImagetensor.shape, dtype=torch.float).to(device)
x = b[:, 0].long()  # x indices (column 0 of b)
y = b[:, 1].long()  # y indices (column 1 of b)
a[y, x] = b[:, 3] * 256.0
return a
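
To narrow this down, one can print how each tensor connects to the autograd graph right before returning (a small debugging sketch, nothing model-specific):

print("b.grad_fn:", b.grad_fn)              # non-None if b is graph-connected
print("a.requires_grad:", a.requires_grad)  # should be True after the index assignment
print("a.grad_fn:", a.grad_fn)              # e.g. an IndexPutBackward0 node if the graph survived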

Train script

import torch
from torch import optim

model = Model().to(device)  # Model and device are defined earlier in my script
print(model.state_dict())

lr = 1e-5
n_epochs = 5

optimiser = optim.SGD(model.parameters(), lr=lr)
L1 = torch.nn.L1Loss()
for epoch in range(n_epochs):
    model.train()

    yhat = model(left_imgtensor, Right_imgtensor, VeloScan, R_0, t_0, K_00, R_cam1_cam0, t_cam1_cam0, K_01)
    
    loss = L1(yhat, y_target)  # both yhat and y_target live on the CUDA device
    print(loss)
    loss.backward()
    for p in model.parameters():
        if p.grad is not None:
            print(p.grad)  # .grad.data is deprecated; print the grad tensor directly
    optimiser.step()
    optimiser.zero_grad()
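
To distinguish "no grad at all" from "grad that is exactly zero", here is a quick check one can add after loss.backward() (named_parameters just gives nicer labels than the loop above):

for name, p in model.named_parameters():
    if p.grad is None:
        print(name, "has no grad (disconnected from the graph)")
    else:
        print(name, "grad norm:", p.grad.norm().item())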