Performing in-place operations with autograd

I have been trying to perform in-place operations on tensors and am facing an issue. When I do:

import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50()
k = torch.sigmoid(model(torch.rand(1, 3, 64, 64)))
k[k >= 0.5] = 0.9
loss = nn.BCELoss()(k, torch.rand(1, 1000))  # target matches the (1, 1000) output
loss.backward()

I get an error that my gradients cannot be calculated. However, when I do:

model = torchvision.models.resnet50()
k = model(torch.rand(1, 3, 64, 64))
k[k >= 0.5] = 0.9
k_out = torch.sigmoid(k)
loss = nn.BCELoss()(k_out, torch.rand(1, 1000))
loss.backward()

the gradient calculation goes through fine. I am confused about why this happens.

Autograd functions save either their input or their output tensor for the backward pass, somewhat arbitrarily (since sigmoid is invertible, either one would do). Sigmoid saves its output; I think the simpler derivative formula was the deciding factor here, since sigmoid'(x) = y * (1 - y) with y = sigmoid(x). In your first snippet, k[k >= 0.5] = 0.9 overwrites exactly that saved output in place, so autograd can no longer compute the gradient and raises an error. In the second snippet the in-place write hits the output of the model's final Linear layer instead, and Linear's backward needs only its input and weight, not its output, so nothing autograd saved is disturbed.
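
Here is a minimal sketch of the mechanism with the model stripped away (the tensor names are illustrative, and ._version is an internal attribute inspected only to show what autograd checks): every tensor carries a version counter that autograd compares against the version recorded when the tensor was saved for backward.

import torch

x = torch.rand(5, requires_grad=True)
y = torch.sigmoid(x)   # sigmoid saves y for backward: dy/dx = y * (1 - y)
print(y._version)      # 0
y[y >= 0.5] = 0.9      # in-place write bumps the version counter
print(y._version)      # 1, no longer the version saved for backward
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)           # "... modified by an inplace operation ..."

If you want to keep the thresholding before the loss, an out-of-place replacement such as torch.where avoids the problem, because the tensor sigmoid saved is never modified:

x = torch.rand(5, requires_grad=True)
y = torch.sigmoid(x)
y = torch.where(y >= 0.5, torch.full_like(y, 0.9), y)  # allocates a new tensor
y.sum().backward()  # works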