Detach() followed by requires_grad_()?

Recently I have been reading a book called Deep Learning with PyTorch, and there is a line of code that I don't quite get. Here is the training function:

def training_loop(n_epochs, learning_rate, params, t_u, t_c):
    for epoch in range(1, n_epochs + 1):
        if params.grad is not None:
            params.grad.zero_()
        t_p = model(t_u, *params)
        loss = loss_fn(t_p, t_c)
        loss.backward()
        params = (params - learning_rate * params.grad).detach().requires_grad_()

        if epoch % 500 == 0:
            print('Epoch %d, Loss %f' % (epoch, float(loss)))
    return params
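
For context, model and loss_fn are defined earlier in the chapter; if I recall correctly, they are roughly:

def model(t_u, w, b):
    return w * t_u + b

def loss_fn(t_p, t_c):
    squared_diffs = (t_p - t_c)**2
    return squared_diffs.mean()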

I was confused by the line params = (params - learning_rate * params.grad).detach().requires_grad_(). Why do we need to call .detach() first and then .requires_grad_() again?

The result of params - learning_rate * params.grad is not a leaf tensor: it is produced by an operation that autograd records, so it carries the history of the update (and, through params, of every previous iteration). Calling .detach() cuts that history, giving a leaf tensor that shares the same data but is no longer tracked, so the graph does not keep growing from one epoch to the next. A detached tensor, however, has requires_grad=False, so .requires_grad_() is needed to turn gradient tracking back on. Together they create a leaf tensor that requires grad, which means that if it is used in computations and we then call .backward(), its .grad field will be populated.
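
As a quick illustration (a minimal sketch outside the book, with made-up numbers), you can watch what each call does to is_leaf, grad_fn, and requires_grad:

import torch

params = torch.tensor([1.0, 0.0], requires_grad=True)
t_u = torch.tensor([2.0, 3.0])
t_c = torch.tensor([1.0, 2.0])

# A forward and backward pass populates params.grad.
t_p = params[0] * t_u + params[1]
loss = ((t_p - t_c) ** 2).mean()
loss.backward()

# The update expression lives in the autograd graph, so it is NOT a leaf.
updated = params - 1e-2 * params.grad
print(updated.is_leaf, updated.grad_fn is not None)   # False True

# .detach() cuts it out of the graph: a leaf again, but with
# requires_grad=False, so backward() would not fill its .grad.
detached = updated.detach()
print(detached.is_leaf, detached.requires_grad)       # True False

# .requires_grad_() switches tracking back on in place, so we end up
# with a fresh leaf whose .grad gets populated on the next backward().
params = detached.requires_grad_()
print(params.is_leaf, params.requires_grad)           # True True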