For example, suppose lambda_ starts at 10 and, after some number of epochs (let's say 50), I want to linearly decay lambda_ towards a small value such as 1e-5.
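Something like this minimal sketch is what I have in mind (the names `lambda_start`, `lambda_end`, and `decay_epochs` are just placeholders for illustration):

```python
def lambda_schedule(epoch, lambda_start=10.0, lambda_end=1e-5, decay_epochs=50):
    # Linearly interpolate from lambda_start down to lambda_end over decay_epochs epochs,
    # then hold at lambda_end afterwards.
    if epoch >= decay_epochs:
        return lambda_end
    frac = epoch / decay_epochs
    return lambda_start + frac * (lambda_end - lambda_start)

# inside the training loop:
# loss = base_loss + lambda_schedule(epoch) * aux_loss
```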
Note that I think scaling the weight like this will have the same effect as changing the learning rate of your model. Currently, you are multiplying the loss by a factor, which scales the gradients and, in turn, the updates applied to the model parameters. You can achieve the same effect by scaling the learning rate instead, so you may want to look at how to adjust the learning rate: https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
There are scheduler options such as linear decay, exponential decay, and many more.
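As a rough sketch (assuming a recent PyTorch version that ships `torch.optim.lr_scheduler.LinearLR`; the model, base learning rate, and epoch count here are placeholders), a linear decay over 50 epochs could look like this. An `end_factor` of 1e-6 mimics scaling a weight of 10 down to 1e-5:

```python
import torch
from torch import nn

model = nn.Linear(10, 1)  # stand-in for your model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Linearly scale the learning rate from 1.0 * lr down to 1e-6 * lr over 50 steps.
scheduler = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1.0, end_factor=1e-6, total_iters=50
)

for epoch in range(100):
    # ... run your training iterations and call optimizer.step() here ...
    scheduler.step()  # advance the schedule once per epoch
```

Alternatively, `torch.optim.lr_scheduler.LambdaLR` lets you pass an arbitrary function of the epoch if you need a custom decay shape.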