I am trying to implement a self-paced learning model in PyTorch. Self-paced learning uses a regularizer and a per-sample loss weight to control the learning process. For example, the loss function may look like:

Loss = \sum_{i=1}^n (v_i * L(w, i, j)) + \lambda * \sum_{i=1}^n (v_i)

Sorry about the notation; I don't know how to render the equation nicely here, so I typed it in LaTeX syntax. As you can see, a weight v_i controls the learning process. To train the model, each round consists of two steps.

First, we hold w (the model parameters) fixed and learn v_i. After v_i is updated, we hold v_i fixed and learn w. So in each round we update these two sets of parameters alternately, fixing one to update the other. Is there an elegant way to do this in PyTorch? Any suggestion is appreciated.
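To make the question concrete, here is a minimal sketch of the alternating scheme I have in mind, using a hypothetical linear least-squares loss as L(w, i) and toggling `requires_grad_` so that each `backward()` only produces gradients for the set of parameters currently being updated (the data, dimensions, learning rates, and `lam` value are all made up for illustration):

```python
import torch

# Toy data (hypothetical): n samples, d features, linear model
torch.manual_seed(0)
n, d = 8, 3
X, y = torch.randn(n, d), torch.randn(n)

w = torch.randn(d, requires_grad=True)  # model parameters
v = torch.rand(n, requires_grad=True)   # per-sample weights v_i
lam = 0.1                               # the lambda in the regularizer

opt_w = torch.optim.SGD([w], lr=0.01)
opt_v = torch.optim.SGD([v], lr=0.01)

def objective():
    # sum_i v_i * L(w, i) + lambda * sum_i v_i,
    # with L(w, i) a per-sample squared error here
    per_sample = (X @ w - y) ** 2
    return (v * per_sample).sum() + lam * v.sum()

for epoch in range(5):
    # Step 1: freeze w, update v
    w.requires_grad_(False)
    v.requires_grad_(True)
    opt_v.zero_grad()
    objective().backward()  # gradients flow only into v
    opt_v.step()

    # Step 2: freeze v, update w
    v.requires_grad_(False)
    w.requires_grad_(True)
    opt_w.zero_grad()
    objective().backward()  # gradients flow only into w
    opt_w.step()
```

An alternative to toggling `requires_grad_` would be to call `.detach()` on the frozen tensor inside the objective for each step, but I am not sure which approach is considered idiomatic.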