How can one modify a model during training without affecting the computation graph that does backprop?

Actually, do you mind spelling out one detail that is particularly confusing to me? When we loop and do SGD, when exactly are new computation graphs created? That is the part I can't pin down, and it even led me to post the following question:

However, it seems that the answer to it is just "acting on mdl.W.data does not add anything to the computation graph". So when exactly is the computation graph formed?
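
To make the timeline concrete, here is a minimal sketch of what I understand so far (the single tensor w standing in for mdl.W is just a toy stand-in, not my actual model): the graph seems to be recorded during the forward pass, consumed by backward(), and the .data update never becomes part of it, so the next iteration would start with a brand-new graph. Is this picture correct?

```python
import torch

# Toy stand-in for mdl.W: one weight vector and some random data.
w = torch.randn(3, requires_grad=True)
x = torch.randn(5, 3)
y = torch.randn(5)
lr = 0.01

for step in range(3):
    # The computation graph is (I believe) built here, during the forward
    # pass: every op on a tensor that requires grad records a node as it runs.
    pred = x @ w
    loss = ((pred - y) ** 2).mean()

    # backward() traverses that graph and, by default, frees it afterwards.
    loss.backward()

    # Acting on w.data is invisible to autograd, so the SGD update itself
    # never appears in any graph (torch.no_grad() gives the same effect).
    w.data -= lr * w.grad
    w.grad.zero_()
    # The next iteration's forward pass builds a fresh graph from scratch.
```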