In-place in torch.nn.utils.spectral_norm

https://pytorch.org/docs/stable/_modules/torch/nn/utils/spectral_norm.html#spectral_norm

Question here:

#     Therefore, to make the change propagate back, we rely on two
#     important behaviors (also enforced via tests):
#       1. `DataParallel` doesn't clone storage if the broadcast tensor
#          is already on the correct device; and it makes sure that the
#          parallelized module is already on `device[0]`.
#       2. If the out tensor in `out=` kwarg has correct shape, it will
#          just fill in the values.
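Point 2 can be checked directly. Below is a minimal sketch of my own (not taken from the PyTorch test suite) verifying that when the `out=` tensor already has the correct shape, its storage is reused and the values are filled in place rather than a new tensor being allocated:

```python
import torch

# My own check of point 2: if `out` already has the right shape,
# the `out=` kwarg fills its existing storage in place.
out = torch.empty(3)
ptr_before = out.data_ptr()

torch.mv(torch.randn(3, 4), torch.randn(4), out=out)

# Same underlying storage: the values were filled in, nothing reallocated.
assert out.data_ptr() == ptr_before
```

If this holds, then as long as `u` and `v` keep their shapes, writing through `out=` mutates the original tensors that `DataParallel` broadcast.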

How does point 2 help here?
On this page, https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#operations, I don't see any explanation of whether `out=` is an in-place operation; it just fills the values into the tensor.
If `out=` is an in-place operation, why do we need to assign the results back to `v` and `u`?

v = normalize(torch.mv(weight_mat.t(), u), dim=0, eps=self.eps, out=v)
u = normalize(torch.mv(weight_mat, v), dim=0, eps=self.eps, out=u)

Instead, why not simply:

normalize(torch.mv(weight_mat.t(), u), dim=0, eps=self.eps, out=v)
normalize(torch.mv(weight_mat, v), dim=0, eps=self.eps, out=u)
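As a quick experiment of my own (not from the spectral_norm source), one can check whether `normalize(..., out=v)` returns the very same tensor object it wrote into, which would make the reassignment a pure rebinding:

```python
import torch
from torch.nn.functional import normalize

# My own check: does normalize(..., out=v) write into v and
# return that same tensor object?
weight_mat = torch.randn(4, 3)
u = torch.randn(4)
v = torch.empty(3)

result = normalize(torch.mv(weight_mat.t(), u), dim=0, eps=1e-12, out=v)

print(result is v)  # if True, `v = normalize(..., out=v)` rebinds v to itself
```

If `result is v` is `True`, the two spellings produce the same tensor contents, and the question is whether the explicit assignment serves some other purpose.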

Is there any difference?

Did I miss something?