`weight_norm` does not perform normalization?

I would like to row-normalize the weight of a fully connected layer using weight_norm, but the problem is that it does not seem to normalize the weight at all.

import torch
import torch.nn as nn

l = nn.Linear(2, 4, bias=False)
l.weight
# tensor([[ 0.1212, -0.1683],
#         [ 0.2399,  0.1841],
#         [ 0.2489,  0.6081],
#         [ 0.3547,  0.1641]], grad_fn=<MulBackward0>)

m = nn.utils.weight_norm(l)
x = torch.tensor([1., 1.])

l(x)
# tensor([-0.0471,  0.4241,  0.8571,  0.5188], grad_fn=<SqueezeBackward3>)

m(x)
# tensor([-0.0471,  0.4241,  0.8571,  0.5188], grad_fn=<SqueezeBackward3>)

m here behaves exactly the same as l, and even m.weight is unchanged. I expected m's weight to be normalized. What should I do then?
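For reference, weight_norm does seem to change something under the hood: if I inspect the parameters, the original weight is replaced by weight_g and weight_v, it is just the effective weight (and therefore the output) that stays the same. A minimal check, assuming the default name='weight', dim=0:

import torch
import torch.nn as nn

l = nn.Linear(2, 4, bias=False)
m = nn.utils.weight_norm(l)   # default: name='weight', dim=0

# The original 'weight' Parameter is removed; two new Parameters replace it.
[name for name, _ in m.named_parameters()]
# ['weight_g', 'weight_v']

m.weight_g.shape, m.weight_v.shape
# (torch.Size([4, 1]), torch.Size([4, 2]))

# 'weight' still exists, but as a tensor recomputed from weight_g and weight_v,
# which is why its value looks identical right after the call.
m.weight.shape
# torch.Size([4, 2])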

I might be mistaken, but I thought weight normalization is a reparametrization of the weight parameter, which should yield faster convergence. The output should be the same as without weight norm.
I just skimmed through the original paper again, but please correct me if I'm wrong.
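Concretely, weight norm splits the weight into a per-row magnitude g (weight_g) and a direction v (weight_v) and rebuilds it as w = g * v / ||v||, so right after wrapping the layer the effective weight, and therefore the output, is unchanged; only the parametrization the optimizer sees is different. A rough sketch of that identity, assuming the default dim=0 (one g per output row):

import torch
import torch.nn as nn

l = nn.Linear(2, 4, bias=False)
w_before = l.weight.detach().clone()

m = nn.utils.weight_norm(l)   # dim=0 -> one magnitude per output row

# Rebuild the effective weight from the new parameters: w = g * v / ||v||,
# with the norm of v taken per row.
w_rebuilt = m.weight_g * m.weight_v / m.weight_v.norm(dim=1, keepdim=True)

torch.allclose(w_rebuilt, m.weight)
# True

torch.allclose(w_rebuilt, w_before)
# True  (nothing has changed numerically yet)

# During training, the optimizer updates weight_g and weight_v instead of
# weight, which is the reparametrization described in the paper.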