For `torch.norm(Lval)`, how can I get around the error `Unexpected type(s): (int). Possible type(s): (Tensor)` without upsetting the back-propagation mechanism?
`Lval` seems to have an integer dtype. If you want to compute a norm, first cast it to a floating-point tensor, e.g. `torch.norm(Lval.float())`. Note that gradients cannot flow through integer tensors anyway, so the cast itself does not break back-propagation; if `Lval` is the output of a differentiable computation, make sure it is produced as a float tensor from the start.
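A minimal sketch of the fix, assuming `Lval` stands in for an integer-valued tensor (the name and values here are illustrative, not from the original thread):

```python
import torch

# Hypothetical stand-in for Lval: an integer tensor (dtype torch.int64).
Lval = torch.tensor([3, 4])

# torch.norm expects a floating-point tensor, so cast before calling it.
norm = torch.norm(Lval.float())  # sqrt(3**2 + 4**2) = 5.0
print(norm.item())  # 5.0
```

If `Lval` instead comes out of a differentiable pipeline, keep it as a float tensor (e.g. create it with `dtype=torch.float32` and `requires_grad=True`) so that `torch.norm` participates in autograd directly.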