I’m trying to implement the equivalent of the Keras max_norm constraint in my PyTorch convnet.
"maxnorm(m) will, if the L2-Norm of your weights exceeds m, scale your whole weight matrix by a factor that reduces the norm to m." It can also constrain the norm of every convolutional filter, which is what I want to do.
(Fuller explanation of max_norm here: )
I found the forum answer "Kernel Constraint similar to the one implemented in Keras", but I don’t understand how the clamp call implements max_norm, or how to use it to constrain the norm of individual convolutional filters.
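For context, here is a sketch of what I think the per-filter constraint should look like. The function name `max_norm_` is my own (not a torch API); it treats dim 0 of a `Conv2d` weight of shape `[out_channels, in_channels, kH, kW]` as the filter axis, mirroring what I understand Keras's `max_norm(m, axis=[0, 1, 2])` does for conv kernels:

```python
import torch

def max_norm_(weight, max_val=3.0, eps=1e-8):
    """Hypothetical helper: rescale each filter (row of dim 0) in place
    so its L2 norm does not exceed max_val, like Keras's max_norm."""
    with torch.no_grad():
        # Flatten each filter and compute its L2 norm -> shape [out_channels, 1]
        norms = weight.view(weight.size(0), -1).norm(2, dim=1, keepdim=True)
        # clamp caps norms at max_val; filters already under the cap get scale 1
        desired = norms.clamp(max=max_val)
        scale = desired / (norms + eps)
        # Broadcast the per-filter scale back over the remaining dimensions
        weight.mul_(scale.view(-1, *([1] * (weight.dim() - 1))))
```

If this is right, I assume it would be called after each `optimizer.step()`, e.g. `max_norm_(model.conv1.weight, max_val=3.0)`, since PyTorch has no built-in constraint hook like Keras does. Is that the intended pattern?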