Random quantization

I have come up with something I call NonScalarFilter. It is basically random quantization.
It is not used to make the model smaller; it is used in a similar way to noising.

import random

def nonscalarfilter(tensor, r1=0.0, r2=100.0):
	# Draw a random quantization step for this call; 'tensor' is assumed to be a torch.Tensor.
	precision = random.uniform(r1, r2)
	# Quantize: scale up, round to the nearest integer, scale back down.
	return tensor.mul(precision).round().div(precision)
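
As an illustration (my own example, not part of the original code): fixing r1 == r2 pins the precision, which makes it easy to see what a single pass of the filter does.

import torch

x = torch.tensor([0.3333, 0.7100, -1.2340])
print(nonscalarfilter(x, r1=10.0, r2=10.0))
# -> tensor([ 0.3000,  0.7000, -1.2000]) up to float rounding; detail finer than 1/10 is discarded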

NonScalarFilter is a mechanism similar to noising in that it removes information.

The idea is that by removing information during training of a neural network, one forces the neural network to learn to recreate that information.
Noising, however, has a local minimum: averaging, i.e. blurring, the pixels of a noisy image is a reasonable denoising mechanism.

NonScalarFilter has no such local minimum. When it is used during training, the neural network can therefore be improved much further.

Upscalers, autoencoders and denoisers are examples of neural networks that can benefit from NonScalarFilter.
It is only used during training and it is placed in the information flow as follows:

input_image -> nonscalarfilter -> encoder -> decoder -> output_image

or

input_image -> encoder -> nonscalarfilter -> decoder -> output_image

or even

input_image -> nonscalarfilter -> encoder -> nonscalarfilter -> decoder -> output_image

The resulting neural network will then learn to create information without having to deal with the local minimum of a denoiser blurring the image.
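
To make the placement concrete, here is a minimal training-step sketch of the first arrangement (filter on the input image) in PyTorch. The encoder, decoder, batch size and learning rate are placeholders of my own choosing; the point is only where nonscalarfilter sits.

import torch
import torch.nn as nn

# Placeholder encoder/decoder; the real architecture is not specified here.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

input_image = torch.rand(8, 3, 64, 64)  # stand-in for a real training batch

# input_image -> nonscalarfilter -> encoder -> decoder -> output_image
output_image = decoder(encoder(nonscalarfilter(input_image)))

# The target is the clean, unfiltered image, so the network has to
# recreate the information that the random quantization removed.
loss = loss_fn(output_image, input_image)

optimizer.zero_grad()
loss.backward()
optimizer.step()

The other two arrangements only move the nonscalarfilter call; keep in mind that round() passes no gradient, so a placement after the encoder needs something like a straight-through estimator if gradients should still reach the encoder.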

I just wanted to add:

NonScalarFilter forces the neural network to look further and take detail information with a pinch of salt. As a result, the network learns to look at the big picture and produce details.

I used this years ago to improve DeepFill2.