Self-assessing neural networks: An experimental new type of network layer

I invented a new type of layer for neural networks: it gives the network the ability to critically assess the reliability of its own features, enabling more informed decision-making.

Preliminary results: Use of this layer led to a minor improvement on the MNIST dataset, but the improvement is too small to rule out noise, so I can't yet say whether it is genuinely useful.

Even if this improvement turns out to be a fluke, there are many possible refinements that could make the approach more effective.

I only do research in my spare time, so I would like to hear some feedback on this idea before I invest more time in it.

I explain the idea and provide a code sample here.

I would appreciate it if you took a look and told me what you think.

Also, if you have any practical problems or benchmark test cases lying around, please give it a try and report whether it worked. Swapping a normal linear layer for a self-assessment layer is just a one-line change, as described in the article (see the sketch below).
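
For illustration, here is a minimal PyTorch sketch of what that swap looks like. The class name `SelfAssessmentLayer` and its body are only a simplified placeholder (a linear layer with a learned per-feature confidence gate), not the actual mechanism from the article; the point is just that it is a drop-in replacement with the same constructor signature as `nn.Linear`.

```python
import torch
import torch.nn as nn


class SelfAssessmentLayer(nn.Module):
    """Simplified stand-in for the layer described in the article.

    Drop-in replacement for nn.Linear: same constructor arguments,
    same input/output shapes. The body below only gates each output
    feature with a learned confidence weight; see the article for
    the real self-assessment mechanism.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Per-feature confidence estimate, learned alongside the weights.
        self.confidence = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        features = self.linear(x)
        # Scale each feature by how reliable the layer believes it to be.
        return features * torch.sigmoid(self.confidence)


class MnistNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The swap really is a one-line change:
        # self.hidden = nn.Linear(784, 128)
        self.hidden = SelfAssessmentLayer(784, 128)
        self.out = nn.Linear(128, 10)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x.flatten(1))))


# Quick shape check on a dummy MNIST-sized batch.
logits = MnistNet()(torch.randn(32, 1, 28, 28))
print(logits.shape)  # torch.Size([32, 10])
```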