An automatic mechanism for creating new layer types for PyTorch

IDEA: I want to offer you a method that puts the production of new layer types on an industrial scale.

PROBLEM: How to turn a neural network into a mathematical formula suited for creating an absolutely new type of layer. It could be used in TensorFlow, Keras, PyTorch, or any other framework.


You have trained your network for a particular industry application and invested a lot of computational power to make it work. This neural network solves a particular problem in a particular industry very well. Then:

  1. You freeze all layers in your neural network.

  2. Let’s suppose we have an MSE loss, for simplicity.

  3. Then you start training the target (label), looking for the INTEGRAL of the function your neural network computes. This will produce a mathematical formula describing the neural network. Let’s call this the ANTI-GRAD algorithm.

  4. After it’s done, you have a mathematical formula that can easily be turned into a new type of layer for a particular industry application, with a simple formula (you can tune the simplicity) for Chemistry, Economics, Medicine, etc.
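Steps 1–2 above can be sketched in PyTorch. The network `net` below is only a hypothetical stand-in for the trained model; the key parts are freezing every parameter and setting up the MSE loss:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the network you already trained (step 1 of the setup).
net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

# Step 1: freeze all layers so the network itself is no longer trained.
for param in net.parameters():
    param.requires_grad = False

# Step 2: MSE loss, as assumed in the text.
criterion = nn.MSELoss()

# Sanity check: no parameter will receive gradients anymore.
frozen = all(not p.requires_grad for p in net.parameters())
print(frozen)
```

With the parameters frozen, any subsequent `backward()` call will flow gradients only into tensors you explicitly mark as trainable, which is what the label-training step relies on.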


  1. Creation of new layer types suited for particular industries and applications.

  2. Research work can be enhanced. We are turning a neural network into classical mathematical language.

It would require a team of programmers to make it work, so I am looking for new opportunities and a team to make it happen.

I’m not sure I understand your proposal completely.
What do you mean by industrial scale? Would you like to deploy your model onto a production server?
If so, have a look at Torch Script or the C++ API.

No, @ptrblck, I am talking about an absolutely new algorithm, ANTI-GRAD. Sorry, when I am in euphoria I can’t write properly. If you or anybody else doubts it somehow, please state it here.
What is ANTI-GRAD? It’s an absolutely new algorithm that trains labels, looking for an integral, to unfold the mathematical formula behind the neural network. It’s the opposite of autograd, which calculates derivatives. My algorithm calculates integrals.
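For contrast, here is a minimal example of what autograd does today — given a function, it computes its derivative at a point:

```python
import torch

# Autograd: given y = f(x), compute dy/dx at a concrete x.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
y.backward()

# d(x^3)/dx = 3 * x^2 = 12 at x = 2.
print(x.grad)
```

The proposed ANTI-GRAD would go in the other direction: instead of differentiating a known function, it would recover a closed-form antiderivative-like description of the network, which is not something autograd provides.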
Why does it do that?
To find a formula that describes a neural network.
Why do we need that?

  1. We can create new layers.
  2. We can enhance neuroscience by using classical mathematics to describe the neural network.

So it’s an absolutely new approach. We have to do the following to make it work:

  1. We have to create an ANTI-GRAD algorithm that finds the integral of a given formula.
  2. We have to create a mechanism that makes it possible to train labels in search of the integral.
  3. We have to monitor the loss between our neural network and the labels we are training. When the loss is close to 0, we have a classical mathematical formula that describes the neural network.
  4. We have to make a mechanism that turns the complex mathematical formula into a simple one for easy conversion into a new type of layer. “Simple is better than complex” — Zen of Python.
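Steps 2–3 can be sketched with standard PyTorch: the labels themselves become the trainable object, the network stays frozen, and the MSE loss is driven toward 0. This is only a minimal illustration of "training labels against a frozen network"; the symbolic parts (steps 1 and 4 — actually recovering and simplifying a closed-form integral) would need additional machinery not shown here, and the network `net` is a hypothetical stand-in:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen stand-in for the trained network (any pretrained model would do).
net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
for p in net.parameters():
    p.requires_grad = False

# Inputs and the frozen network's outputs, which the labels must match.
x = torch.linspace(-1.0, 1.0, 64).unsqueeze(1)
with torch.no_grad():
    net_out = net(x)

# Step 2: the labels themselves are the trainable tensor.
labels = torch.zeros_like(net_out, requires_grad=True)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam([labels], lr=0.1)

# Step 3: drive the loss between network output and trained labels toward 0.
for step in range(200):
    optimizer.zero_grad()
    loss = criterion(labels, net_out)
    loss.backward()
    optimizer.step()

print(loss.item())  # should approach 0
```

Once the loss is near 0 the trained `labels` reproduce the network's outputs pointwise; turning that table of values into a classical formula is the open part of the proposal.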