According to this and this, backward and forward transformations are tensors/matrices that let you move back and forth between different coordinate systems. This simply means that, given two bases of a vector space, it's possible to:
- Build a vector in the new basis from its old-basis coordinates using the backward (B) transformation matrix
- Build a vector in the old basis from its new-basis coordinates using the forward (F) transformation matrix

The product F·B is the identity matrix, i.e. B = Inv(F).
The above images show the B and F transformation matrices for a 2-dimensional vector space. For an n-dimensional vector space, we have something analogous.
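As a sanity check, here is a minimal NumPy sketch of the relationship above. The matrix F (new-basis coordinates to old-basis coordinates) and the example vector are made up for illustration; the point is only that B = Inv(F) and F·B is the identity:

```python
import numpy as np

# Hypothetical 2-D example: columns of F are the new basis vectors
# expressed in the old basis.
F = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # forward: new-basis coords -> old-basis coords
B = np.linalg.inv(F)         # backward: old-basis coords -> new-basis coords

v_new = np.array([3.0, -1.0])   # a vector written in the new basis
v_old = F @ v_new               # the same vector in the old basis
v_back = B @ v_old              # back to the new basis

print(np.allclose(F @ B, np.eye(2)))  # True
print(np.allclose(v_back, v_new))     # True
```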
Since the features of an image can be thought of as belonging to an n-dimensional feature space, is it possible to make PyTorch learn the transformation matrices through a machine learning procedure? The two input images belong to two different views/feature spaces, as shown below.
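One minimal way to phrase "learn the transformation matrix" in PyTorch is a single bias-free linear layer, whose weight *is* a learnable n×n matrix. This is only a sketch on synthetic data (the feature dimension, the ground-truth matrix and the training setup are all made up to illustrate the idea, not a solution to the two-view problem):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 8  # hypothetical feature dimension

# Synthetic stand-in for two views: features_b = features_a @ true_F.T
true_F = torch.randn(n, n)
features_a = torch.randn(256, n)      # features in the "old" space
features_b = features_a @ true_F.T    # features in the "new" space

# A bias-free linear layer is exactly a learnable n x n matrix.
model = nn.Linear(n, n, bias=False)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(features_a), features_b)
    loss.backward()
    opt.step()

learned_F = model.weight.detach()
# The backward matrix is then the (pseudo-)inverse of the learned one:
learned_B = torch.linalg.pinv(learned_F)
print(loss.item())
```

With real image features you would of course replace the synthetic tensors with paired features from the two views.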
I don't have the answer to this; I'm new here!
But the first step of the problem would be to feed the information into the neural network (any better ideas on what to use?).
My first thought was to vectorize the matrix, but according to this paper, information would be lost along the way, and it also demands a lot of compute, so it's not very efficient:
Matrix Neural Networks
Gonna keep reading it; if I get to understand it, you'll know. The paper is about a neural network designed for exactly this job.
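For what it's worth, the "vectorize the matrix" step the paper argues against is just flattening. A tiny toy example (the 3×3 "image" is made up) shows what gets lost: entries that are vertical neighbours in the matrix end up far apart in the vector, so a plain fully-connected model has no built-in notion of adjacency:

```python
import numpy as np

img = np.arange(9).reshape(3, 3)   # toy 3x3 "image"
vec = img.flatten()                # vectorized: shape (9,)

# img[0, 0] and img[1, 0] are vertical neighbours in the matrix,
# but after flattening they sit 3 positions apart in the vector.
print(img)
print(vec)   # [0 1 2 3 4 5 6 7 8]
```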
I don't know how it would make sense: to go from one "feature space" to another and obtain the forward and backward matrices, you need to define functions between the feature spaces. For example, if you have your "eigenface space" and want to translate it into a "handwritten digit space", you may find a matrix that links the two spaces, but if you then use your eigenface space to recognize handwritten digits, you have to relearn the identification, and maybe change your space.