I am completely new to PyTorch and want to know whether it is suitable for developing new optimization ideas.
More precisely, I previously wrote a paper entitled “a new framework to train autoencoders using nonsmooth regularization”, where the code was implemented in MATLAB from scratch. Now I want to use the capabilities of PyTorch to develop the idea from that paper.
Specifically, I would like to know the following:
1- Can I define a dynamic cost function in PyTorch? In other words, suppose the cost function has a term like ||w-v||_F^2, where w is the weight matrix of a particular layer and v is a fixed matrix that is updated every epoch according to a specific rule.
2- Can we evaluate the gradient of such a dynamic cost function with respect to the network parameters?
3- Can we change the point at which the gradient is evaluated? In other words, suppose I want to evaluate the gradient at the point w+a*v, where a is a constant scalar and v is a constant matrix.
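For items 1 and 2, here is a minimal sketch of what I have in mind. The model, the update rule for v, and all sizes are placeholders, not the actual setup from the paper — the point is only that the penalty ||w-v||_F^2 is rebuilt each epoch with the current v, and autograd differentiates it together with the data term:

```python
import torch

# Hypothetical tiny model: one linear layer standing in for a single
# autoencoder layer (names and sizes are illustrative only).
layer = torch.nn.Linear(4, 4, bias=False)
v = torch.randn(4, 4)      # fixed matrix, updated once per epoch
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x = torch.randn(8, 4)      # dummy batch

for epoch in range(3):
    opt.zero_grad()
    recon_loss = ((layer(x) - x) ** 2).mean()
    # Dynamic penalty ||w - v||_F^2; v is a constant for autograd,
    # so the gradient flows only into layer.weight (items 1 and 2).
    penalty = ((layer.weight - v) ** 2).sum()
    loss = recon_loss + penalty
    loss.backward()        # gradients w.r.t. all network parameters
    opt.step()
    v = 0.9 * v            # placeholder for the epoch-wise rule for v
```

Because the loss is rebuilt on every iteration, changing v between epochs needs no special machinery; the graph is constructed fresh each time.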
- Not without re-evaluating the model. Most, if not all, automatic differentiation frameworks (as opposed to symbolic differentiation) only offer the derivative at the point where you evaluated the function, because they need the intermediate outputs. You can, of course, evaluate the model twice.
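To make the "evaluate the model twice" idea concrete, a sketch of evaluating the gradient at w+a*v: shift the weights in-place under `no_grad`, run a fresh forward/backward pass at the shifted point, then restore. The layer, loss, and constants here are all assumptions for illustration:

```python
import torch

layer = torch.nn.Linear(3, 3, bias=False)
v = torch.randn_like(layer.weight)   # constant matrix (assumed)
a = 0.5                              # constant scalar (assumed)
x = torch.randn(5, 3)                # dummy batch

with torch.no_grad():
    layer.weight += a * v            # move the parameters to w + a*v

loss = ((layer(x) - x) ** 2).mean()  # re-evaluate the model here
loss.backward()                      # gradient of the loss at w + a*v
grad_at_shifted = layer.weight.grad.clone()

with torch.no_grad():
    layer.weight -= a * v            # restore the original weights
```

The extra forward pass is the cost Tom mentions: autograd caches intermediates per evaluation, so a gradient at a different point requires a new evaluation there.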
Thank you, Tom, for your answer.