Solving a linear system of equations by iteration, given an initial point

Hi, I want to know: is there a function in PyTorch that can solve a linear system of equations by iteration, given an initial point? Thank you for your attention.

Hi
Can you give us more details on your optimization problem? Depending on your criteria and constraints, it may be possible to solve it directly with matrix operations.

Otherwise, to use an optimizer you must do the following. Assume x is the parameter you want to optimize, F is the linear function, and y is the desired output, so the problem is argmin_x Loss(F(x) - y), with x0 as your initial point.

You should build a “network” F which performs the linear operations you describe. Then:
x = x0.clone().requires_grad_(True)  # Variable is deprecated; use a tensor that requires grad
optimizer = Adam([x], lr=args.lr)
optimizer.zero_grad()
y_est = F(x)  # calling F directly is preferred over F.forward(x)
loss = criterion(y_est, y)
loss.backward()
optimizer.step()
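
In practice you would repeat the zero_grad / forward / backward / step cycle until the loss stops decreasing. A minimal self-contained sketch (the learning rate and iteration count are illustrative; x0, F, criterion, and y are assumed to be defined as above):

import torch
from torch import optim

x = x0.clone().requires_grad_(True)  # start from the initial point x0
optimizer = optim.Adam([x], lr=1e-2)

for _ in range(1000):
    optimizer.zero_grad()
    loss = criterion(F(x), y)  # how far F(x) is from the target y
    loss.backward()
    optimizer.step()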

Thank you for your reply! I want to solve Ax = y with no constraints, where A is a symmetric positive semi-definite matrix. Is there some function where I can input A, y, and an initial value of x, and it outputs the optimal solution for x? Actually, I can always get an approximate solution x0, so I want to use x0 as the initial value of x.

So is A given, or do you also need to solve for A? If it’s given, the optimal solution for x (i.e., x*) w.r.t. the mean squared error is x* = pinv(A) · y, where pinv is the pseudo-inverse. Also, y is a column-stacked matrix of all the y samples, and so is A.
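
In PyTorch, that closed form would look roughly like this (a minimal sketch using the torch.linalg API; A and y are assumed to be existing tensors):

import torch

# closed-form least-squares solution via the pseudo-inverse
x_star = torch.linalg.pinv(A) @ y

# equivalent, and usually better-conditioned numerically:
x_star = torch.linalg.lstsq(A, y).solution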

Otherwise, to use some form of gradient descent (assuming A is fixed/given), you set F (the network) to be a linear layer and set its weight to A, i.e., F.weight = A.
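
Concretely, that setup could look like the following (a minimal sketch; A is assumed to be a given square tensor, and because A is symmetric, x @ A.T equals A @ x):

import torch
import torch.nn as nn

n = A.shape[0]
F = nn.Linear(n, n, bias=False)  # F(x) computes x @ F.weight.T
with torch.no_grad():
    F.weight.copy_(A)            # for symmetric A this makes F(x) == A @ x
F.weight.requires_grad_(False)   # A is given, so it is not optimized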

Yes, you are right, A is given and does not need to be optimized. Using gradient descent to solve for x is OK, but its efficiency is very low, because it cannot exploit the facts that A is symmetric positive semi-definite and that x0 is close to the optimal solution. So, is there a function where I can input A, y, and x0 to solve it efficiently?

I wrote the closed-form solution above, i.e., x* = pinv(A) · y.

Oh, no. That function is too slow. Note that what I care about is efficiency, so I want to apply an iterative method.

Why can’t you use the iterative method I mentioned above?

Because in this problem, even when we have a good initial point, iteratively optimizing with SGD is slow too: it only achieves a linear convergence rate. In general, most toolkits use a more efficient iterative approach that can achieve a quadratic convergence rate.

Can you give an example of a specific toolkit and the approach used? What makes them converge at a quadratic rate?

For example, quasi-Newton methods, Newton’s method, the Gauss-Seidel approach, and so on.

Isn’t L-BFGS in the quasi-Newton family? If that works for you, you can just set the optimizer to optim.LBFGS (where I wrote Adam). You can also check all the other available optimizers in the torch.optim package.

You mean that I can replace Adam with the BFGS method? How can I do this? (I seem to see hope!)

It’s in the torch.optim package (you can see all the optimizers there; in addition to SGD and Adam, there’s also LBFGS, and more…)

https://pytorch.org/docs/stable/optim.html

To use LBFGS, your optimizer would be:
optimizer = optim.LBFGS([x], lr=args.lr)
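
One caveat: unlike Adam, LBFGS re-evaluates the objective several times per step, so optimizer.step() must be passed a closure that recomputes the loss. A minimal sketch, assuming A, y, and x0 are existing tensors:

import torch
from torch import optim

x = x0.clone().requires_grad_(True)         # start from the approximate solution x0
optimizer = optim.LBFGS([x], max_iter=100)  # max_iter is illustrative

def closure():
    optimizer.zero_grad()
    loss = (A @ x - y).pow(2).sum()  # squared residual of Ax = y
    loss.backward()
    return loss

optimizer.step(closure)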


You successfully solved my problem. Thank you for your reply and the discussion.

I guess you know you don’t need PyTorch for this, but I assume from your question that you want to use PyTorch. Is that right? If not, you can just use scipy.optimize.minimize with method='BFGS' or method='L-BFGS-B'. Another option is GEKKO, but that might be overkill for this; I’m not sure.
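
For reference, the SciPy route would look roughly like this (a sketch assuming A, y, and x0 are NumPy arrays, with y as a flat vector):

from scipy.optimize import minimize

def objective(x):
    r = A @ x - y
    return r @ r  # squared residual ||Ax - y||^2

def gradient(x):
    return 2 * A.T @ (A @ x - y)  # analytic gradient speeds up L-BFGS-B

result = minimize(objective, x0, jac=gradient, method='L-BFGS-B')
x_opt = result.x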

The dimension of my A is 2000×2000, and y is 2000×1. I need PyTorch because it can use the GPU to speed up my optimization.
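
At that size the GPU transfer is straightforward; a sketch, assuming A, y, and x0 are existing CPU tensors:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
A = A.to(device)
y = y.to(device)
x = x0.clone().to(device).requires_grad_(True)  # the LBFGS loop above then runs on the GPU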